Boundary AI is a comprehensive toolkit built primarily for AI engineers. At its core is BAML (Basically, A Made-up Language), a configuration language that streamlines how engineers work with LLMs (Large Language Models). With BAML, complex prompt templates become typed functions that are easier to run and test, eliminating parsing boilerplate and type errors; calling an LLM through BAML feels much like calling a regular function. New prompts can be tested instantly in supported IDEs, including BAML's VSCode Playground UI, and the toolkit also includes Boundary Studio, which monitors and tracks the performance of each LLM function over time.

BAML itself is written primarily in Rust. It supports OpenAI, Anthropic, Gemini, Mistral, and bring-your-own models, with plans to add non-generative models, and for deployment it generates Python or TypeScript code. Unlike other data-modeling libraries, BAML is typesafe, never obscures prompts, ships with an integrated playground, and can work with any model. The BAML compiler and the VSCode extension are free and open source; paid plans begin with the monitoring and improvement features of Boundary Studio.
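
To make the "typed function" idea concrete, here is a minimal sketch of what calling a BAML-defined function from the generated Python client might look like. The function name ExtractResume, the Resume type, and the import paths are assumptions for illustration; in practice the names come from whatever is declared in your .baml files before code generation.

```python
# Minimal sketch (assumed names): calling a BAML-defined function from the
# generated Python client. `ExtractResume` and `Resume` are hypothetical and
# would be declared in your .baml files; exact import paths may differ
# depending on the version of the generated client.
from baml_client import b             # generated client entry point (assumed)
from baml_client.types import Resume  # generated output type (assumed)

def main() -> None:
    resume_text = "Jane Doe. Skills: Python, Rust, prompt engineering."
    # The call reads like an ordinary typed function: prompt rendering,
    # the LLM request, and output parsing all happen behind this one line.
    resume: Resume = b.ExtractResume(resume_text)
    print(resume)

if __name__ == "__main__":
    main()
```

The appeal, as described above, is that the generated code handles prompt rendering and output parsing for you, which is what removes the usual parsing boilerplate and type errors.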