How does Bito Understand My Code?
Sneak Peek into the Inner Workings of Bito
Bito deploys a Vector Database locally on the user’s machine, bundled as part of the Bito IDE plug-in. This database stores Embeddings (vectors with over 1,000 dimensions) that represent text, function names, objects, and other elements of your codebase as points in a multi-dimensional vector space.
Then, when you give it a function name or ask it a question, that query is converted into a vector and compared to the nearby vectors in that space; the closest matches become the search results. In other words, it performs search not on keywords, but on meaning, and Vector Databases can do this kind of search very quickly.
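To make the idea concrete, here is a minimal sketch of embedding-based search. The toy bag-of-words "embedding" and the tiny vocabulary are purely illustrative stand-ins; real systems like Bito use learned model embeddings with 1,000+ dimensions and an optimized vector index.

```python
import math

# Toy vocabulary and "embedding": a bag-of-words count vector.
# This is a hypothetical stand-in for a real 1,000+-dimension model embedding.
VOCAB = ["parse", "config", "file", "user", "login", "auth"]

def embed(text):
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# "Index" a couple of function descriptions as vectors.
index = {
    "load_config": embed("parse the config file"),
    "check_login": embed("auth the user login"),
}

def search(query):
    # Embed the query and return the nearest indexed item by meaning.
    qv = embed(query)
    return max(index, key=lambda name: cosine(qv, index[name]))

print(search("where is the user auth code"))  # → check_login
```

Note that the query shares no keyword with the name `check_login`; the match comes from the vectors being close, which is the point of searching on meaning rather than text.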
Bito also uses an Agent Selection Framework that acts like an autonomous entity: it perceives its environment, makes decisions, and takes actions to achieve certain goals. It determines whether to run an embeddings comparison against your codebase, perform an action against Jira, or do something else entirely.
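The routing decision can be sketched as a simple dispatcher. The rules below are hypothetical and much cruder than an actual agent framework, which would typically use a model to classify the request, but they show the shape of the decision.

```python
# Hypothetical routing sketch: inspect the request and pick an action
# before any answer is generated. A real agent framework would use a
# model-based classifier, not keyword rules.
def select_action(query):
    q = query.lower()
    if "jira" in q or "ticket" in q:
        return "jira_action"        # e.g. look up or update an issue
    if any(w in q for w in ("function", "class", "code", "file")):
        return "embeddings_search"  # compare against codebase embeddings
    return "direct_llm"             # answer from the model alone

print(select_action("explain the parse_config function"))  # → embeddings_search
print(select_action("create a Jira ticket for this bug"))  # → jira_action
```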
Finally, Bito utilizes Large Language Models (LLMs) from OpenAI, Anthropic, and others that actually answer the question, leveraging the context provided by the Agent Selection Framework and the embeddings.
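The final step amounts to combining the retrieved code context with the user's question into a single prompt for the model. The function below is a hypothetical sketch of that assembly; the actual model call and Bito's real prompt format are omitted.

```python
# Hypothetical sketch: merge retrieved code snippets and the user's
# question into one prompt string for an LLM (model call omitted).
def build_prompt(question, context_snippets):
    context = "\n\n".join(context_snippets)
    return (
        "You are a coding assistant. Use the code context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What does load_config return?",
    ["def load_config(path):\n    return json.load(open(path))"],
)
print(prompt)
```

Because the retrieved snippets ride along with the question, the model can ground its answer in the user's actual code rather than in generic knowledge.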
This is what makes us stand out from other AI tools, such as ChatGPT and GitHub Copilot, that do not understand your entire codebase.
We’re making significant innovations in our AI Stack to simplify coding for everyone. To learn more, head over to Bito’s AI Stack documentation.