Getting started

Deploy the AI Code Review Agent in Bito Cloud or opt for the self-hosted service.

The AI Code Review Agent supports two deployment options:

  • Bito Cloud (fully managed)

  • Self-hosted service (run on your own infrastructure)

Each option comes with its own set of benefits and considerations.

This guide walks you through both options to help you determine which deployment model best fits your team’s needs.

Bito Cloud

Bito Cloud provides a managed environment for running the AI Code Review Agent, offering a seamless, hassle-free experience. This option is ideal for teams looking for quick deployment and minimal operational overhead.

Pros:

  • Simplicity: Enjoy a straightforward setup with a single-click installation process, making it easy to get started without technical hurdles.

  • Maintenance-Free: Bito Cloud takes care of all necessary updates and maintenance, ensuring your Agent always operates on the latest software version without any effort on your part.

  • Scalability: The platform is designed to easily scale, accommodating project growth effortlessly and ensuring reliable performance under varying loads.

Cons:

  • Handling of Pull Request Diffs: For analysis purposes, diffs from pull requests are temporarily stored on our servers.

Install/run using Bito Cloud


Self-hosted service

Self-hosted AI Code Review Agent offers a higher degree of control and customization, suited for organizations with specific requirements or those who prefer to manage their own infrastructure.

Pros:

  • Full Control: Self-hosting provides complete control over the deployment environment, allowing for extensive customization and the ability to integrate with existing systems as needed.

  • Privacy and Security: Keeping the AI Code Review Agent within your own infrastructure can enhance data security and privacy, as all information remains under your direct control.

Cons:

  • Setup Complexity: Establishing a self-hosted environment requires technical know-how and can be more complex than using a managed service, potentially leading to longer setup times.

  • Maintenance Responsibility: The responsibility of maintaining and updating the software falls entirely on your team, which includes ensuring the system is scaled appropriately to handle demand.

Install/run as a self-hosted service

Overview

On-demand, context-aware AI code reviews for GitHub, GitLab, and Bitbucket.

Bito’s AI Code Review Agent is the first agent built with Bito’s AI Agent framework and engine. It is an automated AI assistant (powered by Anthropic’s Claude 3.7 Sonnet) that reviews your team’s code: it spots bugs, code smells, and security vulnerabilities in pull/merge requests (PRs/MRs) and provides high-quality suggestions to fix them.

It seamlessly integrates with Git providers such as GitHub, GitLab, and Bitbucket, automatically posting recommendations directly as comments within the corresponding pull request. It includes real-time recommendations from static code analysis and OSS vulnerability tools such as fbinfer and OWASP Dependency-Check, and can include high-severity suggestions from other third-party tools you use, such as Snyk.

We also support GitHub (Self-Managed) and GitLab (Self-Managed).

The AI Code Review Agent acts as a set of specialized engineers, each analyzing a different aspect of your PR, such as performance, code structure, security, optimization, and scalability. By combining and filtering their results, the Agent provides much more detailed and insightful code reviews, improving review quality and saving you time.

The AI Code Review Agent helps engineering teams merge code faster while also keeping the code clean and up to standard, making sure it runs smoothly and follows best practices.

It ensures a secure and confidential experience without compromising on reliability. Bito neither reads nor stores your code, and none of your code is used for AI model training. Learn more about our privacy and security practices.

Through Bito's "AI that Understands Your Code" feature, the AI Code Review Agent can analyze relevant context from your entire repository, providing better context-aware analysis and suggestions. This tailored approach ensures a more personalized and contextually relevant code review experience.

To comprehend your code and its dependencies, we use Symbol Indexing, Abstract Syntax Trees (AST), and Embeddings. Each step feeds into the next: Symbol Indexing locates specific code snippets, AST parsing captures their structural context, and embedding vectors provide broader semantic insights. This approach ensures a detailed understanding of the code's functionality and its dependencies. For more information, see How does Bito's "AI that understands your code" work?
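The three-step pipeline can be sketched in miniature (a toy illustration, not Bito's actual implementation; the character-frequency "embedding" merely stands in for a real learned model):

```python
import ast

def index_symbols(source):
    """Symbol Indexing: map each function/class name to its AST node."""
    tree = ast.parse(source)
    return {node.name: node for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.ClassDef))}

def structural_context(node):
    """AST step: extract structural facts (arguments, called names)."""
    args = [a.arg for a in node.args.args] if isinstance(node, ast.FunctionDef) else []
    calls = [n.func.id for n in ast.walk(node)
             if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
    return {"name": node.name, "args": args, "calls": calls}

def embed(text):
    """Embedding step: a unit-normalized character-frequency vector,
    a stand-in for a real learned embedding model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

source = "def total(prices):\n    return sum(prices)\n"
symbols = index_symbols(source)              # 1) locate code snippets
ctx = structural_context(symbols["total"])   # 2) structural context via AST
vector = embed(source)                       # 3) semantic vector for retrieval
```

Each stage narrows or enriches what the next one sees: the symbol index finds the snippet, the AST exposes its arguments and calls, and the embedding makes it retrievable by semantic similarity.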

The AI Code Review Agent is built using Bito Dev Agents, an open framework and engine for building custom AI Agents for software developers. These agents understand code, can connect to your organization’s data and tools, and can be discovered and shared via a global registry.

Why use an AI Agent for code review?

In many organizations, senior developers spend approximately half of their time reviewing code changes in PRs to find potential issues. The AI Code Review Agent can help save this valuable time.

AI Code Review Agent speeds up PR merges by 89%, reduces regressions by 34%, and delivers 87% human-grade feedback.

However, it's important to remember that the AI Code Review Agent is designed to assist, not replace, senior software engineers. It takes care of many of the more mundane issues involved in code review, so senior engineers can focus on the business logic and how new development is aligned with your organization’s business goals.

Pricing details

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Learn more

Install/run using Bito Cloud

Deploy the AI Code Review Agent in Bito Cloud.

Bito Cloud offers a single-click solution for using the AI Code Review Agent, eliminating the need for any downloads on your machine. You can create multiple instances of the Agent, allowing each to be used with a different repository on a Git provider such as GitHub, GitLab, or Bitbucket.

We also support GitHub (Self-Managed) and GitLab (Self-Managed).

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Connect Bito to your Git provider

Select your Git provider from the options below and follow the step-by-step installation guide to seamlessly set up your AI Code Review Agent.

Welcome to Bito

Bito is an AI-powered code review tool that helps you catch bugs, security vulnerabilities, code smells, and other issues in your pull requests. By understanding your entire codebase, Bito provides context-aware, actionable suggestions to improve both code quality and security.

It includes real-time recommendations from dev tools you already use such as static code analysis, open source vulnerability scanners, linters, and secrets scanning tools (e.g., passwords, API keys, sensitive information, etc.).

Bito seamlessly integrates with GitHub, GitLab, and Bitbucket.

Quickstart guide

  1. Sign up for Bito: create your account at alpha.bito.ai to get started.

  2. Connect your Git provider: select your preferred Git platform and follow the guided setup to install the agent.

    Once installed, the agent will be linked to your repositories and ready to assist.

  3. Review pull requests: the AI agent will automatically review new pull requests and leave inline comments with suggestions. You can also manually trigger a review by commenting /review on any pull request.

  4. Chat with the agent: you can reply to comments posted by the Bito AI agent in a pull request to ask follow-up questions or request clarification. The agent will respond with context-aware answers to help you understand the feedback better.

  5. Configure agent settings

    To customize your agent, go to the Repositories page and click the Settings button next to the relevant agent. From there, you can choose the review feedback mode, enable or disable automatic reviews, define custom guidelines to align with your team’s standards, and more.

Helpful resources

Feature guides

Video library

Need help?

If you have any questions, feel free to email us at [email protected]

Get a 14-day FREE trial of Bito's AI Code Review Agent.

Guide for GitHub

Guide for GitHub (Self-Managed)

Guide for GitLab

Guide for GitLab (Self-Managed)

Guide for Bitbucket

Privacy and security

Bito doesn't read or store your code. Nor do we use your code for AI model training.

This document explains some of Bito's privacy and security practices. Our Trust Center outlines our accreditations (SOC 2 Type II) and security policies. You can read our full Privacy Policy at https://bito.ai/privacy-policy/.

Bito AI

Security is top of mind at Bito, especially when it comes to your code. A fundamental approach we have taken is to allow you to decide where you want to store your code, either locally on your machine, in your cloud, or on Bito’s cloud (coming soon). We do not store any code, code snippets, indexes or embedding vectors on Bito’s servers unless you expressly allow that. Importantly, our AI partners do not store any of this information.

All requests are transmitted over HTTPS and are fully encrypted.

None of your code or AI requests are used for AI model training. None of your code or AI requests are stored by our AI partners. Our AI model partners are OpenAI, Anthropic, and Google. Here are their policies where they state that they do not store or train on data related to API access (we access all AI models via APIs):

  1. OpenAI: https://openai.com/enterprise-privacy/

  2. Anthropic: https://www.anthropic.com/uk-government-internal-ai-safety-policy-response/data-input-controls-and-audit

  3. Google Cloud: https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-unveils-ai-and-ml-privacy-commitment (5th paragraph)

AI requests, including code snippets, that you send to Bito are sent to Bito's servers for processing so that we can respond with an answer.

Interactions with Bito AI are auto-moderated and managed for toxicity and harmful inputs and outputs.

Any response generated by the Bito IDE AI Assistant is stored locally on your machine to show the history in Bito UI. You can clear the history anytime you want from the Bito UI.

SOC 2 Type II Compliance

Bito is SOC 2 Type II compliant. This certification reinforces our commitment to safeguarding user data by adhering to strict security, availability, and confidentiality standards. SOC 2 Type II compliance is an independent, rigorous audit that evaluates how well an organization implements and follows these security practices over time.

Our SOC 2 Type II compliance means:

  • Enhanced Data Security: We consistently implement robust controls to protect your data from unauthorized access and ensure it remains secure.

  • Operational Excellence: Our processes are designed to maintain high availability and reliability, ensuring uninterrupted service.

  • Regular Monitoring and Testing: We conduct continuous monitoring and regular internal reviews to uphold the highest security standards.

This certification is an assurance that Bito operates with a high level of trust and transparency, providing you with a secure environment for your code and data.

For any further questions regarding our SOC 2 Type II compliance or to request a copy of the audit report, please reach out to [email protected]

Code Flow through Bito’s System

AI Code Review Agent

When you use the self-hosted Docker version set up in your VPC, Bito checks out the diff and clones the repo inside the Docker image for static analysis and to determine relevant code context for the review. This context and the diff are passed to Bito's system. The request is then sent to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito; no code is retained by the LLM. Bito then receives the response, processes it (such as formatting), and returns it to your self-hosted Docker instance, which posts it to your Git provider. Neither the original query nor the results are retained. After each code review is completed, the diff and the checked-out repo are deleted.

If you use Bito Cloud to run the AI Code Review Agent, it works similarly to the self-hosted version. Bito ephemerally checks out the diff and clones the repo for static analysis and to determine the relevant code context for the review. This context and the diff are passed to Bito's system. The request is then sent by Bito to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito; no code is retained by the LLM. Bito then receives the response, processes it (such as formatting), and posts it to your Git provider. Neither the original query nor the results are retained. After each code review is completed, the diff and the checked-out repo are deleted.
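The ephemeral flow described above can be sketched as follows (an illustrative outline only, not Bito's actual code; the helper functions clone_repo, get_diff, call_llm, and post_comment are assumptions standing in for the real integrations):

```python
import shutil
import tempfile

def run_code_review(clone_repo, get_diff, call_llm, post_comment):
    """Ephemeral review flow: check out the diff and repo, ask the LLM,
    post the result to the Git provider, then delete everything so no
    code is retained after the review completes."""
    workdir = tempfile.mkdtemp(prefix="bito-review-")
    try:
        repo_path = clone_repo(workdir)       # clone for context and static analysis
        diff = get_diff(repo_path)            # the PR diff under review
        feedback = call_llm(f"Review this diff:\n{diff}")  # LLM sees the prompt only
        post_comment(feedback)                # review lands as a PR comment
        return feedback
    finally:
        shutil.rmtree(workdir)                # diff and checked-out repo deleted
```

The try/finally structure mirrors the retention guarantee: whether the review succeeds or fails, the working directory holding the code is removed.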

AI Chat and Code Completions

When we receive an AI request from a user, it is processed by Bito's system (such as adding relevant context and determining the Large Language Model (LLM) to use). However, the original query is not retained. The request is then sent to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito. Bito then receives the response, processes it (such as formatting), and returns it to the user’s machine.

For enterprises, we can connect to your own private LLM accounts, including but not limited to OpenAI, Google Cloud, and Anthropic, or third-party services such as AWS Bedrock and Azure OpenAI. This way all data goes through your own accounts or Virtual Private Cloud (VPC), ensuring enhanced control and security.

Data and Business Privacy Policy

In line with Bito's commitment to transparency and adherence to data privacy standards, our comprehensive data and business privacy policy is integrated into our practices. Our complete Terms of Use, including the Privacy Policy, are available at https://bito.ai/terms-of-use/, with our principal licensing information detailed at https://bito.ai/terms-of-service/.

Data Retention Policy

Our data retention policy is carefully designed to comply with legal standards and to respect our customers' privacy concerns. The policy is categorized into four levels of data:

  1. Relationship and Usage Meta Data: This includes all data related to the customer's interaction with Bito, such as address, billing amounts, user account data (name and email), and usage metrics (number of queries made, time of day, length of query, etc.). This category of data is retained indefinitely for ongoing service improvement and customer support.

  2. Bito Business Data: Includes customer-created templates and settings. This data is deleted 90 days after the end of the business relationship with Bito.

  3. Confidential Customer Business Data: This includes code, code artifacts, and other organization-owned data such as Jira, Confluence, etc. This data is either stored on-prem/locally on the customer’s machines, or, if in the cloud, is deleted at the end of the business relationship with Bito.

  4. AI Requests: Data in an AI request to Bito’s AI system. AI requests are neither retained nor viewed by Bito. We ensure the confidentiality of your AI queries; Bito and our LLM partners do not store your code, and none of your data is used for model training. All requests are transmitted via HTTPS and are fully encrypted.

Sub-processor

Bito uses the following third-party services: Amazon AWS, Anthropic, Clearbit, GitHub, Google Analytics, Google Cloud, HelpScout, HubSpot, Microsoft Azure, Mixpanel, OpenAI, SendGrid, SiteGround, and Slack for infrastructure, support, and functional capabilities.

Personal Data

Bito follows industry-standard practices for protecting your e-mail and other personal details. Our password-less login process, which requires a one-time passcode sent to your e-mail for every login, helps keep your account secure.

If you have any questions about our security and privacy, please email [email protected]



Code review analytics

Get in-depth insights into your code review process.

The user-friendly dashboards help you track key metrics such as pull requests reviewed, issues found, lines of code reviewed, and understand individual contributions.

Code Review Analytics dashboard

CLI vs webhooks service

From one-time reviews to continuous automated reviews.

On your machine or in a private cloud, you can run the AI Code Review Agent via either the CLI or the webhooks service. This guide explains the key differences between the two modes and when to use each.

Difference Between CLI and webhooks service

The main difference between the CLI and the webhooks service lies in their operational approach and purpose. In CLI mode, the Docker container runs a one-time code review. This mode is ideal for isolated, single-instance analyses where a quick, direct review of the code is needed.

The webhooks service, on the other hand, is designed for continuous operation. When set to webhooks service mode, the AI Code Review Agent remains online and active at a specified URL. This continuous operation allows it to respond automatically whenever a pull request is opened in a repository: the Git provider notifies the server, triggering the AI Code Review Agent to analyze the pull request and post its review as a comment directly on it.
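The notify-and-trigger loop just described can be sketched as a small event handler (an illustrative assumption, not Bito's actual service; real GitHub/GitLab/Bitbucket webhook payloads have provider-specific shapes, simplified here to an "action" field and a "pull_request" object):

```python
import json

def handle_webhook(payload_json, trigger_review):
    """Webhooks-service sketch: when the Git provider reports a newly
    opened pull request, kick off a review; ignore all other events."""
    event = json.loads(payload_json)
    if event.get("action") == "opened" and "pull_request" in event:
        trigger_review(event["pull_request"]["url"])  # start the agent's review
        return "review triggered"
    return "ignored"
```

In a real deployment this handler would sit behind the Agent's specified URL and run on every delivery, which is what makes the mode continuous rather than one-time.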

When to Use CLI and When to Use webhooks service

Selecting the appropriate mode for code review with the AI Code Review Agent depends largely on the nature and frequency of your code review needs.

CLI: Ideal for Specific, One-Time Reviews

CLI mode is best suited for scenarios requiring immediate, one-time code reviews. It's particularly effective for:

  • Conducting quick assessments of specific pull requests.

  • Performing periodic, scheduled code analyses.

  • Reviewing code in environments with limited or no continuous integration support.

  • Integrating with batch processing scripts for ad-hoc analysis.

  • Using in educational settings to demonstrate code review practices.

  • Experimenting with different code review configurations.

  • Reviewing code on local setups or for personal projects.

  • Performing a final check before pushing code to a repository.

CLI mode stands out for its simplicity and is perfect for standalone tasks where a single, direct execution of the code review process is all that's needed.

Webhooks service: For Continuous, Automated Reviews

Webhooks service, on the other hand, is the go-to choice for continuous code review processes. It excels in:

  • Continuously monitoring all pull requests in a repository.

  • Providing instant feedback in collaborative projects.

  • Seamlessly integrating with CI/CD pipelines for automated reviews.

  • Performing automated code quality checks in team environments.

  • Conducting real-time security scans on new pull requests.

  • Ensuring adherence to coding standards in every pull request.

  • Streamlining the code review process in large-scale projects.

  • Maintaining consistency in code review across multiple projects.

  • Enhancing workflows in remote or distributed development teams.

  • Offering prompt feedback in agile development settings.

Webhooks service is indispensable in active development environments where consistent monitoring and immediate feedback are critical. It automates the code review process, integrating seamlessly into the workflow and eliminating the need for manual initiation of code reviews.


Key features

Explore the powerful capabilities of the AI Code Review Agent.


Features overview

A quick look at powerful features of Bito's AI Code Review Agent—click to jump to details.

  1. AI that understands your code

  2. One-click setup for GitHub, GitLab, and Bitbucket

  3. Automated and manually-triggered AI code reviews

  4. Pull request summary

  5. Changelist

  6. One-click to accept suggestions

  7. Chat with AI Code Review Agent

  8. Incremental code reviews

  9. Code review analytics

  10. Custom code review rules and guidelines

  11. Multiple specialized engineers for targeted code analysis

  12. Integrated feedback from dev tools you use

  13. Supports all major programming languages

  14. Enterprise-grade security


AI that understands your code

The AI Code Review Agent understands the code changes in pull requests. It analyzes relevant context from your entire repository, resulting in more accurate and helpful code reviews.

To comprehend your code and its dependencies, it uses Symbol Indexing, Abstract Syntax Trees (AST), and Embeddings.

One-click setup for GitHub, GitLab, and Bitbucket

Bito Cloud offers a one-click solution for using the AI Code Review Agent, eliminating the need for any downloads on your machine.

Bito supports integration with the following Git providers:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

Automated and manually-triggered AI code reviews

By default, the AI Code Review Agent automatically reviews all new pull requests and provides detailed feedback. To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

Pull request summary

Get a concise overview of your pull request (PR) directly in the description section, making it easier to understand the code changes at a glance. This includes a summary of the PR, the type of code changes, whether unit tests were added, and the estimated effort required for review.

The agent evaluates the complexity and quality of the changes to estimate the effort required to review them, providing reviewers the ability to plan their schedule better. For more information, see What is "Estimated effort to review" in code review output?

Summary of Pull Request in the description section.

Changelist

A tabular view that displays key changes in a pull request, making it easy to spot important updates at a glance without reviewing every detail. Changelist categorizes modifications and highlights impacted files, giving you a quick, comprehensive summary of what has changed.

Changelist in AI Code Review Agent's feedback.

One-click to accept suggestions

The AI-generated code review feedback is posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

You can accept the suggestions with a single click, and the changes will be added as a new commit to the pull request.

One-click to accept AI code review suggestions

Chat with AI Code Review Agent

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

Real-time collaboration with the AI Code Review Agent accelerates your development cycle. By delivering immediate, actionable insights, it eliminates the delays typically experienced with human reviews. Developers can engage directly with the Agent to clarify recommendations on the spot, ensuring that any issues are addressed swiftly and accurately.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

Incremental code reviews

The AI Code Review Agent automatically reviews only the recent changes each time you push new commits to a pull request. This saves time and reduces costs by avoiding unnecessary re-reviews of all files.

You can enable or disable incremental reviews at the Agent instance level or workspace level, giving your team more control over the review process. Contact [email protected] to customize this feature according to your team's needs.
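Conceptually, an incremental review examines only the commits pushed after the last one reviewed; a minimal sketch (not Bito's implementation) of that selection logic:

```python
def incremental_changes(commits, last_reviewed):
    """Incremental review sketch: on a new push, examine only the commits
    that arrived after the last reviewed one; if nothing has been reviewed
    yet, review the whole list."""
    if last_reviewed in commits:
        return commits[commits.index(last_reviewed) + 1:]
    return commits  # first review of this PR: look at everything
```

The full-review mode corresponds to always returning the whole list; the incremental mode is the slice after the checkpoint.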

Code review analytics

Get in-depth insights into your org’s code reviews with the user-friendly Code Review Analytics dashboard. Track key metrics such as pull requests reviewed, issues found, and lines of code reviewed, and understand individual contributions.

Code Review Analytics dashboard

Custom code review rules and guidelines

The AI Code Review Agent offers a flexible solution for teams looking to enforce custom code review rules, standards, and guidelines tailored to their unique development practices. Whether your team follows specific coding conventions or industry best practices, you can customize the Agent to suit your needs.

We support two ways to customize AI Code Review Agent’s suggestions:

  1. Provide feedback on Bito-reported issues in pull requests, and the AI Code Review Agent automatically adapts by creating code review rules to prevent similar suggestions in the future.

  2. Submit your custom code review guidelines to Bito, and we will implement them within your Bito workspace.

Multiple specialized engineers for targeted code analysis

The AI Code Review Agent acts as a team of specialized engineers, each analyzing different aspects of your pull request. You'll get specific advice for improving your code, right down to the exact line in each file.

The areas of analysis include:

  • Security

  • Performance

  • Scalability

  • Optimization

  • Will this change break anything? (impact analysis based on the diff)

  • Code structure and formatting (e.g., tab, spaces)

  • Basic coding standards, including variable naming (e.g., flagging non-descriptive names like i, j, k)

This multifaceted analysis results in more detailed and accurate code reviews, saving you time and improving code quality.

Integrated feedback from dev tools you use

Elevate your code reviews by harnessing the power of the development tools you already trust. Bito's AI Code Review Agent seamlessly integrates feedback from essential tools including:

  • Static code analysis

  • Open source security vulnerabilities check

  • Linter integrations

  • Secrets scanning (e.g., passwords, API keys, sensitive information)

Static code analysis

Using tools like Facebook’s open-source fbinfer (available out of the box), the Agent dives deep into your code—tailored to each language—and suggests actionable fixes. You can also configure additional tools you use for a more customized analysis experience.

Static Code Analysis feedback highlighting suggestions and fixes.

Open source security vulnerabilities check

The AI Code Review Agent checks in real time for the latest high-severity security vulnerabilities in your code, using OWASP Dependency-Check (available out of the box). Additional tools such as Snyk or GitHub Dependabot can also be configured.

Showing high-severity security vulnerabilities report.

Linter integrations

Our integrated linter support reviews your code for consistency and adherence to best practices. By catching common errors early, it ensures your code stays clean, maintainable, and aligned with modern development standards.

Secrets scanning

Safeguard your sensitive data effortlessly. With built-in scanning capabilities, the Agent checks your code for exposed passwords, API keys, and other confidential information—helping to secure your codebase throughout the development lifecycle.

Supports all major programming languages

No matter if you're coding in Python, JavaScript, Java, C++, or beyond, our AI Code Review Agent has you covered. It understands the unique syntax and best practices of every popular language, delivering tailored insights that help you write cleaner, more efficient code—every time.

Enterprise-grade security

Bito and third-party LLM providers never store or use your code, prompts, or any other data for model training or any other purpose.

Bito is SOC 2 Type II compliant. This certification reinforces our commitment to safeguarding user data by adhering to strict security, availability, and confidentiality standards. SOC 2 Type II compliance is an independent, rigorous audit that evaluates how well an organization implements and follows these security practices over time.

Delete unused Agent instances

Easily delete Agent instances you no longer need.

If you no longer need an AI Code Review Agent instance, you can delete it to keep your workspace organized. Follow the steps below to quickly remove any unused Agents.

  1. Log in to Bito Cloud and select a workspace to get started.

  2. From the left sidebar, select Code Review Agents.

    If your Bito workspace is connected to your GitHub/GitLab/Bitbucket account, a list of AI Code Review Agent instances configured in your workspace will appear.

  3. Before deleting an Agent, ensure that any repositories currently using it are reassigned to another Agent; otherwise, a warning popup will appear.

  4. Locate the Agent you wish to delete and click the Delete button next to it.

Note: The Default Agent (provided by Bito) cannot be deleted.

Clone an Agent instance

Easily duplicate Agent configurations for faster setup.

Save time and effort by quickly creating a new instance using the configuration settings of an existing one. It’s a fast and simple way to set up multiple Agent instances without having to reconfigure each one.

Follow the steps below to get started:

  1. Log in to Bito Cloud and select a workspace to get started.

  2. From the left sidebar, select Code Review Agents.

  3. If your Bito workspace is connected to your GitHub/GitLab/Bitbucket account, a list of AI Code Review Agent instances configured in your workspace will appear. Locate the instance you wish to duplicate and click the Clone button next to it.

  4. An Agent configuration form will open, pre-populated with the existing instance's field values. You can edit these values as needed.

  5. Click Select repositories to choose Git repositories for the new Agent.

  6. To enable code review for a specific repository, simply select its corresponding checkbox. You can also enable repositories later, after the Agent has been created. Once done, click Save and continue to save the new Agent configuration.

  7. When you save the configuration, your new Agent instance will be added and available on the Code Review Agents page.

AI that Understands Your Code

Work on your code with AI that knows your code!

Chat session history

Bito automatically saves the chat session history. The session history is stored locally on your computer. You can return to any chat session and continue the AI conversation from where you left off. Bito automatically maintains and restores the memory of the loaded chat session.

You can "Delete" any saved chat session or share a permalink to the session with your coworkers.

Here is the video overview of accessing and managing the session history.

How it Works?

Bito indexes your code locally using AI

When you open a project in Visual Studio Code or JetBrains IDEs, Bito lets you enable the indexing of code files from that project’s folder. This indexing mechanism leverages our new AI Stack, which enables Bito to understand your entire codebase and answer any questions regarding it.

The index is stored locally on your system to provide better performance while maintaining the security/privacy of your private code.

It takes about 12 minutes per 10MB of code for Bito to understand your repo, as the index is built locally.

How to Ask Questions?

Once indexing is complete, you can ask any question in the Bito chatbox. Bito uses AI to determine if you are asking about something in your codebase. If Bito is confident, it grabs the relevant parts of your code from our index and feeds them to the Large Language Models (LLMs) for accurate answers. But if it's unsure, Bito will ask you to confirm before proceeding.

If you ask a general question (not related to your codebase), Bito will send your request directly to our LLM without first looking for the appropriate local context.

However, if you want to ask a question about your code no matter what, then you can use specific keywords such as "my code", "my repo", "my project", "my workspace", etc., in your question.

The complete list of these keywords is given on our Available Keywords page.

Once Bito sees any input containing these keywords, it will use the index to identify relevant portions of code or content in your folder and use it for processing your question, query, or task.
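The keyword-based routing described above can be sketched as follows. This is a hypothetical illustration, not Bito's actual implementation: the `route` function and its return labels are invented for clarity, and only the English keyword list comes from this page.

```python
# Hypothetical sketch of keyword-based routing; Bito's real logic also uses
# AI-based intent detection, not just substring matching.
KEYWORDS = ["my code", "my repo", "my project", "my workspace"]  # from the docs

def route(question: str) -> str:
    """Return 'local-index' when the question targets the user's codebase,
    otherwise 'llm-direct'."""
    q = question.lower()
    if any(kw in q for kw in KEYWORDS):
        return "local-index"   # retrieve relevant code from the local index first
    return "llm-direct"        # send the question straight to the LLM

print(route("In my code explain the file apiUser.js"))  # local-index
print(route("What is a vector database?"))              # llm-direct
```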

Security of your code

As usual, security is top of mind at Bito, especially when it comes to your code. A fundamental approach we have taken is to keep all code on your machine and not store any code, code snippets, indexes, or embedding vectors on Bito’s servers or with our API partners. In addition, none of your code is used for AI model training.

Learn more about Bito’s Privacy and Security Practices.

AI Chat in Bito

Bito AI chat is the most versatile and flexible way to use AI assistance. You can type any technical question to generate the best possible response. Check out these use cases and examples to understand all you can do with Bito.

By default, the Team Plan utilizes Advanced AI Models to process queries. You can easily switch between Basic and Advanced AI models anytime.

To use AI Chat, type your question in the chat box and press 'Enter' to send. You can add a new line in the question with 'SHIFT+ENTER'.

Bito starts streaming answers within a few seconds, depending on the size and complexity of the prompt.

Bito makes it super easy to use the answer generated by AI and to take a number of actions on it.

Copy Answer

Copy the answer to the clipboard.

Regenerate Answer

AI may not give the best answer on the first attempt every time. You can ask Bito AI to regenerate the answer by clicking the "Regenerate" button next to the answer.

Copy or Insert Code

If the AI answer includes a code snippet, Bito automatically identifies and displays code in a separate block. This makes it easy to copy the code to the clipboard or insert it in the code editor.

Rate Response

Vote the response "Up" or "Down". This feedback helps Bito improve prompt handling.

Modify Last Prompt

Many of these commands can be executed with keyboard shortcuts, documented on the Keyboard shortcuts page.

Overview

AI Code Completions

Bito’s "AI Code Completions" capabilities offer real-time, personalized code suggestions as you type. Powered by the latest best-in-class Large Language Models (such as GPT-4o mini and Google PaLM 2, a 540B-parameter model compared to Copilot’s 12B-parameter model), Bito understands your codebase and provides contextually accurate code suggestions right from within your IDE. Bito’s models are also trained on data as recent as two months old, while many other models are trained on data that is 12-18 months old.

Speed up your development workflow with AI-assisted code completion. Watch as lines of code, full functions, or even entire code blocks are generated for you on the fly.

AI Code Completions are disabled by default. Learn how to enable or disable AI Code Completions in settings.

Supporting a wide range of over 35 programming languages—from Python to SQL, from C++ to Go and JavaScript—this feature is designed to make coding faster, easier, and more efficient for developers like you.

Let's dive in to see how it works.

Users on Bito's Free Plan receive 300 free AI Code Completions per month, with a daily limit of 75 completions. In contrast, paid users can enjoy unlimited AI Code Completions each month, subject to the fair use policy.

Learn more about Bito's paid plans on our pricing page.

Bito's AI stack

Learn About AI Technologies & Concepts Powering Bito

Account and settings

Manage your Bito workspace, members and the personal settings

Overview

AI that Understands Your Code

How it Works?

Bito indexes your code locally using AI

Available Keywords

Keywords to invoke AI that understands your code

Example Questions

What type of questions can be asked?

How does Bito Understand My Code?

Sneak peek into the inner workings of Bito

Using in Visual Studio Code

AI that understands your code in VS Code

Using in JetBrains IDEs

AI that understands your code in JetBrains IDEs (e.g., PyCharm)

Managing Index Size

Exclude unnecessary files and folders from repo to index faster!

FAQs

Answers to popular questions

Embeddings

Vector databases

Indexing

Generative AI

Large Language Models (LLM)

LLM tokens

LLM parameters

Retrieval Augmented Generation (RAG)

Prompts

Prompt engineering

Creating a Bito account

Learn how to sign up or log in to Bito

Workspace

Learn how to create, join, or change workspace

Managing workspace members

Invite coworkers and manage their workspace membership

Setting AI output language

Personalize Bito to speak your language

Managing user access levels

Learn about different access levels and permissions

Access key

An alternative to standard email and OTP authentication



Supported programming languages

Supporting Over 35 Programming Languages Such as Python, SQL, C++, Go, JavaScript, and More

Bito can suggest code for these programming languages:

  • C

  • C++

  • C#

  • CSS

  • Clojure

  • Dart

  • Elixir

  • Erlang

  • Fortran

  • Go

  • GoogleSQL

  • Groovy

  • Haskell

  • HTML

  • Java

  • JavaScript

  • JavaServer Pages

  • Kotlin

  • Lean (proof assistant)

  • Lua

  • Objective-C

  • OCaml

  • Perl

  • PHP

  • Python

  • R

  • Ruby

  • Rust

  • Scala

  • Shell script

  • Solidity

  • SQL

  • Swift

  • TypeScript

  • XML

  • Verilog

  • YAML

Configuration

Manage Bito CLI settings

bito config [flags]

  • run bito config -l or bito config --list to list all config variables and values.

  • run bito config -e or bito config --edit to open the config file in default editor.

Sample Configuration

bito:
 access_key: ""
 email: [email protected]
 
 preferred_ai_model: ADVANCED
settings:
 auto_update: true
 max_context_entries: 20

What is an Access Key and How to Get it?

An Access Key is an alternative authentication mechanism to Email & OTP based authentication. You can use an Access Key in Bito CLI to access various functionalities such as Bito AI Chat. Here’s a guide on how to create an Access Key. After creating the Access Key, use it in the config file mentioned above, for example: access_key: “YOUR_ACCESS_KEY_HERE”

An Access Key can be persisted in Bito CLI by adding it to the config file using bito config -e. A persisted Access Key can be overridden by running bito -k <access-key> or bito --key <access-key> for a transient session (a session that lasts only for a short time).

Preferred AI Model Type

By default, the AI Model Type is set to ADVANCED; it can be overridden by running bito -m <BASIC/ADVANCED>. The model type is used for AI queries in the current session and can be set to BASIC or ADVANCED (case insensitive).

"ADVANCED" refers to best-in-class AI models such as GPT-4o and Claude 3.5 Sonnet, while "BASIC" refers to AI models like GPT-4o mini and similar models.

When using Basic AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with Advanced AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that Advanced models can process your entire code files, leading to more accurate answers.
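The character budgets above can be checked with a quick sketch. The `fits_in_context` helper is hypothetical, invented for illustration; only the 40,000 and 240,000 character limits come from this page.

```python
# Illustrative only: limits taken from the docs (40,000 chars for Basic,
# 240,000 for Advanced); the helper function is hypothetical.
LIMITS = {"BASIC": 40_000, "ADVANCED": 240_000}

def fits_in_context(prompt: str, chat_memory: str, model_type: str) -> bool:
    """Check whether the prompt plus chat memory fit the model's character budget."""
    return len(prompt) + len(chat_memory) <= LIMITS[model_type.upper()]

big_file = "x" * 100_000  # a file too large for Basic, fine for Advanced
print(fits_in_context(big_file, "", "basic"))     # False
print(fits_in_context(big_file, "", "advanced"))  # True
```

This is why Advanced models can process entire code files that would overflow a Basic model's context.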

If you are seeking the best results for complex tasks, then choose Advanced AI models.

Access to Advanced AI models is only available in Bito's Team Plan. However, Basic AI models can be used by both free and paid users.

To see how many Advanced AI requests you have left, please visit the Requests Usage page. On this page, you can also set hard and soft limits to control usage of Advanced AI model requests for your workspace and avoid unexpected expenses.

Also note that even if you have set preferred_ai_model: ADVANCED in the Bito CLI config, once your Advanced AI model request quota is finished (or your self-imposed hard limit is reached), Bito CLI will start using Basic AI models instead of Advanced AI models.
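The documented fallback rule can be sketched as follows. The function name and inputs are hypothetical illustrations of the behavior described above, not Bito's actual code.

```python
# Hypothetical sketch of the documented fallback: when the Advanced quota is
# exhausted or the self-imposed hard limit is reached, requests use Basic
# models even if preferred_ai_model is ADVANCED.
def effective_model(preferred: str, advanced_requests_left: int,
                    hard_limit_reached: bool) -> str:
    if (preferred.upper() == "ADVANCED"
            and advanced_requests_left > 0
            and not hard_limit_reached):
        return "ADVANCED"
    return "BASIC"

print(effective_model("ADVANCED", 0, False))   # BASIC (quota finished)
print(effective_model("ADVANCED", 5, True))    # BASIC (hard limit reached)
print(effective_model("ADVANCED", 5, False))   # ADVANCED
```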

Install/run as a self-hosted service

Deploy the AI Code Review Agent on your machine.

The self-hosted AI Code Review Agent offers a more private and customizable option for teams looking to enhance their code review processes within their own infrastructure, while maintaining complete control over their data. This approach is ideal for organizations with specific compliance, security, or customization requirements.

Understanding CLI vs webhooks service

When setting up the AI Code Review Agent, you have the flexibility to choose between two primary modes of operation: CLI and webhooks service.

  • CLI allows developers to manually initiate code reviews directly from the terminal. This mode is ideal for quick, on-demand code reviews without the need for continuous monitoring or integration.

  • Webhooks service transforms the Agent into a persistent service that automatically triggers code reviews based on specific events, such as pull requests or comments on pull requests. This mode is suitable for teams looking to automate their code review processes.

For more details, visit the CLI vs webhooks service page.

Deployment Options

Based on your needs and the desired integration level with your development workflow, choose one of the following options to install and run the AI Code Review Agent:

Before proceeding, ensure you've completed all necessary prerequisites for self-hosted AI Code Review Agent.

  1. Install/run via CLI: Ideal for developers seeking a simple, interactive way to conduct code reviews from the command line.

  2. Install/run via webhooks service: Perfect for teams looking to automate code reviews through external events, enhancing their CI/CD workflow.

  3. Install/run via GitHub Actions: A great option for GitHub users to seamlessly integrate automated code reviews into their GitHub Actions workflows.

FAQs

Answers to popular questions

Enabling unicode For Windows 10 and below

Unicode characters (used by other languages) might not be readily supported in Command Prompt on Windows 10 or below. You can run the command chcp 936 in cmd before using bito to enable unicode characters on Windows 10 or below (to undo this setting, follow this link).

If you are on Windows 11 then you shouldn't encounter any such issues.

Using Homebrew for Bito CLI

  1. Before using homebrew, please make sure that you uninstall any previously installed versions of Bito CLI using the uninstall guide provided here.

  2. Once the above is done, you can use the following commands to install Bito CLI using Homebrew:

    1. First, tap the CLI repo using the brew tap gitbito/bitocli command. This is a one-time action and is not required every time.

    2. Now you can install Bito CLI using the following command:

      • brew install bito-cli - this should install Bito CLI based upon your machine architecture.

    3. To update Bito CLI to the latest version, use the following commands:

      1. Please make sure you always run brew update before upgrading to avoid any errors.

      2. brew update - this will update all the required packages before upgrading.

      3. brew upgrade bito-cli - once the above is done, this will update Bito CLI to the latest version.

    4. To uninstall Bito CLI, you can either use the uninstall guide from here or use the following command:

      • brew uninstall bito-cli - this should uninstall Bito CLI completely from your system.

How does Bito Understand My Code?

Sneak Peek into the Inner Workings of Bito

Bito deploys a Vector Database locally on the user’s machine, bundled as part of the Bito IDE plug-in. This database uses Embeddings (vectors with over 1,000 dimensions) to represent text, function names, objects, etc. from the codebase, transforming them into a multi-dimensional vector space.

Then when you give it a function name or ask it a question, that query is converted into a vector and is compared to other vectors nearby. This returns the relevant search results. So, it's a way to perform search not on keywords, but on meaning. Vector Databases are able to do this kind of search very quickly.
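At toy scale, searching on meaning works like this. The vectors below are tiny hand-made stand-ins for real embeddings (which have 1,000+ dimensions and come from a trained model), and the code index is invented for illustration.

```python
import math

# Toy illustration of meaning-based retrieval with cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings of three code snippets (hypothetical examples).
code_index = {
    "def login(user, password)": [0.9, 0.1, 0.0],
    "def plot_chart(data)":      [0.1, 0.8, 0.3],
    "def send_email(to, body)":  [0.2, 0.1, 0.9],
}

# Pretend embedding of the query "how do users authenticate?".
query_vec = [0.85, 0.15, 0.05]

# Nearest vector wins: no keyword overlap needed between query and snippet.
best = max(code_index, key=lambda k: cosine(code_index[k], query_vec))
print(best)  # def login(user, password)
```

Note that the query shares no keywords with the winning snippet; the match comes purely from vector proximity, which is what lets a Vector Database search by meaning rather than keywords.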

Learn more about how Bito indexes your code so that it can understand it.

Bito also uses an Agent Selection Framework that acts like an autonomous entity capable of perceiving its environment, making decisions, and taking actions to achieve certain goals. It figures out whether it’s necessary to do an embeddings comparison on your codebase, perform an action against Jira, or do something else.

Finally, Bito utilizes Large Language Models (LLMs) from OpenAI, Anthropic, and others that actually provide the answer to the question by leveraging the context provided by the Agent Selection Framework and the embeddings.

This is what makes us stand out from other AI tools like ChatGPT, GitHub Copilot, etc. that do not understand your entire codebase.

We’re making significant innovations in our AI Stack to simplify coding for everyone. To learn more about this head over to Bito’s AI Stack documentation.

Available Keywords

Keywords to invoke AI that understands your code

Here is the list of keywords in different languages to ask questions regarding your entire codebase. Use any of these keywords in your prompts inside Bito chatbox.

English:

  • my code

  • my repo

  • my project

  • my workspace

Chinese:

  • 我的代码

  • 我的仓库

  • 我的代码库

  • 我的项目

  • 我的文件夹

Chinese Traditional:

  • 我的程式碼

  • 我的倉庫

  • 我的項目

  • 我的工作區

Spanish:

  • Mi código

  • Mi repo

  • Mi proyecto

  • Mi espacio de trabajo

Japanese:

  • 私のコード

  • 私のリポ

  • 私のプロジェクト

  • 私のワークスペース

Portuguese:

  • Meu código

  • Meu repo

  • Meu projeto

  • Meu espaço de trabalho

Polish:

  • Mój obszar roboczy

  • moje miejsce pracy

  • mój obszar roboczy

  • moj kod

  • mój kod

  • moim kodzie

  • moje repo

  • moje repozytorium

  • moim repo

  • moj projekt

  • mój projekt

  • moim projekcie

LLM parameters

Parameters are the individual elements of a Large Language Model that are learned from the training data. Think of them as the synapses in a human brain—tiny connections that store learned information.

How Parameters Work in LLMs

Each parameter in an LLM holds a tiny piece of information about the language patterns the model has seen during training. They are the fundamental elements that determine the behavior of the model when it generates text.

For example, imagine teaching a child what a cat is by showing them pictures of different cats. Each picture tweaks the child's understanding and definition of a cat. In LLMs, each training example tweaks the parameters to better understand and generate language.

The Role of Parameters in Understanding and Generating Language

Parameters are crucial because they allow the model to perform tasks such as translating text, writing articles, and even generating source code. When you ask an AI a question, the parameters work together to sift through the learned patterns and generate a response that makes sense based on the training it received.

For instance, if you ask an AI to write a poem, the parameters will determine how to structure the poem, what words to use, and how to create rhyme or rhythm, all based on the data it was trained on.

The Scale of LLM Parameters: Just How Large Are We Talking?

When we say "Large" in LLM, we're not kidding. The size of a language model is directly related to the number of parameters it has.

Take GPT-4, for example, with its 1.76 trillion parameters. That's like 1.76 trillion different dials the model can tweak to get language just right. Each parameter holds a piece of information that can contribute to understanding a sentence's structure, the meaning of a word, or even the tone of a text.

Earlier models had significantly fewer parameters. GPT-1, for instance, had only 117 million parameters. With each new generation, the number of parameters has grown exponentially, leading to more sophisticated and nuanced language generation.

Training LLMs: How Parameters Learn

Training an LLM involves a process called "backpropagation" where the model makes predictions, checks how far off it is, and adjusts the parameters accordingly.

Let's say we're training an LLM to recognize the sentiment of a sentence. We show it the sentence "I love sunny days!" tagged as positive sentiment. The LLM predicts positive but isn't very confident. During backpropagation, it adjusts the parameters to increase the confidence for future similar sentences.

This process is repeated millions of times with millions of examples, gradually fine-tuning the parameters so that the model's predictions become more accurate over time.
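The predict-check-adjust loop described above can be shown at toy scale with a one-parameter logistic model. This is not an LLM (real models adjust billions of parameters via backpropagation through many layers); it is just the same idea of nudging parameters against the prediction error, with a made-up numeric feature standing in for real text input.

```python
import math

def sigmoid(z):
    # Squash a score into a 0..1 "positive sentiment" probability.
    return 1 / (1 + math.exp(-z))

# Hypothetical training data: (count of positive words, sentiment label).
data = [(2, 1), (3, 1), (0, 0), (1, 0)]

w, b, lr = 0.0, 0.0, 0.5  # two "parameters" and a learning rate
for _ in range(1000):
    for x, y in data:
        pred = sigmoid(w * x + b)
        err = pred - y        # how far off the prediction is
        w -= lr * err * x     # adjust each parameter against the error gradient
        b -= lr * err

# After training, an enthusiastic sentence scores positive, a flat one doesn't.
print(sigmoid(w * 3 + b) > 0.5)  # True
print(sigmoid(w * 0 + b) > 0.5)  # False
```

Each pass over the data tweaks `w` and `b` a little, exactly as each training example tweaks an LLM's parameters, only at a scale of two parameters instead of trillions.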

Parameter’s Impact on AI Performance and Limitations

The number of parameters is one of the key factors influencing an AI model's performance. However, more parameters can mean a model requires more computational power and data to train effectively, which can lead to increased costs and longer training times.

With great power comes great responsibility—and greater chances of making mistakes. More parameters can sometimes mean that the model starts seeing patterns where there aren't any, a phenomenon known as "overfitting" where the model performs well on training data but poorly on new, unseen data.

The Future of Parameters in LLMs

The future of LLMs might not just be about adding more parameters, but also about making better use of them. Innovations in how parameters are structured and how they learn are ongoing.

AI researchers are exploring ways to make LLMs more parameter-efficient, meaning they can achieve the same or better performance with fewer parameters. Techniques like "parameter sharing" and "sparse activation" are part of this cutting-edge research.

Conclusion

Parameters in LLMs are the core elements that allow these models to understand and generate human-like text. While the sheer number of parameters can be overwhelming, it's their intricate training and fine-tuning that empower AI to interact with us in increasingly complex ways.

As AI continues to evolve, the focus is shifting from simply ramping up parameters to refining how they're used, ensuring that the future of AI is not just smarter but also more efficient and accessible.


Request changes comments

Block merges until code issues are fixed.

Bito’s Request changes comments feature helps enforce code quality by blocking merges until all AI-generated review comments are resolved—fully supported in GitHub, GitLab, and Bitbucket.

When enabled, Bito identifies actionable issues in pull requests and posts them as formal “Request changes” review comments. If your repository uses branch protection rules that require all review conversations to be resolved before merging, Bito’s flagged comments will automatically block the pull request until addressed.

This ensures developers don’t accidentally merge incomplete or unreviewed code.

How it works

1. Enable comment resolution rules in your Git provider

GitHub:

  • Go to your repository → Settings → Branches

  • Create or edit a branch protection rule (e.g., for main)

  • Enable:

    • ✅ Require a pull request before merging

    • ✅ Require conversation resolution before merging

GitLab:

  • Go to your project → Settings → Merge requests

  • Under Merge checks, enable:

    • ✅ All threads must be resolved before merging

Bitbucket:

  • Go to your repository → Repository settings → Pull requests

  • Under Merge checks, enable:

    • ✅ All pull request tasks must be resolved before merging

2. Turn on “Request changes comments” in Bito

  • Go to Repositories in the Bito dashboard.

  • Click on Settings for your desired AI Code Review Agent instance.

  • Enable the toggle: “Request changes comments”

  • Save changes

When this is on, Bito will flag actionable AI feedback as formal review comments requiring resolution. Informational or minor suggestions will remain as regular comments.

3. What happens in a pull request

  • Bito runs an AI code review on your pull request or merge request.

  • Actionable issues are posted as change requests.

  • Your Git provider treats these comments according to your configured merge rules.

  • If comment resolution is required, the merge is blocked until the flagged issues are resolved.

Example workflow

  1. Developer opens a pull request or merge request.

  2. Bito reviews the code and posts a “request change” comment on a problematic line.

  3. The Git provider blocks the merge due to unresolved comments or threads.

  4. Developer fixes the issue and marks the thread as resolved.

  5. Merge becomes possible once all conditions are met.

Why use this feature?

  • Enforces follow-up on critical AI-detected issues.

  • Works natively with GitHub, GitLab, and Bitbucket workflows.

  • Ensures only reviewed and clean code gets merged.

  • Helps maintain consistent code quality at scale.

Installing on JetBrain IDEs

It takes less than 2 minutes

Get up and running with Bito in just a few steps! Bito seamlessly integrates with popular JetBrains IDEs such as IntelliJ IDEA, PyCharm, and WebStorm, providing powerful AI-driven coding assistance directly within your editor. Click the button below to quickly install the Bito extension and start optimizing your development workflow with context-aware AI Chat, AI Code Completions, and more.

Video guide

Watch the video below to learn how to download the Bito extension on JetBrains IDEs.

Step-by-step instructions

  1. In JetBrains IDEs such as IntelliJ, go to File -> Settings to open the Settings dialog, and click Plugins -> Marketplace tab in the settings dialog. Search for Bito.

Settings in IntelliJ IDE

2. Click "Install" to install the Bito extension. We recommend you restart the IDE after the installation is complete.

Starting with Bito version 1.3.4, the extension is only supported on JetBrains versions 2021.2.4 and higher. JetBrains version 2021.1.3 is no longer supported from Bito version 1.3.4 onward.

3. The Bito panel will appear on the right-hand sidebar. Click it to complete the setup process. You will either need to create a new workspace, if you are the first in your company to install Bito, or join an existing workspace created by a co-worker. See Managing workspace members

Bito in IntelliJ right-hand side bar

The menu to invoke the settings dialog may differ for different IDEs of the JetBrains family. The screenshots highlighted above are for the IntelliJ IDEA. You can access the Bito extension directly from the JetBrains marketplace at https://plugins.jetbrains.com/plugin/18289-bito.

Overview

AI that Understands Your Code

Bito has created the ability for our AI to understand your codebase, which produces dramatically better results that are personalized to you. This can help you write code, refactor code, explain code, debug, and generate test cases – all with the benefits of AI knowing your entire code base.

Bito AI automatically figures out if you're asking about something in your code. If it's confident, it grabs the relevant parts of your code from our index and feeds them to the Large Language Models (LLMs) for accurate answers. But if it's unsure, Bito will ask you to confirm before proceeding.

To specifically ask questions related to your codebase, add the keyword "my code" in English, Cantonese, Japanese, Mandarin, Spanish, or Portuguese (more languages coming soon) to your questions in the Bito chatbox.

Example: in my code explain the file apiUser.js

Additional keywords for various languages are listed on the Available Keywords page.

For now, this feature is only available for our Team Plan which costs $15 per user per month. We have plans to release it for our Free Plan soon. But it will be limited to repos of 10MB indexable size.

Recent breakthroughs in Generative AI and Large Language Models (LLMs) have helped make many AI Coding Assistant tools available, including Bito, to help you develop software faster.

The major issue with these AI assistants, though, is that they have no idea about your entire codebase. Some tools take context from currently opened files in your IDE, while others enable you to manually enter code snippets in a chat-like interface and then ask questions about them.

But with Bito’s AI that understands your entire repository, this is a whole new capability. For example, what if you could ask questions like:

  • how can I add a button to mute and unmute the song to my code in my music player? By default, set this button to unmute. Also, use the same design as existing buttons in UI.

  • In my code list all the files and code changes needed to add column desc in table raw_data in dailyReport DB.

  • In my code suggest code refactoring for api.py and mention all other files that needs to be updated accordingly

  • Please write the frontend and backend code to take a user’s credentials, and authenticate the user. Use the authentication service in my code

This will definitely improve the way you build software.

Share chat session

Let your friends see what you and Bito are creating together.

Easily share insights from any AI Chat session by creating a unique shareable link directly from the Bito extension in VS Code or JetBrains IDEs.

Whether you need to share AI-generated code suggestions, explanations, or any other chat insights, this feature allows you to create a public link that others can access. The link will remain active for 15 days and can be viewed by anyone with access to the URL, making collaboration and knowledge sharing seamless.

Additionally, you can quickly share your AI Chat session through a pre-written Tweet or an Email.

Note:

  • The link will expire in 15 days.

  • The session link will be publicly accessible by anyone with the link.

  • Recipients can access the chat session in any web browser by using the unique URL.

Let's see how it is done:

  1. Open Bito in Visual Studio Code or any JetBrains IDE.

  2. Start a conversation in Bito’s AI Chat user interface.

  3. Locate the share button on the top right of the Bito extension side-panel.

  4. Click the share button to open a menu with options, including X (Twitter), Email, and Link.

  5. Share on X (Twitter):

    1. Click on X (Twitter) from the menu, and a dialogue window will appear, asking whether you want to open the external site.

    2. Simply click "Open" to proceed.

    3. You will be redirected to the X (Twitter) website, with a pre-written tweet containing a link to your Chat Session ready to be published.

    4. Click the "Post" button to send the tweet.

  6. Share Through Email:

    1. Click on Email from the menu, and you will be redirected to your email application.

    2. Select your email account if needed.

    3. The email will be pre-filled with all the necessary information, including the link to your Chat Session.

    4. Add the receiver(s) of this email using the "To" input field.

    5. Click the "Send" button to send the email.

  7. Share the Link:

    1. Click on Link from the menu.

    2. A confirmation popup will appear. Click Share session to generate a unique URL for your chat session, which will automatically be copied to your clipboard for easy sharing.

    3. Feel free to share this link with anyone you'd like to.

Upgrading Bito plugin

How to Update Bito Plugin on VS Code and JetBrains IDEs

Keeping your Bito plugin up to date ensures you have access to the latest features and improvements. In this article, we will guide you through the steps to update the Bito plugin on both VS Code and JetBrains IDEs. Let's dive in!

Updating Bito Plugin on VS Code

  1. Open your VS Code IDE

  2. Navigate to the Extensions view by clicking on the square icon in the left sidebar

  3. In the search bar, type "Bito" to locate the Bito plugin

  4. Once you locate the Bito plugin, click on the update button to initiate the update

Pro Tip 💡: Enable Auto-update for Bito Plugin on VS Code (as shown in the video)

Updating Bito Plugin on JetBrains IDEs

  1. Open your JetBrains IDE (e.g., IntelliJ IDEA, PyCharm, etc.)

  2. Go to Settings by clicking on "File" in the menu bar (Windows/Linux) or by clicking on "IntelliJ IDEA" in the menu bar (macOS).

  3. In the Settings window, navigate to the "Plugins" section

  4. Switch to the "Installed" tab to view the list of installed plugins

  5. Locate the Bito plugin in the list and click on the update button to initiate the update

FAQs

Answers to popular questions about the AI Code Review Agent.

How do I whitelist Bito's gateway IP address for my on-premise Git platform?

To ensure the AI Code Review Agent operates smoothly with your GitHub (Self-Managed) or GitLab (Self-Managed), please whitelist all of Bito's gateway IP addresses in your firewall to allow incoming traffic from Bito. This will enable Bito to access your self-hosted repository.

List of IP addresses to whitelist:

  • 18.188.201.104

  • 3.23.173.30

  • 18.216.64.170

The agent response can come from any of these IPs.
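If your self-hosted Git platform runs on a Linux host, the whitelist step above can be sketched as a small firewall script. This is only an illustration: it assumes ufw as the firewall and prints the rules as a dry run instead of applying them; adapt it to your actual firewall or security-group tooling.

```shell
# Dry-run sketch: print ufw rules that would allow Bito's gateway IPs.
# Remove the 'echo' (and run as root) to actually apply the rules.
BITO_IPS="18.188.201.104 3.23.173.30 18.216.64.170"
for ip in $BITO_IPS; do
  echo "ufw allow from $ip to any port 443 proto tcp"
done > bito_fw_rules.txt

# Review the generated rules before applying them:
cat bito_fw_rules.txt
```

Port 443 is shown on the assumption that your Git platform is served over HTTPS; use whichever port your instance actually listens on.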

How can I prevent the AI Code Review Agent from stopping due to token expiry?

You should set a longer expiration period for your GitHub Personal Access Token (Classic) or GitLab Personal Access Token. We recommend setting the expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.

Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.

For more details on how to create tokens, follow these guides:

  • GitHub Personal Access Token (Classic): View Guide

  • GitLab Personal Access Token: View Guide

What is "Estimated effort to review" in code review output?

This is an AI-generated estimate, on a scale of 1-5 (inclusive), of the time and effort required for an experienced, knowledgeable developer to review this Pull Request (PR). A score of 1 means a short, easy review, while a score of 5 means a long, hard one. The estimate takes into account the size, complexity, quality, and required changes of the PR code diff.

Why does Bito need access to my Git account?

Bito requires certain permissions to analyze pull requests and provide AI-powered code reviews. It never stores your code and only accesses the necessary data to deliver review insights.

What permissions does Bito need?

Bito requires:

  1. Read access to code and metadata: To analyze PRs and suggest improvements

  2. Read and write access to issues and pull requests: To post AI-generated review comments

  3. Read access to organization members: To provide better review context

I don’t have admin permissions. Can I still use Bito?

If you don’t have admin access, you’ll need your administrator to install Bito on your organization’s Git account. Once installed, you can use it for PR reviews on allowed repositories. GitHub also notifies the organization owner with a request to install the app.

Does Bito store my code?

No, Bito does not store or train models on your code. It only analyzes pull request data in real-time and provides suggestions directly within the PR.

Can I choose which repositories Bito has access to?

Yes, after installation, you can select specific repositories instead of granting access to all. You can also manage repository access later through our web dashboard.

What happens after I install the Bito App?

Once installed, you’ll be redirected to Bito, where you can:

  1. Select repositories for AI-powered reviews

  2. Customize review settings to fit your workflow

  3. Open a pull request to start receiving AI-driven suggestions

Where can I get help if I have issues installing Bito?

Contact [email protected] for any assistance.

Diff view

Any Shortcut such as "Performance Check" or "Improve Readability" that proposes changes to your existing code automatically opens a "Diff View" between the proposed and actual code. This allows you to review the changes before accepting them into your code. The diff view opens automatically when Bito AI returns the proposed changes. You can also view the diff at any point through the "Diff" action.

Diff view between proposed and actual code change.
Diff View action on the proposed code changes

Video showing side-by-side diff view

AI Code Completions

Code Completions from AI that Understands Your Code

Accept/reject suggestions

Seamless Integration With Your Coding Workflow

Bito's AI Code Completions don't interfere with your coding process. Bito offers code suggestions only after you have paused typing for 250 milliseconds (adjustable in settings), or when you explicitly request a completion by pressing Alt + Shift + K on Windows or Option + Shift + K on macOS; the suggested code is displayed only as a placeholder.

Alt + Shift + K will only work on a line where you already have some code. It won't work on a blank line.

To accept the entire code suggestion, simply press the "Tab" key on your keyboard. Alternatively, you can accept the code completion incrementally, word by word, by pressing "" (coming soon...). To accept one line at a time, click the three dots button in the code completion UI toolbar and then select "Accept Line" (coming soon...).

If you don’t like the suggestion, Bito does not force you to use it. You can simply dismiss it by pressing the “Esc” key on your keyboard or continue typing as normal.

Bito also provides alternative suggestions, which you can navigate using the arrow keys in the code completion UI toolbar or by using the shortcut keys mentioned below.

AI Code Completions toolbar shortcuts:

  • Show next suggestion: Option + ] (macOS), Alt + ] (Windows)

  • Show previous suggestion: Option + [ (macOS), Alt + [ (Windows)

How does it work?

Get Real-Time Suggestions from Bito as You Type or Through Code Comments

Bito analyzes the file you are currently editing and your codebase to understand the context. It offers two types of AI Code Completions:

1- Autocompletions (complete the line as you type)

In this method, as you are writing a line of code, Bito will automatically predict what you will write next and generate relevant suggestions based on your codebase.

Use Keyboard Shortcuts to accept, reject, or navigate through multiple suggestions.

2- Comment to code

In this method, you can write any kind of requirements you have in natural language comments, and Bito will suggest the best code tailored to your codebase to fulfill those requirements – often writing the entire function.

Use Keyboard Shortcuts to accept, reject, or navigate through multiple suggestions.

Bito can understand both single-line and multi-line comments in the supported programming languages. Therefore, if your requirements are a bit lengthy, simply use multi-line comments for ease!
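As a sketch of the "comment to code" flow, here is a shell example: you write only the comment line, and Bito proposes a function below it. The function body shown is a plausible suggestion for illustration, not guaranteed Bito output.

```shell
# You type only this comment; Bito suggests the function under it.

# Write a shell function that counts the number of lines in a file
count_lines() {
  wc -l < "$1"
}

# Trying out the suggested function on a small sample file:
printf 'first\nsecond\nthird\n' > sample.txt
count_lines sample.txt
```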

Explore some examples of the AI Code Completions feature.

How Good are AI Code Completions?

Since Bito is familiar with your entire codebase, it can provide more accurate code suggestions than other AI Coding Assistants available today.

For example:

  1. Bito can see your imports and predict what task you are trying to complete.

  2. Bito can read the function you're inside and predict what you'll do next.

  3. Bito can spot the APIs you've integrated and suggest possible endpoints to call.

After gathering the context, Bito uses different Large Language Models (LLMs) to come up with some options that you will most likely want to write next. So, if one solution doesn’t work, there are more you can try.

Bito provides high-quality code completions that align with the code you are working on. However, if the suggested completions are not as accurate in your specific case, you can write additional code or provide explicit instructions in comments to help Bito better understand the context and generate more precise solutions.

Bito CLI

Command Line Interface (Powered by Bito AI Chat) to Automate Your Tasks

Creating a Bito account

Try Advanced AI Coding Assistant for Free

You need to create an account with your email to use Bito. You can sign up for Bito directly from the IDE extension or the Bito web interface at https://alpha.bito.ai/.

  1. After you install the Bito extension, click the "Sign up or Sign-in" button on the Bito sign-up flow screen.

  2. On the next screen, enter your work email address and verify it with the six-digit code sent to your email address.

  3. Once your email is verified, you will get an option to create your profile. Enter your full name and set the AI output language. Bito uses this setting to generate output regardless of the prompt language.

Now, let's learn how to create a new workspace or join an existing one to start using Bito.

Basic/Advanced AI models

Basic models are free, while advanced models provide best results.

Bito's Team Plan users can start a conversation with either Basic AI Models (e.g., GPT-4o mini, Claude Haiku, Nova Lite 1.0, and similar models) or Advanced AI Models (e.g., o3-mini, DeepSeek-V3 (served from the US and Europe), GPT-4o, Claude Sonnet 3.5, and other best-in-class AI models). In contrast, Free Plan users are limited to Basic AI Models only.

By default, the Team Plan utilizes Advanced AI Models.

To switch between Basic and Advanced AI models, open the drop-down menu in the bottom-right corner of the Bito panel and select your preferred model.

You can also continue your previous chat sessions by selecting them from the Chat Session History tab.

Below, you'll find an easy guide on how to do this.

This guide will help you understand when to use Basic and when to use Advanced AI models. You will also learn how to select and chat with these models in the Bito chatbox.

When to use Basic vs Advanced AI Models?

Basic AI Models

These models are designed to provide essential AI capabilities for most everyday coding tasks. They offer a solid starting point for generating boilerplate code, writing documentation, explaining code snippets, and solving simple coding problems.

While using Basic AI models, your prompts and the memory of the chat are limited to 40,000 characters (about 18 single-spaced pages).

They are also less expensive in terms of API costs. So, if you are frequently asking less-important questions in the Bito chatbox, then these Basic AI models will definitely help you save costs.

Advanced AI Models

These models are more suitable for high-complexity tasks that require long/complex prompts and advanced reasoning.

They provide more accurate and relevant code snippets, comments, or solutions to complex coding problems.

Additionally, when using Advanced AI models, your prompts and the chat memory can extend up to 240,000 characters (about 110 single-spaced pages). This means that these models can process your entire code files, leading to more accurate answers.

So, if you are looking for the best results for complex tasks, then go with Advanced AI models.

Advanced AI Models are only available in Bito's Team Plan. If you have not subscribed to it yet, then head over to our Pricing Page to learn more about it. One of the key features of Team Plan is "AI that Understands Your Code".

How to Chat with Basic/Advanced AI Models?

When you open the Bito plugin in VS Code or JetBrains IDEs, the "AI Chat" tab is displayed by default. This tab includes a drop-down menu at the bottom-right corner that allows you to select the AI model you want to chat with.

The available AI models are categorized under two sections: "BASIC" and "ADVANCED". You can either let Bito auto-select an AI model or manually pick one that best suits your needs.

To use Advanced AI Models, you need a Bito Team Plan. For details about the costs, please visit our Pricing page.

Once you select an AI model and start a chat with it, the drop-down menu will disappear, and your chosen model will handle the entire chat session.

If you want to change the AI model, click the New Chat icon located in the bottom-left corner of the Bito plugin pane. In the new chat session, select a different model from the drop-down menu.

Tip: Instead of starting a new conversation each time you want to switch between the Basic and Advanced AI models, you can revisit and continue your previous chats by navigating to the Chat Session History tab in the plugin. This allows you to pick up where you left off with any AI model.

Setting AI output language

Communicate in Your Preferred Language

Bito users come from all over the world, and Bito makes it super easy to set the AI output language. Bito will automatically generate text output in the language set in your user profile, regardless of the prompt's input language.

Bito allows setting this language when creating an account, as described in Creating a Bito account.

You can also set or change this setting anytime by going to Settings > Profile settings in Bito Cloud. Here is a quick video walkthrough.

Supported Languages:

Bito offers 20+ languages for you to choose from. Here is the list of currently supported languages:

  1. English (Default Language)

  2. Bulgarian (български)

  3. Chinese (Simplified) (简体中文)

  4. Chinese (Traditional) (繁體中文)

  5. Czech (čeština)

  6. French (français)

  7. German (Deutsch)

  8. Hungarian (magyar)

  9. Italian (italiano)

  10. Japanese (日本語)

  11. Korean (한국어)

  12. Polish (polski)

  13. Portuguese (português)

  14. Russian (русский)

  15. Spanish (español)

  16. Turkish (Türkçe)

  17. Vietnamese (Tiếng Việt)

  18. Dutch (Nederlands)

  19. Hebrew (עִברִית)

  20. Arabic (عربي)

  21. Malay (Melayu)

  22. Hindi (हिंदी)

Using the Language Support Feature

Once you have selected your preferred language, Bito will communicate with you in your selected language. Take full advantage of this feature by:

  • Asking questions or giving commands to Bito in your selected language

  • Receiving responses and outputs from Bito in the language you've selected

Note: All responses from Bito will appear in the selected language, regardless of the input language

Enjoy the convenience of conversing with Bito in your native language and take your coding experience to a new level!

Wingman Coding Agent

AI Coding agent that takes action

Currently available in private beta. Want early access? Contact us.

Bito Wingman is an AI coding agent designed to revolutionize the way you build software. Unlike traditional code assistants or autocomplete tools, Wingman acts as a virtual developer on your team, capable of handling complex coding tasks from start to finish with direction from you. Much of Bito Wingman was built by Bito Wingman.

Wingman understands high-level instructions, breaks them into actionable steps, researches relevant information, and executes tasks autonomously.

Wingman can handle everything from code generation to managing Jira tickets and updating files. It deeply understands your code, excels at reasoning and planning to handle complex tasks, and has access to apps such as file operations, Jira, Linear, Confluence, GitHub, GitLab, and more.

AI in its purest form frees us up to work much more iteratively and on many things at one time. But you need tools that work that way too. Wingman is designed to be nimble to meet your work habits, from the browser to your local IDE. Run as many tasks as you have browser tabs open. Work in your IDE too. Switch back and forth. It’s all possible with your Wingman.

Example instructions for Wingman

Here are some real-world examples of tasks you can ask Wingman to handle, from coding and documentation to building and testing.

  • “Review jira ticket AI-5623, write the code, update the necessary files, and commit it. Mark the ticket as in testing”

  • “Document my repo and upload it to confluence. Please be sure to highlight the major modules and the key dependencies. Diagram out the system architecture in mermaid.”

  • “Update my build script, then build and run my code”

Note: If the file you want Bito Wingman to write to exceeds 24k characters, updates may not work as expected. This is due to AI model limitations, which typically handle outputs up to 8k characters. We're actively working on a fix and expect improvements within the next month.

What can Wingman do?

  • Write code: Generate high-quality, context-aware code to implement features, fix bugs, or even start entire projects from scratch.

  • Plan and execute: Understand your objectives, break them into smaller steps, and manage execution intelligently.

  • Research on demand: Use web browsing capabilities to gather information, research APIs, or solve challenges in real time.

  • Automate repetitive tasks: Handle the grunt work, from generating boilerplate code to managing Jira tickets and updating files.

  • Integrate seamlessly: Work across popular tools like Jira, Linear, and Confluence to fit right into your workflow.

  • Collaborate intelligently: Act as an AI pair programmer, offering proactive suggestions, reasoning about solutions, and scaling alongside your team’s needs.

How does Wingman work?

Wingman uses a combination of large language models, planning algorithms, and integrations with your favorite tools to deliver results. When you give Wingman an instruction, it:

  • Understands: Processes the intent behind your high-level request.

  • Plans: Breaks down the task into actionable steps with a clear roadmap.

  • Researches: Fetches relevant information if needed, such as documentation or examples.

  • Executes: Writes, tests, and manages code or other assets to complete the task end-to-end.

Why use Bito Wingman?

  • Save time: Offload tedious and repetitive tasks, freeing you to focus on creative and strategic aspects of development.

  • Increase productivity: Tackle more in less time with an AI developer that handles projects autonomously. Many developers on our own team report being 50% to 300% more productive.

  • Boost quality: Generate clean, functional code with minimal errors thanks to Wingman’s intelligent reasoning.

  • Seamless collaboration: Work smarter with an AI assistant that integrates with your workflow and scales with your team.

Pricing details

Bito Wingman is available as part of our Team Plan. For detailed pricing information, visit our Pricing page.

Learn more

Google Chrome extension

It takes less than 2 minutes

Step-by-Step Instructions

  1. Open the Bito Chrome Extension page in your browser.

  2. Now click on the “Add to Chrome” button.

  3. A popup will appear. Click on “Add extension” to install Bito in Google Chrome. The installation process will only take a few seconds.

  4. Once Bito is installed, a new popup will appear, informing you that the Bito extension has been added to Chrome. Simply close this popup.

  5. Now click the “Extensions” icon at the top-right of your browser window. A popup box will appear listing your extensions, including “Bito”. Select the small pin icon next to it to always show the “Bito” icon in the browser ribbon.

    For example:

  6. Now, click the “Bito” icon in the browser ribbon. You will be asked to create a new Bito account or sign in to your existing account. You can also try Bito as a Guest user, but Guests are limited to 10 questions, so it is better to sign up and get unlimited access for free.

  7. Type your email address and click on “Continue”. A six-digit confirmation code will be sent to your email.

  8. Enter the confirmation code in the given input field and click on “Submit”.

  9. If the confirmation code you entered is correct, you will be logged in to the Bito Chrome Extension. Here is how the user interface (UI) will look.

  10. You can try the prompts already given in the UI by simply clicking on them. Additionally, you can type your own technical questions in the given input box and click on the “Send” button.

  11. Bito will quickly start writing the answer. Once the answer is generated, you can select the “Copy Answer” button to copy it to your clipboard and paste it wherever you want.

Chat with AI Code Review Agent

Ask questions about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

Real-time collaboration with the AI Code Review Agent accelerates your development cycle. By delivering immediate, actionable insights, it eliminates the delays typically experienced with human reviews. Developers can engage directly with the Agent to clarify recommendations on the spot, ensuring that any issues are addressed swiftly and accurately.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

How to chat?

To start a conversation, type your question directly as a reply to the Agent’s code review comment.

The AI Code Review Agent will analyze your comment and determine if it’s a valid and relevant question.

  • If the agent decides it’s a valid question, it will respond with helpful insights.

  • If the agent determines it’s unclear, off-topic, or not related to its feedback, it will not respond.

To help the agent recognize your question faster, you can also tag your comment with @bitoagent or @askbito. Tagging informs the Agent that your message is intended as a question. However, tagging does not guarantee a reply. The agent will still analyze your comment and decide whether it is a valid question worth responding to.

Bito usually responds within about 10 seconds.

  • On GitHub and Bitbucket, you may need to manually refresh the page to see the response.

  • On GitLab, updates happen automatically.

Note: The AI Code Review Agent will only respond to questions posted as a reply to its own comments. It will not reply to questions added on threads that it didn’t start.

What you can ask about

When chatting with the AI Code Review Agent, you can ask questions to better understand or improve the code feedback it provided. Here are examples of what you can ask:

  • Clarifications about a highlighted issue Ask the AI to explain why it flagged a certain line of code or why something might cause a problem.

  • Request for alternative solutions Request different ways to fix or improve the code beyond what was originally suggested.

  • Deeper explanations If you want to understand the technical reasoning behind a suggestion (e.g., security concerns, performance impacts, best practices), you can ask for more detailed explanations.

  • Request for examples Ask the AI to provide an example snippet showing the corrected or improved code.

  • Trade-off discussions Ask the AI about pros and cons of different approaches it may have suggested (e.g., performance vs. readability).

  • Best practices guidance Request advice on best practices related to the specific code snippet — such as naming conventions, error handling, optimization tips, or design patterns.

  • Language-specific advice If you’re working in a particular language (e.g., JavaScript, Python, Java), you can ask for language-specific guidance related to the comment.

  • Request for more context If the suggestion feels too "short" or "surface level," you can ask the AI to explain more about the broader coding or architectural concept behind its feedback.

  • Security and safety questions If a suggestion touches on security (like input validation, authentication, or encryption), you can ask for further security-related advice.

  • Testing and validation Ask the AI if it recommends writing any tests based on its code suggestions and what those tests might look like.

Tip: Feel free to ask your question in your preferred language! Bito supports over 20 languages, including English, Hindi, Chinese, and Spanish.

What you cannot ask about

The AI can only answer questions related to its own code review comments.

  • You cannot ask general questions about the repository or unrelated topics.

  • You cannot start a new thread independently — your question must be a reply to a comment made by Bito’s AI Code Review Agent.

If your comment is not linked to a Bito review comment, the AI will not respond.

Overview

Bito CLI (Command Line Interface)

Bito CLI is an innovative tool that harnesses the power of Bito AI Chat to automate software development workflows. It can automate repetitive tasks like software documentation, test case generation, pull request review, release notes generation, writing commit messages or pull request descriptions, and much more.

For example, you can run a command like bito -p writedocprompt.txt -f mycode.js in non-interactive mode (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and mycode.js contains the actual code on which the action is to be performed).
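The non-interactive flow above can be sketched end to end as follows. The snippet only prepares the two input files; the final bito call is left commented out because it assumes Bito CLI is installed and you are signed in, and the file contents are illustrative.

```shell
# 1) The prompt file: the instruction Bito should apply.
printf 'Explain the code below in brief\n' > writedocprompt.txt

# 2) The code file the prompt will act on (illustrative content).
printf 'function add(a, b) { return a + b; }\n' > mycode.js

# 3) Run Bito CLI in non-interactive mode (requires Bito CLI installed):
# bito -p writedocprompt.txt -f mycode.js > explanation.md
```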

Here is the complete list of .

Download Bito CLI from GitHub:

With support for 50+ programming languages (Python, JavaScript, SQL, etc.) and 50+ spoken languages (English, German, Chinese, etc.), Bito CLI is versatile and adaptable to different project needs. Furthermore, it's designed to be compatible across multiple operating systems, including Windows, Mac, and Linux, ensuring a wide range of usability.

Inside Bito CLI, you can use either "ADVANCED" AI models (such as GPT-4o, Claude Sonnet 3.5, and other best-in-class models) or "BASIC" AI models (such as GPT-4o mini and similar models).

When using Basic AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with Advanced AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that Advanced models can process your entire code files, leading to more accurate answers.

If you are seeking the best results for complex tasks, then choose Advanced AI models.

Access to Advanced AI models is only available in Bito's Team Plan. However, Basic AI models can be used by both free and paid users.

Bito CLI is an invaluable asset for developers looking to increase efficiency and productivity in their workflows. It allows developers to save time and focus on the more complex and creative aspects of their work. Additionally, Bito CLI plays a crucial role in supporting continuous integration and deployment (CI/CD) processes. Explore some automations we've created using Bito CLI, which you can implement in your projects right now. These automations showcase the powerful capabilities of Bito CLI.

To get started, check out our Bito CLI guide to ensure you make the most out of it.

Workspace

Learn How to Create, Join, or Change Workspace

A workspace is a dedicated environment or space where teams can collaborate and use Bito services. After logging into your Bito account, you can either create a new workspace or join an existing one you've been invited to.

You can use Bito in a single-player mode for all the use cases. However, it works best when your coworkers join the Workspace for collaboration.

Create a New Workspace

The link to create a new workspace will appear at the bottom of the sign-up flow screen. Click on "Create Workspace" to get started.

Now, enter the name of the workspace. You can also choose to make this workspace discoverable by the users with the same domain email as your email. Finally, click on the "Next" button to proceed creating a new workspace.

For example, if your email address is on the domain @mywebsite.com and you enable the "Workspace discovery" feature, then any other person with an email that ends in @mywebsite.com can join your workspace after they sign in.

You can always switch this feature off later from the workspace settings page.

Workspace discovery feature is not available for public email addresses like @gmail.com, @outlook.com, @yahoo.com, etc.

Once you complete the Workspace setup, Bito will be ready to use.

Join an Existing Workspace

If your email domain is allowed for the Workspace, or your coworker invited you, you will see the Workspace listed during the sign-up flow under the "Workspaces Available to Join" list.

Simply click on the "Join" button given in front of the workspace you want to join. Joining your company or team Workspace takes less than a minute.

Alternatively, you can join the Workspace through the Workspace link shared by your coworker.

Change Workspace

Follow the below steps to switch to a different workspace:

  1. First, log out of your Bito account.

  2. Then, log back in and choose the workspace you want from the available list.

How to See Which Workspace You Are In?

In the IDE extension, place your mouse cursor over the workspace icon. The workspace name will show up as a tooltip.

Example Questions

What Types of Questions Can be Asked?

You can try asking any question you may have in mind regarding your codebase. In most cases, Bito will give you an accurate answer. Bito uses AI to determine if you are asking about something in your codebase.

However, if you want to ask a question about your code no matter what, then you can use our pre-defined keywords such as "my code", "my repo", "my project", "my workspace", etc., in your question.

The complete list of these keywords is given in our documentation.

Here are some popular use cases (with example questions):

Code Explanation

  • What a particular code file does

    • In my code what does code in sendgrid/sendemail.sh do?

  • What a particular function in my code does

    • In my repo explain what function message_tokens do

Code Translation

  • In my project rewrite the code of signup.php file in nodejs

Code Refactoring

  • In my workspace suggest code refactoring for api.py and mention all other files that need to be updated accordingly

Fix Bugs

  • In my code find runtime error possibilities in script.js

  • Find logical errors in scraper.py in my code

Detect Code Smells

  • In my code detect code smells in /app/cart.php and give solution

Generate Documentation

  • Generate documentation for search.ts in my workspace in markdown format

Generate Unit tests

  • In my code write unit tests for index.php

  • In my code generate test code for code coverage of cache.c

Summarize Recent Code Changes

  • summarize recent code changes in my code

Code Search using natural language

  • Any function to compute tokens in my project?

  • Any code or script to send emails in my workspace?

  • In my repo list all the line numbers where $alexa array is used in index.php.

Give details of making modifications

  • In my code list all the files and code changes needed to add column desc in table raw_data in dailyReport DB.

Quick Overview

10X Developer with Bito

Bito's AI helps developers dramatically accelerate their impact. It's a Swiss Army knife of capabilities that can 10x your developer productivity and save you an hour a day, using the same models as ChatGPT!

Bito AI makes it easy to write code, understand syntax, write test cases, explain code, comment on code, check security, and even explain high-level concepts. Trained on billions of lines of code and millions of documents, it's pretty incredible what we can help you do without having to search the web or waste time on tedious stuff.

What can you do with Bito?

Bito AI is a general-purpose AI assistant: developers can ask any technical question, generate code from natural language prompts, and get feedback on existing code. Here are some things you can do with Bito AI Knowledge Assistance.

  1. Generate Code: Ask Bito to generate code in any language with the natural language prompt. (e.g., write a Java function to convert a number from one base to another)

  2. Command Syntax: Ask for the syntax of any technical command. (e.g., How do I set a global variable for git?)

  3. Test Cases: Generate test cases for the code.

  4. Explain Code: Explain the selected code. Ask how this code works or what it does.

  5. Comment Method: Generate a comment for the function or method to add to your code.

  6. Improve Performance: Ask how you can improve the performance of a given code.

  7. Check Security: Ask if the selected code has any known security issues.

  8. Learn Technical Concepts: Ask a question about any technical concept (e.g., Explain B+ trees, explain Banker's algorithm)

Through extensions, Bito meets you where you work: in your IDEs, like Visual Studio Code or the JetBrains family of IDEs.

Next, learn how to install Bito extensions.

Integrate the AI Code Review Agent into the CI/CD pipeline

Automate code reviews in your Continuous Integration/Continuous Deployment (CI/CD) pipeline—compatible with all CI/CD tools, including Jenkins, Argo CD, GitLab CI/CD, and more.

Bito Cloud lets you integrate the AI Code Review Agent into your CI/CD pipeline for automated code reviews. This document provides a step-by-step guide to help you configure and run the script successfully.

Installation and Configuration Steps

  1. Select the appropriate Git provider guide based on your Git provider, and follow the step-by-step instructions to install the AI Code Review Agent using Bito Cloud. Be sure to review the prerequisites and the installation/configuration steps provided in the documentation.

  2. Download the bito-action-script folder from GitHub, which includes a shell script (bito_actions.sh) and a configuration file (bito_action.properties).

  3. You can integrate the AI Code Review Agent into your CI/CD pipeline in two ways, depending on your preference:

  • Option 1: Using the bito_action.properties File

    • Configure the following properties in the bito_action.properties file located in the downloaded bito-action-script folder.

| Property Name | Description |
| --- | --- |
| agent_instance_url | The URL of the Agent instance, provided after configuring the AI Code Review Agent with Bito Cloud. |
| agent_instance_secret | The secret key for the Agent instance, obtained after configuring the AI Code Review Agent with Bito Cloud. |
| pr_url | URL of your pull request on GitLab, GitHub, or BitBucket. |
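For reference, a minimal bito_action.properties could look like the following. The angle-bracket values are placeholders; substitute the actual URL and secret shown in your Bito Cloud agent configuration:

```properties
# Values provided by Bito Cloud when you configure the Agent instance
agent_instance_url=<agent_instance_url>
agent_instance_secret=<secret>
# Pull request to review (GitLab, GitHub, or BitBucket)
pr_url=<pr_url>
```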
  • Run the following command:

    • ./bito_actions.sh bito_action.properties

    • Note: When using the properties file, make sure to provide all three parameters in the .properties file.

  • Option 2: Using Runtime Values

    • Provide all necessary values directly on the command line:

      • ./bito_actions.sh agent_instance_url=<agent_instance_url> agent_instance_secret=<secret> pr_url=<pr_url>

      • Replace <agent_instance_url>, <secret>, and <pr_url> with your specific values.

    • Note: You can also override the values given in the .properties file or provide values that are not included in the file. For example, you can configure agent_instance_url and agent_instance_secret in the bito_action.properties file, and only pass pr_url on the command line during runtime.

      • ./bito_actions.sh bito_action.properties pr_url=<pr_url>

      • Replace <pr_url> with your specific value.

  4. Incorporate the AI Code Review Agent into your CI/CD pipeline by adding the appropriate commands to your build or deployment scripts. This integration will automatically trigger code reviews as part of the pipeline, enhancing your development workflow by enforcing code quality checks with every change.
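As a sketch, a generic pipeline step might wrap the command above in a small shell script. The variable names (AGENT_URL, AGENT_SECRET, PR_URL, DRY_RUN) are placeholders for values your CI system would inject — this is one way to wire it up, not an official integration:

```shell
#!/bin/sh
# Generic CI step that triggers an AI code review on the current pull request.
# AGENT_URL / AGENT_SECRET would come from the pipeline's credential store;
# PR_URL from the CI system's pull-request context. All names are placeholders.
set -eu

AGENT_URL="${AGENT_URL:-<agent_instance_url>}"
AGENT_SECRET="${AGENT_SECRET:-<secret>}"
PR_URL="${PR_URL:-<pr_url>}"

# Build the exact invocation documented above (Option 2: runtime values).
CMD="./bito_actions.sh agent_instance_url=$AGENT_URL agent_instance_secret=$AGENT_SECRET pr_url=$PR_URL"

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Print the command instead of running it; useful when debugging the job.
  echo "$CMD"
else
  $CMD
fi
```

In Jenkins, GitLab CI/CD, or Argo CD this would run as an ordinary job step after the pull request is opened, with DRY_RUN=0 and the secret supplied by the pipeline rather than hard-coded.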

Generative AI

Generative AI has been making waves across various sectors, from art to technology, leaving many people scratching their heads and wondering: WTF is Generative AI? In this guide, we'll unpack the buzzword and provide you with a clear understanding of what Generative AI is, how it works, and why it's becoming increasingly important in our digital world.

What is Generative AI?

At its core, Generative AI refers to the subset of artificial intelligence where the systems are designed to generate new content. It’s like giving an artist a canvas, but the artist is an algorithm that can create images, compose music, write text, generate programming source code, and much more.

Generative AI systems are typically powered by machine learning models that have been trained on vast datasets. They learn patterns, structures, and features from this data and use this understanding to generate new, original creations that are often indistinguishable from content created by humans.

How Does Generative AI Work?

Generative AI works using advanced machine learning models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).

These models involve two key components:

  1. Generative Models: These are the AI algorithms that create the new data. For example, a generative model might create new images of animals it has never seen before by learning from a dataset of animal pictures.

  2. Discriminative Models: In the case of GANs, the discriminative model evaluates the data generated by the generative model. This is like an art critic who tells the artist if their work is believable or not.

The two models work together in a sort of AI tug-of-war, with the generative model trying to produce better and better outputs and the discriminative model trying to get better at telling the difference between generated and real data.

Applications of Generative AI

Generative AI has a plethora of applications. Here are a few:

  • Art: Apps like DeepArt and platforms like DALL-E generate original visuals and art based on user prompts.

  • Music: AI like OpenAI's Jukebox can generate music, complete with lyrics and melody, in various styles and genres.

  • Text: Tools like ChatGPT can write articles, poetry, and even code based on text prompts. Bito also falls in this category as an AI Coding Assistant.

  • Design: Generative AI can suggest design layouts for everything from websites to interior decorating.

  • Deepfakes: This controversial use involves generating realistic video and audio recordings that can mimic real people.

Benefits and Challenges

Benefits

  • Efficiency: Generative AI can produce content much faster than humans.

  • Creativity: It has the potential to create novel combinations that might not occur to human creators.

  • Personalization: AI can tailor content to individual tastes and preferences.

Challenges

  • Ethics: Generative AI raises questions about authenticity and the ownership of AI-generated content.

  • Quality Control: Ensuring consistent quality of AI-generated content can be challenging.

  • Misuse: There’s a risk of its use in creating misleading information or deepfakes.

Future Prospects

The future of Generative AI is both exciting and uncertain. It could revolutionize how we create and consume content. For instance, imagine personalized movies generated in real-time to match your mood, or educational content adapted perfectly to each student's learning style.

As technology advances, so too will the capabilities of Generative AI. It's not just about the ‘WTF’ factor; it's about recognizing the potential and preparing for the transformation it will bring about.

Conclusion

Generative AI is at the frontier of innovation, standing at the crossroads of creativity and computation. It is transforming the conventional processes of creation across various fields and presenting us with a future where the line between human and machine-made is increasingly blurred. While it brings with it a host of benefits, we must tread carefully to navigate the ethical considerations and harness its power for the greater good.

As with any transformative technology, the question isn’t just 'WTF is Generative AI?' but also 'How do we responsibly integrate it into our society?' That is the real challenge and opportunity ahead.

Managing workspace members

Bring your team together

In Bito, team members collaborate by joining a workspace. In most cases, an organization creates one Workspace. Anyone can sign up for Bito, create a workspace for their team, and invite their coworkers to join the Workspace.

Inviting coworkers to the Workspace

You can use Bito in a single-player mode for all the use cases. However, it works best when your coworkers join the Workspace to collaborate with Bito. There are three ways you can invite your coworkers.

Option 1 - Allow your work e-mail domain for the Workspace. This setting is turned on by default, and all users with the same e-mail domain as yours will automatically see the Workspace under "Pending Invitations" when signing up in Bito. You can manage this setting after you create the Workspace through the "Settings" page in your Bito account.

You may still need to notify your coworkers about Bito and share the Bito workspace URL. We don't send e-mails to your coworkers unless you invite them to the Workspace.

Option 2 - Invite your coworkers via e-mail when you create your Workspace or later from your workspace setting.

Option 3 - Share a web link specific to your Workspace via the channel of your choice: e-mail, Slack, or Teams. The link is automatically created and shown when creating a workspace or on the workspace settings page.

Managing coworkers in Workspace

If you are the Owner or Admin of the Workspace, you can take the following actions:

Deactivate User

Deactivate any user to remove them from the given Workspace. Once the user is deactivated, they can't access the workspace. They can request to join the Workspace, which requires approval from the admin or owner.

A deactivated user can be activated again by the admin/owner.

Activate User

Activate the user who was previously deactivated.

Force Reauthentication

A Bito user can check "Remember Me" to log in automatically. An Admin/Owner can force the user to re-authenticate if needed for security.

Change Access Level

Bito has primarily three user types - Owner, Admin, and User - as defined in the following page. Admins/Owners can change a user's access level.

The following Loom demonstrates managing the workspace and its members.

How to use?

Learn how to work with Bito CLI (including examples)

Prerequisites

Terminal

  • Bash (for Mac and Linux)

  • CMD (for Windows)

Using Bito CLI

Before you can use Bito CLI, you need to install and configure it. Once the setup is done, follow the steps below:

  • Execute Chat: Run bito command on command prompt to get started. Ask anything you want help with such as awk command to print first and last column.

  • Note: Bito CLI supports long prompts through multiline input. To complete and submit the prompt, press Ctrl+D. Enter/Return key adds a new line to the input.

  • Exit Bito CLI: To quit/exit from Bito CLI, type quit and press Ctrl+D.

  • Terminate: Press Ctrl+C to terminate Bito CLI.
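Because Ctrl+D simply signals end-of-input, you can also compose a multiline prompt in a file and feed it to Bito CLI on stdin. This non-interactive use is an assumption based on that behavior, not documented; verify it against your installed version:

```shell
# Write a multiline prompt to a file; interactively, Ctrl+D would submit it.
cat > prompt.txt <<'EOF'
awk command to print
first and last column
EOF

# Feeding the file on stdin is the scripted equivalent of typing the prompt
# and pressing Ctrl+D. Guarded so the sketch still runs without bito installed.
if command -v bito >/dev/null 2>&1; then
  bito < prompt.txt
else
  echo "bito CLI not found; prompt saved to prompt.txt"
fi
```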

Here is the complete list of .

Getting Started

Check out the video below to get started with Bito CLI.

Examples

Here are two examples for you to see My Prompt in action:

  1. How to Create Git Commit Messages and Markdown Documentation with Ease using Bito CLI My Prompt:

  2. How to generate test data using Bito CLI My Prompt:

Appearance settings

The IDE customization settings are accessible through the new toolbar dropdown menu titled "Extension Settings".

Light and Dark Themes

In Visual Studio Code and JetBrains IDEs, you can choose between a light or dark theme for the Bito panel to match your coding environment preference. For VS Code users, Bito also offers an adaptive theme mode in which the Bito panel and font colors automatically adjust based on your selected VS Code theme, creating a seamless visual experience.

You can set the desired theme through the Theme dropdown.

Theme Screenshots

“Always Light” Theme

“Always Dark” Theme

“Light” or “Dark” Theme - Matching IDE

“Adaptive” Theme

Theme adapted from “Noctis Lux”:

Theme adapted from “Solarized Light”:

Theme adapted from “Tomorrow Night Blue”:

Theme adapted from “barn-cat”:


Font Size Control

Take control of your code readability! Within the Bito extension settings, you can now adjust the font size for a comfortable viewing experience.

You can set the desired font size through the Font Size text field. However, if you check the Font Size (Match with IDE Font) checkbox, it will override the set font size with the Editor font size.

Available Keywords

agent_instance_url

The URL of the Agent instance provided after configuring the AI Code Review Agent with Bito Cloud.

agent_instance_secret

The secret key for the Agent instance obtained after configuring the AI Code Review Agent with Bito Cloud.

pr_url

URL of your pull request on GitLab, GitHub, or BitBucket.


Prompts

A prompt, in the simplest terms, is the initial input or instruction given to an AI model to elicit a response or generate content. It's the human touchpoint for machine intelligence, a cue that sets the AI's gears in motion.

Prompts are more than mere commands; they are the seeds from which vast trees of potential conversations and content grow. Think of them as the opening line of a story, the question in a quiz, or the problem statement in a mathematical conundrum – the prompt is the genesis of the AI's creative or analytical output.

For example, when you ask GPT-4o "What's the best way to learn a new language?" you've given it a prompt. The AI then processes this and generates advice based on its training data.

The Art of Prompt Engineering

Prompt engineering is a discipline in itself, evolving as an art and science within AI communities. Crafting effective prompts is akin to programming without code; it's about phrasing and framing your request to the AI in a way that maximizes the quality and precision of its output.

Good prompt engineering can involve:

  • Being specific: Clearly defining what you want the AI to do.

  • Setting the tone: Informing the AI of the style or mood of the content you expect.

  • Contextualizing: Providing background information to guide the AI's responses.

Example: Instead of saying, "Tell me about France," a well-engineered prompt would be, "Write a short travel guide for first-time visitors to France, highlighting top attractions, cultural etiquette, and local cuisine."

The Role of Prompts in Generative AI

Generative AI, which includes everything from text to image generation models, relies heavily on prompts to determine the direction of content creation. Prompts for generative AI act as a blueprint from which the model can conjure up entirely new pieces of content – whether that's an article, a poem, a piece of art, or a musical composition.

Prompts tell the AI not just what to create, but can also suggest how to create it, influencing creativity, tone, structure, and detail. As generative AI grows more sophisticated, the potential for complex and nuanced prompts increases, allowing for more customized and high-fidelity outputs.

Example: Prompting an AI with "Create a poem in the style of Edgar Allan Poe about the sea" instructs the model to adopt a specific literary voice and thematic focus.

Challenges and Considerations

Crafting the perfect prompt isn't always straightforward. One of the challenges lies in the AI's interpretation of the prompt. Ambiguity can lead to unexpected or unwanted results, while overly restrictive prompts may stifle the AI's creative capabilities.

Moreover, ethical considerations arise when prompts are designed to elicit biased or harmful content. The AI's response is contingent upon its training data, and if that data includes prejudiced or false information, the output may reflect those biases. Responsible prompt engineering thus also involves an awareness of potential harm and the implementation of safeguards against it.

Example: To avoid bias in AI-generated news summaries, prompts should be engineered to require neutrality and fact-checking.

Conclusion

Prompts are the simple commands or questions we use to kickstart a conversation with AI, guiding it to understand and generate the responses or content we seek. They're like the steering wheel for the AI's capabilities, crucial for navigating the vast landscape of information and creativity the AI models offer.

As we continue to interact with and shape AI technology, mastering the use of prompts becomes our way of ensuring that the conversation flows in the right direction. Simply put, the better we become at asking, the better AI gets at answering.

So, the next time you interact with a language model, remember that the quality of the output is often a direct reflection of your input - your prompt is the key.

Vim/Neovim Plugin

Vim/Neovim plugin for Bito using Bito CLI

We are excited to announce that one of our users has developed a dedicated Vim and Neovim plugin for Bito, integrating it seamlessly with your favorite code editor. This plugin enhances your coding experience by leveraging the power of Bito's AI capabilities directly within Vim and Neovim.

Installation

To get started with "vim-bitoai," follow these steps:

Step 1: Install Bito CLI

Make sure you have Bito CLI installed on your system. If you haven't installed it, you can find detailed instructions in the Bito CLI repository at https://github.com/gitbito/CLI.

Step 2: Install the Plugin

Open your terminal and navigate to your Vim or Neovim plugin directory. Then, clone the "vim-bitoai" repository using the following command:

git clone https://github.com/zhenyangze/vim-bitoai.git

Step 3: Configure the Plugin

Open your Vim or Neovim configuration file and add the following lines:

" Vim Plug
Plug 'zhenyangze/vim-bitoai'

" NeoBundle
NeoBundle 'zhenyangze/vim-bitoai'

" Vundle
Plugin 'zhenyangze/vim-bitoai'

Save the configuration file and restart your editor or run :source ~/.vimrc (for Vim) or :source ~/.config/nvim/init.vim (for Neovim) to load the changes.

Step 4: Verify the Installation

Open Vim or Neovim, and you should now have the "vim-bitoai" plugin installed and ready to use.

Usage

Once you have installed the "vim-bitoai" plugin, you can use its powerful features. Here are some of the available commands:

  • BitoAiGenerate: Generates code based on a given prompt.

  • BitoAiGenerateUnit: Generates unit test code for the selected code block.

  • BitoAiGenerateComment: Generates comments for methods, explaining parameters and output.

  • BitoAiCheck: Performs a check for potential issues in the code and suggests improvements.

  • BitoAiCheckSecurity: Checks the code for security issues and provides recommendations.

  • BitoAiCheckStyle: Checks the code for style issues and suggests style improvements.

  • BitoAiCheckPerformance: Analyzes the code for performance issues and suggests optimizations.

  • BitoAiReadable: Organizes the code to enhance readability and maintainability.

  • BitoAiExplain: Generates an explanation for the selected code.

To execute a command, follow these steps:

  1. Open a file in Vim or Neovim that you want to work on.

  2. Select the code block you want to act on. You can use visual mode or manually specify the range using line numbers.

  3. Execute the desired command by running the corresponding command in command mode. For example, to generate code based on a prompt, use the :BitoAiGenerate command. Note: Some commands may prompt you for additional information or options.

  4. The plugin will communicate with the Bito CLI and execute the command, providing the output directly within your editor.

By leveraging the "vim-bitoai" plugin, you can directly harness the power of Bito's AI capabilities within your favorite Vim or Neovim editor. This integration lets you streamline your software development process, saving time and effort in repetitive tasks and promoting efficient coding practices.

Customization

The "vim-bitoai" plugin also offers customization options tailored to your specific needs. Here are a few variables you can configure in your Vim or Neovim configuration file:

  • g:bito_buffer_name_prefix: Sets the prefix for the buffer name in the Bito history. By default, it is set to 'bito_history_'.

  • g:vim_bito_path: Specifies the path to the Bito CLI executable. If the Bito CLI is not in your system's command path, you can provide the full path to the executable.

  • g:vim_bito_prompt_{command}: Allows you to customize the prompt for a specific command. Replace {command} with the desired command.

To define a custom prompt, add the following line to your Vim or Neovim configuration file and replace your prompt with the desired prompt text:

if !exists("g:vim_bito_prompt_{command}")
    let g:vim_bito_prompt_{command}="your prompt"
endif

Remember to restart your editor or run the appropriate command to load the changes.

We encourage you to explore the "vim-bitoai" plugin and experience the benefits of seamless integration between Bito and your Vim or Neovim editor. Feel free to contribute to the repository or provide feedback to help us further improve this plugin and enhance your coding experience.

Supported tools

Bito Wingman seamlessly integrates with various tools such as Jira, Linear, Confluence, and more.

Below is a list of developer tools available to Wingman. Each tool comes with unique parameters and capabilities, and Wingman is ready to assist you in configuring and using them effectively.

To configure a tool, simply ask Wingman, "How do I configure [Tool Name]?"

Learn more

| Tool | Description | Supported/integrated |
| --- | --- | --- |
| Jira | Issue tracking and project management tool | YES |
| Linear | Issue tracking and project management tool | YES |
| Confluence | Content management tool | YES |
| Shell/CLI | System command execution tool | YES |
| Web search | Web content retrieval and processing tool | YES |
| File operations | File system manipulation tool (CRUD operations) | YES |
| File search | File/directory search utility with pattern matching | YES |
| Read chunk | File reading utility for handling large files | YES |
| System info | System diagnostic tool for hardware/OS info | YES |
| Location info | Geolocation service based on IP address | YES |
| Weather info | Weather data service for locations | YES |
| Code symbol search | Pattern-based code search utility (like grep) | YES |
| Python code analyzer | Static code analysis tool for Python files | YES |

Prompt engineering

Prompt Engineering is the art and science of crafting inputs (prompts) that guide AI to produce the desired outputs. It's about understanding how to communicate with an AI in a way that leverages its capabilities to the fullest. Think of it as giving directions to a supremely intelligent genie without any misunderstandings.

In Bito’s backend, we do a lot of prompt engineering to ensure that you always receive accurate outputs.

Why is it Important?

Generative AI models, like OpenAI's GPT series, are revolutionizing industries from content creation to coding. But their utility hinges on the quality of the prompts they receive. A well-engineered prompt can yield rich, accurate, and nuanced responses, while a poor one can lead to irrelevant or even nonsensical answers.

The Anatomy of a Good Prompt

Clarity and Specificity

AI models are literal. If you ask for an article, you'll get an article. If you ask for a poem about dogs in space, you’ll get exactly that. The specificity of your request can significantly alter the output.

Example:

  • Vague Prompt: Write about health.

  • Engineered Prompt: Write a comprehensive guide on adopting a Mediterranean diet for improving heart health, tailored for beginners.

Contextual Information

Providing context helps the AI understand the nuance of the request. This could include tone, purpose, or background information.

Example:

  • Without Context: Explain quantum computing.

  • With Context: Explain quantum computing in simple terms for a blog aimed at high school students interested in physics.

Closed vs. Open Prompts

Closed prompts lead to specific answers, while open prompts allow for more creativity. Depending on your goal, you may need one over the other.

Example:

  • Closed Prompt: What is the capital of France?

  • Open Prompt: Describe a day in the life of a Parisian.

The Practice of Prompt Engineering

Prompt engineering is not a "get it right the first time" kind of task. It involves iterating prompts based on the responses received. Tweaking, refining, and even overhauling prompts based on output can lead to more accurate and relevant results.

A significant part of prompt engineering is experimentation. By testing different prompts and studying the outputs, engineers learn the nuances of the AI's language understanding and generation capabilities.

Keywords are the bread and butter of prompt engineering. Identifying the right keywords can steer the AI in the desired direction.

Example:

  • Without Keyword Emphasis: Write about the internet.

  • With Keyword Emphasis: Write an article focused on the evolution of internet privacy policies.

Advanced Techniques

Chain of Thought Prompts

These prompts mimic a human thought process, providing a step-by-step explanation that leads to an answer or conclusion. This can be especially useful for complex problem-solving.

Example:

  • Chain of Thought Prompt: To calculate the gravitational force on an apple on Earth, first, we determine the mass of the apple and the distance from the center of the Earth...

Zero-Shot and Few-Shot Learning

In zero-shot learning, the AI is given a task without previous examples. In few-shot learning, it’s provided with a few examples to guide the response. Both techniques can be leveraged in prompt engineering for better results.

Example:

  • Zero-Shot Prompt: What are five innovative ways to use drones in agriculture?

  • Few-Shot Prompt: Here are two ways to use drones in agriculture: 1) Crop monitoring, 2) Automated planting. List three more innovative ways.

Ethical Considerations and Limitations

  • Bias and Sensitivity: Prompt engineers must be mindful of inherent biases and ethical considerations. This includes avoiding prompts that could lead to harmful outputs or perpetuate stereotypes.

  • Realistic Expectations: LLMs and Generative AI are powerful but not omnipotent. Understanding their limitations is crucial in setting realistic expectations for what prompt engineering can achieve.

  • Data Privacy and Security: As prompts often contain information that may be sensitive, engineers must consider data privacy and security in their designs.

Conclusion

Prompt engineering is more than a technical skill—it’s a new form of linguistic artistry. As we continue to integrate AI into our daily lives, becoming adept at communicating with these systems will become as essential as coding is today.

Whether you’re a writer, a developer, or just an AI enthusiast, mastering the craft of prompt engineering will place you at the forefront of this exciting conversational frontier. So go ahead, start crafting those prompts, and unlock the full potential of your AI companions.

Large Language Models (LLM)

Large Language Models (LLMs) are advanced AI algorithms trained to understand, generate, and sometimes translate human language. They are called “large” for a good reason: they consist of millions or even billions of parameters, which are the fundamental data points the model uses to make predictions and decisions.

How Do LLMs Work?

Imagine teaching a child language by reading every book you can find. That’s essentially what LLMs go through. They are fed vast amounts of text data and use statistical methods to find patterns and learn from context. Through a process known as machine learning, these models become adept at predicting the next word in a sentence, answering questions, summarizing texts, and more.

The Training Regime

Data, Data, and More Data: LLMs are the heavyweight champions of the data world. They are trained on diverse datasets comprising encyclopedias, books, articles, and websites to learn a wide range of language patterns and concepts.

Supervised and Unsupervised Learning: Some LLMs learn through supervised learning, meaning they learn from datasets that have been labeled or corrected by humans. Others use unsupervised learning, meaning they infer patterns and rules from raw data without human annotation.

Fine-Tuning: After the initial training, LLMs can be fine-tuned for specific tasks, like legal document analysis or medical diagnosis, by training them further on specialized data.

Applications of LLMs

Writing Assistance: Grammarly or the autocomplete in your email are powered by LLMs. They predict what you’re trying to say and help you say it better.

Chatbots: If you've ever chatted with Bito and noticed that it sounds almost like a real person, that's because it is powered by several state-of-the-art Large Language Models.

Translation Services: Services like Google Translate use LLMs to convert text from one language to another, learning from vast amounts of bilingual text to improve their accuracy.

The Magic Behind the Scenes

Neural Networks: The core technology behind LLMs is artificial neural networks, particularly a type called Transformer models. These mimic some aspects of human brain function and are particularly good at handling sequential data like text.

Training Hardware: Training LLMs requires significant computational power, often involving hundreds of GPUs or specialized TPUs that work in tandem for weeks or months.

Continuous Learning: LLMs don’t stop learning after their initial training. They can continue to learn from new data, improving their performance over time.

Examples of Large Language Models

GPT Series by OpenAI

The GPT series by OpenAI has been a trailblazer in the field of LLMs. Each version of the Generative Pre-trained Transformer has been more powerful than the last, with GPT-4o as a staggering leap forward. Boasting over 200 billion parameters, this model is not just about size; it’s about the nuanced understanding and generation of human-like text. GPT-4o can craft essays that are indistinguishable from those written by humans, compose complex poetry, and even generate functional computer code across several languages, which is a testament to its versatility.

GPT-4o's influence extends across industries. For instance, it can simulate conversations, create educational content, and even assist programmers by converting natural language descriptions into code snippets. Its advanced capabilities are being integrated into various software applications and tools, enhancing productivity and sparking creative new approaches to problem-solving.

BERT by Google

BERT stands for Bidirectional Encoder Representations from Transformers. It's a complicated name, but really, it's just Google's method for making search engines smarter. Unlike earlier models, BERT examines the context of a word in both directions (left and right of the word) within a sentence, leading to a far more nuanced interpretation of the query. This ability means that BERT can grasp the full intent behind your searches, so the results you get are closer to what you actually need.

Since its integration into Google's search engine, BERT has significantly improved the relevance of results for millions of queries every day. For users, this often translates to finding answers more quickly and accurately, sometimes in subtle ways that may go unnoticed but are nonetheless powerful. Beyond search, BERT is also revolutionizing natural language processing tasks such as language translation, question answering, and text summarization.

In summary, both the GPT series and BERT are not just steps but giant leaps forward in our ability to interface with machines in a more natural, intuitive way. They are redefining what's possible in the realm of AI and continuing to pave the way for smarter, more responsive technology.

Ethical Considerations and Challenges

Bias in AI: Since LLMs learn from existing data, they can perpetuate and amplify biases present in that data. It’s an ongoing challenge to ensure that LLMs are fair and unbiased.

Privacy: Training LLMs on personal data raises privacy concerns. Ensuring data used is anonymized and secure is paramount.

Environmental Impact: The energy consumption of training and running LLMs is significant. Researchers are working on more efficient models to mitigate this.

The Future of LLMs

Evolving Intelligence: LLMs are getting more sophisticated, with future models expected to handle more complex tasks and exhibit greater understanding of human language.

Interdisciplinary Integration: LLMs are beginning to intersect with other fields, such as bioinformatics and climatology, providing unique insights and accelerating research.

Democratization of AI: As LLMs become more user-friendly, their use is expanding beyond tech companies to schools, small businesses, and individual creators.

Conclusion

Large Language Models are transforming how we interact with machines, making them more human-like than ever. They're a blend of colossal data, computing power, and intelligent algorithms, pushing the boundaries of what machines can understand and accomplish. As they evolve, LLMs will continue to shape our digital landscape in unpredictable and exciting ways. Just remember, the next time you type out a sentence and your phone suggests the end of it, there’s a little bit of LLM magic at work.

Key features

Explore the powerful capabilities of the Bito Wingman.

Advanced AI planning and reasoning capabilities

Wingman excels at understanding high-level instructions and breaking them into actionable steps. Unlike traditional assistants, it plans and executes tasks, making it an invaluable partner for complex projects.

The more detailed and specific your instructions, the better the results Wingman can deliver. You can also iterate with Wingman as the project evolves, refining its output step by step to meet your exact requirements.

Example use case: Provide a detailed prompt like:

Create an API for user authentication and integrate it into my backend. Please review my code thoroughly to suggest the key interfaces that should be created. Besides normal user registration and authentication, also include token management capabilities and risk-based scoring mechanisms to help alert us if a user might be trying to breach the system.

Contextual/focused research capabilities

When tasked with challenges requiring additional information, Wingman conducts targeted research to gather relevant details and context for your project.

Example use case: If you're building a feature but need to confirm industry-standard practices, Wingman will gather up-to-date information and incorporate it into the solution.

Seamless web browsing and information gathering

Wingman can browse the web autonomously to find and retrieve useful data. This feature ensures that your projects benefit from the latest tools, libraries, or guidelines available.

Example use case: If you ask Wingman to implement a feature using a cutting-edge library, it will search for the library, understand its documentation, and integrate it into your code.

Code writing in multiple programming languages

Wingman can generate high-quality code across a variety of programming languages. It understands your project’s requirements and provides context-aware solutions tailored to your tech stack.

Example use case: Ask Wingman to write a function in Python, JavaScript, or another language—it will deliver optimized and functional code.

Integration with popular tools

Wingman integrates seamlessly with tools you already use, including:

  • Version control: Support for Git operations like git push, git commit, and git clone for GitHub and GitLab workflows.

  • Project management: Jira, Linear

  • Documentation: Confluence

  • File operations: Manage and update files directly.

This integration ensures that Wingman fits naturally into your existing workflow.

Example use case: Assign a Jira ticket to Wingman, and it will complete the associated coding task, update the ticket, and link it to the appropriate pull request.

Extensible architecture

Wingman is designed with flexibility in mind, allowing you to easily integrate it with tools that fit your workflow. Its adaptable architecture ensures it evolves with your development needs.

Example use case: If your team starts using a new project management tool, Wingman’s flexibility lets you integrate it into your process effortlessly.

Have a specific tool in mind? Drop us a note at [email protected] to request adding support for it.

Task completion made easy

Wingman is designed to assist you in completing tasks efficiently, working alongside you and checking in as needed. Once provided with clear instructions, it handles everything from planning to execution while keeping you in the loop.

Example use case: Wingman can help implement a feature, document it in Confluence, and create a pull request—keeping you informed every step of the way.

Natural language interaction

Communicate with Wingman in any language through a chat interface. Describe what you need, and Wingman will take care of the rest. Additionally, you can set your preferred AI output language on the Advanced settings page. For example, if you set Spanish as your preferred language, Wingman will respond to you in Spanish.

Example use case: Type "Generate a REST API for user management and write tests for it" in the chat, and Wingman will handle the implementation and testing.


AI Code Completions

Get real-time suggestions from Bito as you type or through code comments

Learn how to enable or disable AI Code Completions

Effortlessly use AI Code Completions with your keyboard

Seamless integration with your coding workflow

Supporting over 35 programming languages such as Python, SQL, C++, Go, JavaScript, and more

Discover real-world applications of AI Code Completions

Overview
How it works?
Enable/disable settings
Keyboard shortcuts
Accept/reject suggestions
Supported programming languages
Use cases and examples

Bito CLI (Command Line Interface)

Learn how to setup Bito CLI on your device (Mac, Linux, and Windows)

Manage Bito CLI settings

Learn how to work with Bito CLI (including examples)

Learn about all the powerful commands to use Bito CLI

Answers to popular questions

Overview
Install or uninstall
Configuration
How to use?
Available commands
FAQs

Prerequisites

Key requirements for self-hosting the AI Code Review Agent.

Minimum System Requirements

A machine with the following minimum specifications is recommended for Docker image deployment and for obtaining optimal performance of the AI Code Review Agent.

  • CPU Cores: 4

  • RAM: 8 GB

  • Hard Disk Drive: 80 GB


Supported Operating Systems

  • Windows

  • Linux

  • macOS


OS Prerequisites


Linux

You will need:

  1. Bash (minimum version 4.x)

    • For Debian and Ubuntu systems

      sudo apt-get install bash

    • For CentOS and other RPM-based systems

      sudo yum install bash

  2. Docker (minimum version 20.x)

macOS

You will need:

  1. Bash (minimum version 4.x)

    brew install bash

  2. Docker (minimum version 20.x)

Windows

You will need:

  1. PowerShell (minimum version 5.x)

    • Note: In PowerShell version 7.x, run the Set-ExecutionPolicy Unrestricted command. This allows the execution of scripts without any constraints, which is essential for running scripts that are otherwise blocked by default security settings.

  2. Docker (minimum version 20.x)


Required Access Tokens

  • Bito Access Key: Obtain your Bito Access Key. View Guide

  • GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently. View Guide

  • GitLab Personal Access Token: For GitLab PR code reviews, a token with API access is required. View Guide

  • Snyk API Token (Auth Token): For Snyk vulnerability reports, obtain a Snyk API Token. View Guide

Install/run via CLI

CLI mode is best suited for immediate, one-time code reviews.

  1. Prerequisites: Before proceeding, ensure you've completed all necessary prerequisites for self-hosted AI Code Review Agent.

  2. Start Docker: Ensure Docker is running on your machine.

  3. Repository Download: Download the AI Code Review Agent GitHub repository to your machine.

  4. Extract and Navigate:

  • Extract the downloaded .zip file to a preferred location.

  • Navigate to the extracted folder and then to the “cra-scripts” subfolder.

  • Note the full path to the “cra-scripts” folder for later use.

  5. Open Command Line:

    • Use Bash for Linux and macOS.

    • Use PowerShell for Windows.

  6. Set Directory:

    • Change the current directory in Bash/PowerShell to the “cra-scripts” folder.

    • Example command: cd [Path to cra-scripts folder]

    • Adjust the path based on your extraction location.

  7. Configure Properties:

    • Open the bito-cra.properties file in a text editor from the “cra-scripts” folder. Detailed information for each property is provided on Agent Configuration: bito-cra.properties File page.

    • Set mandatory properties:

      • mode = cli

      • pr_url

      • bito_cli.bito.access_key

      • git.provider

      • git.access_token

Note: Valid values for git.provider are GITHUB, GITLAB, or BITBUCKET.
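Putting the mandatory properties together, a minimal configuration sketch for CLI mode might look like the following. The PR URL and token values are placeholders, not real credentials:

```properties
# bito-cra.properties (minimal CLI-mode sketch)
mode = cli
pr_url = https://github.com/<org>/<repo>/pull/<number>
bito_cli.bito.access_key = <YOUR_BITO_ACCESS_KEY>
git.provider = GITHUB
git.access_token = <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN>
```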

  • Optional properties (can be skipped or set as needed):

    • git.domain

    • code_feedback

    • static_analysis

    • dependency_check

    • dependency_check.snyk_auth_token

    • review_scope

    • exclude_branches

    • exclude_files

    • exclude_draft_pr

Note: Detailed information for each property is provided on Agent Configuration: bito-cra.properties File page.

Check the Required Access Tokens guide to learn more about creating the access tokens needed to configure the Agent.

  8. Run the Agent:

    • On Linux/macOS in Bash: Run ./bito-cra.sh bito-cra.properties

    • On Windows in PowerShell: Run ./bito-cra.ps1 bito-cra.properties

This step might take time initially as it pulls the Docker image and performs the code review.

  9. Final Steps:

    • The script may prompt for values of mandatory/optional properties if they are not preconfigured.

    • Upon completion, a code review comment is automatically posted on the Pull Request specified in the pr_url property.

Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please contact support.

Screenshots

Screenshot # 1

AI-generated pull request (PR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a pull request.


Screenshot # 3

AI code review feedback posted as comments on the pull request.

FAQs

Answers to Popular Questions

How many repositories can Bito index?

Bito can index unlimited repositories for workspaces that have subscribed to our Team Plan. This feature is also coming soon for our Free Plan, but it will be limited to a maximum indexable repository size of 10MB.

Why is the index creation taking a long time?

Bito takes time to thoroughly read your entire repository and understand it. This is completely normal. If your repository is large, it can take several hours to fully index.

Bito usually takes around 12 minutes per 10MB of code to understand your repo.

Why is the answer not complete?

There is a limit on the amount of memory/context that can be used at a time to answer a question, so answers may not always cover all the code. To work around this, narrow your questions by providing additional criteria, like:

  • In my code explain message_tokens in ai/request.js

Where can I see the status of my Index?

Open your project in VS Code or JetBrains IDEs. From the Bito plugin pane, click the laptop icon located in the top-right corner.

On this tab, you will see the status of your current project as well as the status of any other project that you indexed previously.

List of Indexing Statuses:

  • Not Indexed: A new project that you have not started indexing yet.

  • Indexing in progress: A project that is currently being indexed.

  • Indexing is paused: A project that was previously being indexed but is now paused for some reason. Generally, if you close the IDE while the project is being indexed, its status will change from "Indexing in progress" to "Indexing is paused".

  • Indexed: A project that has already been indexed, and Bito is ready to answer any questions about it.

What happens if my IDE got closed while indexing is in progress?

If you close Visual Studio Code or a JetBrains IDE (e.g., PyCharm) while indexing is in progress, don't worry. The indexing will be paused and will automatically continue from where it left off when you reopen the IDE. Currently, indexing resumes 5-10 minutes after reopening the IDE.

How to delete project index from IDE?

  1. To delete an index, navigate to the "Manage repos" tab.

  2. Next, click on the three dots button located in front of your project’s name, and then select the "Delete" option.

  3. A warning popup box will appear at the bottom of Bito's plugin pane. You can choose to click the "Delete" button to remove the project's index from your system, or click the "Cancel" button to go back.

How to fix indexing issues in Visual Studio Code and JetBrains IDEs (e.g., IntelliJ IDEA, PyCharm, etc.)?

Before getting started, please ensure that you have allowed your project sufficient time to be indexed. Bito typically requires approximately 12 minutes for every 10MB of code to understand your repository.

If for some reason you are struggling to index your project’s folder while using Visual Studio Code or JetBrains IDEs, then follow the below steps to delete the folder that contains all the indexes and try to re-index your project.

  1. Close all JetBrains IDEs and VS Code instances where Bito is installed.

  2. Go to your users directory. For example, on Windows it will be something like C:\Users\<your username>

  3. Now, find .bito folder and delete it. (Note: All configuration settings and project indexes created by Bito will be deleted. You will also be logged out from Bito IDE plugin)

If Windows is installed on a drive other than “C”, you will need to locate the .bito folder on that drive instead.

  4. Once you have deleted the .bito folder, open your project in the IDE again.

  5. After restarting the IDE, you will need to enter your email ID and a 6-digit code to log in. Once you're logged in, select the workspace that has an active paid subscription.

  6. After that, when Bito asks if you wish to index the folder, you can select "Maybe later".

  7. Then, navigate to the "Manage repos" tab in the Bito plugin pane, where you should see the folder name listed under the "Current project" along with its size, indicating that it is not indexed. Since you have deleted the .bito folder, the "Other projects" section will no longer display any entries.

  8. Finally, click on "Start Indexing" and it should begin indexing the folder.

For testing purposes, we suggest using a small folder, and avoid switching folders in the IDE until indexing is completed and the folder icon turns green.

By the way, you can continue using Bito while indexing is in progress in the background.

Available commands

Invoke the AI Code Review Agent manually or within a workflow.

The AI Code Review Agent offers a suite of commands tailored to developers' needs. You can manually trigger a code review by entering any of these commands in the comment box below a pull/merge request on GitHub, GitLab, or Bitbucket and submitting the comment. Alternatively, if you are using the self-hosted version, you can configure these commands in the bito-cra.properties file for automated code reviews.

It may take a few minutes to get the code review posted as a comment, depending on the size of the pull/merge request.

/review

This command provides a broad overview of your code changes, offering suggestions for improvement across various aspects, but without diving deep into secure coding, performance optimization, or scalability improvements. This makes it ideal for catching general code quality issues that might not be critical blockers but can enhance readability, maintainability, and overall code health.

Think of it as a first-pass review to identify potential areas for improvement before delving into more specialized analyses.

Review Scope

Five specialized commands are available to perform detailed analyses on specific aspects of your code. Details for each command are given below.

  1. /review security

  2. /review performance

  3. /review scalability

  4. /review codeorg

  5. /review codeoptimize

You can provide comma-separated values to perform multiple types of code analysis simultaneously.

Example: /review performance,security,codeoptimize

Combining general feedback with specialized review scopes

If you'd like to receive general code quality feedback alongside specialized analyses, include the general keyword in your review command.

For example, to receive feedback on general code quality, performance, and security, use:

  • Example: /review general,performance,security

This ensures a holistic review encompassing both general code quality and specific areas of concern.

/review security

This command performs an in-depth analysis of your code to identify vulnerabilities that could allow attackers to steal data, gain unauthorized access, or disrupt your application. This includes checking for weaknesses in input validation, output encoding, authentication, authorization, and session management. It also looks for proper encryption of sensitive data, secure coding practices, and potential misconfigurations that could expose your system.

/review performance

This command evaluates the current performance of the code by pinpointing slow or resource-intensive areas and identifying potential bottlenecks. It helps developers understand where the code may be underperforming against expected benchmarks or standards. It is particularly useful for identifying slow processes that could benefit from further investigation and refinement.

This includes checking how well your code accesses data and manages tasks like database interactions and memory usage.

/review scalability

This command analyzes your code to identify potential roadblocks to handling increased usage or data. It checks how well the codebase supports horizontal scaling and whether it is compatible with load balancing strategies. It also ensures the code can handle concurrent requests efficiently and avoids bottlenecks from single points of failure. The command further examines error handling and retry mechanisms to promote system resilience under pressure.

/review codeorg

This command scans your code for readability, maintainability, and overall clarity. This includes checking for consistent formatting, clear comments, well-defined functions, and efficient use of data structures. It also looks for opportunities to reduce code duplication, improve error handling, and ensure the code is written for future growth and maintainability.

/review codeoptimize

This command helps identify specific parts of the code that can be made more efficient through optimization techniques. It suggests refactoring opportunities, algorithmic improvements, and areas where resource usage can be minimized. This command is essential for enhancing the overall efficiency of the code, making it faster and less resource-heavy.

Control code review workflow

These commands allow you to manage the AI Code Review Agent's behavior directly within your pull requests across GitHub, GitLab, and Bitbucket.

/pause

Pauses automatic AI reviews on the current pull request.

Use case: Useful when significant changes are underway, and you want to prevent the AI from reviewing incomplete code.

Example: Add a comment with /pause to the pull request.

/resume

Resumes the automatic AI reviews that were previously paused on the pull request.

Use case: Once your code changes are ready for review, use this command to re-enable the AI's automatic analysis.

Example: Add a comment with /resume to the pull request.

/resolve

Marks all Bito-posted review comments as resolved.

Use case: After addressing the issues highlighted by the AI, use this command to clean up the comment threads.

Example: Add a comment with /resolve to the pull request.

Note: The /resolve command is currently supported only in GitLab and Bitbucket.

/abort

Cancels all in-progress AI code reviews on the current pull request.

Use case: If an AI review is no longer needed or was initiated by mistake, this command stops the process.

Example: Add a comment with /abort to the pull request.

Display Code Review in a Single Post

By default, the /review command generates inline comments, placing code suggestions directly beneath the corresponding lines in each file for clearer guidance on improvements. If you prefer a single consolidated code review instead of separate inline comments, use the #inline_comment parameter and set its value to False.

Example: /review #inline_comment=False

Example: /review scalability #inline_comment=False

Note: The /review command defaults to #inline_comment=True, so you can omit this parameter when its value is True.

Getting started

Learn how to use Bito Wingman.

Prerequisites

To get started with Bito Wingman, ensure the following requirements are met:

  1. Bito Team Plan:

    • Subscribe to the Bito Team Plan to unlock Bito Wingman along with all other premium features. For detailed pricing information, visit our Pricing page.

  2. Install or update the VS Code extension:

    • Ensure you have Bito v1.4.7 or later installed in your editor.

    • The Bito Wingman will download automatically after you install or update the Bito IDE extension.

    • Once the download is complete, Wingman will prepare itself, and you'll be ready to use it.

How to use Bito Wingman?

Bito Wingman can be used in the following ways:

Using Bito Wingman via IDE

  1. Open Bito Wingman:

    • Open Bito in your IDE and click on "Launch Bito Wingman" button from the Bito panel.

  2. Start a session:

    • In the Wingman window, type your instructions in the chatbox and submit them.

  3. Set or change the working directory:

    • Default Behavior:

      • If no project is open in the IDE, Wingman defaults to your home directory.

    • Set working directory:

      • Click "Select specific directory" to choose a working directory for a new session.

    • Change working directory:

      • For existing sessions, click the edit icon next to the folder path at the top of the Wingman screen.

      • Enter the complete path to your desired directory.

Note: The working directory determines the context Wingman uses and where it performs file operations.

  4. Manage sessions:

    • All active Wingman sessions are listed in the left sidebar.

    • You can run multiple sessions simultaneously, and Wingman will manage them in the background.

Using Bito Wingman via CLI

This section explains how to run Bito Wingman from the command line. The prerequisites are the same as above, and the CLI binary is installed automatically with the Bito IDE extension.

  1. Locating the executable:

    • After installation, the Bito Wingman binary is located at:

      • <User home directory>/.bitowingman/bin

    • The executable file is named with the version number and target platform. For example:

      • macOS: bitowingman-1.0.9-darwin-arm64

      • Windows: bitowingman-1.0.9-win32-x64.exe

Note: When Bito Wingman is updated, the executable name (and version number) changes. Before executing the commands below, please verify the executable name on your machine (typically located in <User home directory>/.bitowingman/bin) and update your command to match the current file name.

  2. CLI usage modes:

    Bito Wingman supports two modes for interacting via the CLI:

    • Interactive mode (recommended): Interactive mode provides a chat-like interface for real-time command execution.

      • On macOS: ~/.bitowingman/bin/bitowingman-1.0.9-darwin-arm64 -i

      • On Windows (PowerShell): & "$env:USERPROFILE\.bitowingman\bin\bitowingman-1.0.9-win32-x64.exe" -i

      Note: After launching interactive mode, type help and press Enter to view the list of supported commands.

    • Non-interactive mode: Non-interactive mode allows you to execute a command directly and receive the results without entering a full session.

      • On macOS: ~/.bitowingman/bin/bitowingman-1.0.9-darwin-arm64 "run git diff and summarize the changes"

      • On Windows (PowerShell): & "$env:USERPROFILE\.bitowingman\bin\bitowingman-1.0.9-win32-x64.exe" "run git diff and summarize the changes"

Configuring tools in Wingman

Bito Wingman seamlessly integrates with various tools such as Jira, Linear, Confluence, and more. Click the "Tools" button in the top-right corner of the Wingman screen to view all supported tools.

  1. To configure a tool, simply ask Wingman, "How do I configure [Tool Name]?"

  2. Wingman will provide detailed step-by-step instructions. Follow the instructions to complete the configuration process.

  3. If a tool requires an API token, Wingman will guide you through the process of obtaining it. Once you provide the token, Wingman will handle the configuration automatically.

Note: All credentials are securely stored locally in an encrypted format to ensure your data remains private.

Installing on Visual Studio Code

It takes less than 2 minutes

Get up and running with Bito in just a few steps! Bito seamlessly integrates with Visual Studio Code, providing powerful AI-driven coding assistance directly within your editor. Click the button below to quickly install the Bito extension and start optimizing your development workflow with context-aware AI chat, code completions, and more.

Video guide

Watch the video below to learn how to download the Bito extension on VS Code.

Step-by-step instructions

  1. In Visual Studio Code, go to the extension tab and search for Bito.

  2. Install the extension. We recommend you restart the IDE after the installation is complete.

Starting with Bito version 1.3.4, the extension is only supported on VS Code versions 1.72 and higher. Bito does not support VS Code versions below 1.72, and earlier versions of Bito do not function properly on those older versions. Additionally, while Bito is supported on VS Code versions 1.72 and above, some features in Bito only work on VS Code version 1.80 and higher.

  3. After a successful install, the Bito logo appears in the Visual Studio Code pane.

  4. Click the Bito logo to launch the extension and complete the setup process. You will either need to create a new workspace, if you are the first in your company to install Bito, or join an existing workspace created by a co-worker.

Visual Studio Code Marketplace Link

Setup Bito extension in VS Code running through SSH

SSH (Secure Shell) is a network protocol that securely enables remote access, system management, and file transfer between computers over unsecured networks.

The Visual Studio Code IDE allows developers to remotely access and collaborate on projects from any connected machine. The corresponding extension, Remote - SSH, must be installed in the host machine's Visual Studio Code IDE to utilize this feature.

The Bito VS Code extension seamlessly integrates with Remote development via SSH, allowing developers to utilize Bito features and capabilities on their remote machines.

Remote SSH connection and setup

Please follow the instructions given in the links below:

Video Guide:

Setup Bito extension in VS Code running through WSL

Running VS Code on WSL allows developers to work in a Linux-like environment directly from Windows. This setup lets you take advantage of the development experience of both operating systems.

WSL provides access to Linux command-line tools, utilities, and applications, to enhance productivity and streamline the development process.

This setup ensures a consistent development environment across different systems, making it easier to develop, test, and deploy applications that will run on Linux servers.

WSL connection and setup

Please follow the instructions given in the links below:

Video Guide:

Standard templates

Generating the best possible response is as much science as art. The AI models Bito is built on, the same technology behind ChatGPT, handle the science part. Crafting a good prompt is the art part. The Templates in Bito take the burden of being crafty off your shoulders. You can select a piece of code and use one of the eight prompts, whether you want to check the code for performance or add error handling. The behind-the-scenes actor, the Bito Prompt Manager, crafts a well-versed prompt that squeezes the best response out of the machine. You can also save your favorite prompts for quick access anytime.

Templates are accessible:

1- In the Bito panel

Click the Templates button to expand or collapse the Templates menu.

2- Using Slash / Command in Bito Chat Box

Type a forward slash / right at the start in the Bito chat box. Once you do, the template menu will open from where you can quickly select and use the template you want.

Want to narrow down your choices? Simply start typing after the / slash, and it'll only show you templates that match your words. And hey, you can also use the arrow keys, or Tab and Shift + Tab, to navigate the templates menu.

3- From the Code Editor's Context Menu

Select code, right click, and click Bito AI to access shortcuts

4- Through the Command Palette in Visual Studio Code

Go to View -> Command Palette -> Type "Bito" to access the templates

The following Loom demonstrates Standard Templates in Bito:

Bito includes the following standard templates out of the box.


Available commands

Learn about all the powerful commands to use Bito CLI

Help

Run any one of the below commands.

bito --help

or

bito config --help

Check Bito CLI Version

Run any one of the below commands to print the version number of Bito CLI installed currently.

bito -v

or

bito --version

Bito CLI MyPrompt (Automation using Bito CLI)

The below commands can help you automate repetitive tasks like software documentation, test case generation, writing pull request descriptions, pull request reviews, release notes generation, writing commit messages, and much more.

Explore some AI automations we've created using Bito CLI, which you can implement in your projects right now. These automations showcase the powerful capabilities of Bito CLI.

1- Non-Interactive Mode with File Input

Run the below command for non-interactive mode in Bito (where writedocprompt.txt will contain your prompt text such as Explain the code below in brief and mycode.js will contain the actual code on which the action is to be performed).
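The command itself was not captured above. Based on the -p (prompt file) and -f (input file) options referenced elsewhere in this guide, it likely takes this shape:

```shell
# Run the prompt in writedocprompt.txt against the code in mycode.js
bito -p writedocprompt.txt -f mycode.js
```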

2- Standard Input Mode

Run the below command to read the content at standard input in Bito (where writedocprompt.txt will contain your prompt text such as Explain the code below in brief and input provided will have the actual content on which the action is to be performed).
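A sketch of the likely command, assuming the -p option described in this guide; the content you type or paste at the terminal (ended with Ctrl+D on Mac/Linux) is used as the input:

```shell
# Reads the input content from standard input
bito -p writedocprompt.txt
```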

3- Direct File Input

Run the below command to directly concatenate a file and pipe it to bito and get instant result for your query.

On Mac/Linux

On Windows
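The commands were not captured above; a sketch of the likely shape, piping a file into bito with cat (Mac/Linux) or type (Windows):

```shell
# On Mac/Linux
cat mycode.js | bito

# On Windows (Command Prompt)
type mycode.js | bito
```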

4- Redirect Output to a File

On Mac/Linux

Run the below command to redirect your output directly to a file (where -p can be used along with cat to perform prompt related action on the given content).

On Windows

Run the below command to redirect your output directly to a file (where -p can be used along with type to perform prompt related action on the given content).
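A sketch combining the pipe, the -p option, and shell output redirection; file names are the examples used in this guide:

```shell
# On Mac/Linux: run the prompt on mycode.js and write the result to mydoc.txt
cat mycode.js | bito -p writedocprompt.txt > mydoc.txt

# On Windows (Command Prompt)
type mycode.js | bito -p writedocprompt.txt > mydoc.txt
```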

5- Store Context/Conversation History

Run the below command to store context/conversation history in non-interactive mode in the file runcontext.txt, for use with the next set of commands in case prior context is needed. If runcontext.txt is not present, it will be created. Please provide a new file, or an existing context file created by Bito, using the -c option. With the -c option, context is now supported in non-interactive mode.

On Mac/Linux

On Windows

6- Instant Response for Queries

Run the command below to instantly get a response for your queries using Bito CLI.

echo "give me code for bubble sort in python" | bito

Using Comments

Anything after a # symbol in your prompt file is treated as a comment by Bito CLI and won't be part of your prompt.

You can use \# as an escape sequence to include # as part of your prompt instead of starting a comment.

A few examples of the above:

  • Give me an example of bubble sort in python # everything written here will be considered a comment.

  • Explain what this part of the code does: \#include<stdio.h>

    • In the example above, \# is used as an escape sequence to include # as part of the prompt.

  • #This line will be considered a comment because it starts with # at the beginning of the line.

Using Macro

Use the {{%input%}} macro in the prompt file to refer to the contents of the file provided via the -f option.

Example: To check whether a file contains JS code, you can create a prompt file checkifjscode.txt with the following prompt:

Context is provided below within contextstart and contextend
contextstart
{{%input%}}
contextend
Check if content provided in context is JS code.

Agent settings

Learn how to customize the AI Code Review Agent.

Bito's AI Code Review Agent supports different configuration methods depending on the deployment environment:

  1. Bito-hosted – The agent runs on Bito's infrastructure and is configured through the Bito web UI.

  2. Self-hosted – The agent runs on user-managed infrastructure and is configured by editing the bito-cra.properties file.

The sections below provide configuration guidance for each setup.

Bito-hosted agent configuration

In the Bito-hosted AI Code Review Agent, you can configure the agent through the Bito web UI.

To customize an existing agent, open the Code Review > Repositories page and click the Settings button next to the Agent instance to be modified.

The agent settings page allows configuration of options such as:

  • Agent name – Define a unique name for easy identification.

  • Review options – Choose the review mode (Essential or Comprehensive), set feedback language, and enable features like auto-review, incremental review, summaries, and change walkthroughs.

  • Custom guidelines – Create and apply custom code review rules tailored to your team’s standards directly from the dashboard.

  • Filters – Exclude specific files, folders, or branches from review to focus on relevant code.

  • Tools – Enable additional checks, such as secret scanning and static analysis.

  • Chat – Configure how the agent responds to follow-up questions in pull request comments and manage automatic replies.

These settings tailor the agent’s behavior to match team workflows and project needs. For detailed guidance, see Create or customize an Agent instance.

Self-hosted agent configuration

In self-hosted deployments, configuration is managed by editing the bito-cra.properties file. This file defines how the agent operates and connects to required services.

Key configuration options include:

  • Mode

    • mode = cli: Processes a single pull request using a manual URL input.

    • mode = server: Runs as a webhook service and listens for incoming events from Git platforms.

  • Authentication

    • bito_cli.bito.access_key: Required for authenticating the agent with the Bito platform.

    • git.provider, git.access_token, etc.: Required for connecting to the appropriate Git provider (e.g., GitHub, GitLab, Bitbucket).

  • General feedback settings

    • code_feedback: Enables or disables general feedback comments in reviews.

  • Analysis tools

    • static_analysis: Enables static code analysis.

    • dependency_check: Enables open-source dependency scanning.

    • dependency_check.snyk_auth_token: Required when using Snyk for vulnerability detection.

  • Review format and scope

    • review_comments: Defines output style (e.g., single post or inline comments).

    • review_scope: Limits the review focus to specific concerns such as security, performance, or style.

  • Filters

    • include_source_branches and include_target_branches: Restrict reviews to pull requests that match specified source and target branch patterns.

    • exclude_files: Skips selected files based on glob patterns.

    • exclude_draft_pr: Skips draft pull requests when enabled (default: True).

Each property is documented in detail on the bito-cra.properties file documentation page.
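To make these options concrete, here is an illustrative sketch of a bito-cra.properties file. The property names are taken from the list above; the values shown are placeholders and assumptions, so consult the property documentation for the exact accepted values before using them.

```properties
# Illustrative sketch only -- property names from this page, values are placeholders.
mode=cli

# Authentication (replace placeholders with your own credentials)
bito_cli.bito.access_key=<YOUR_BITO_ACCESS_KEY>
git.provider=GITHUB
git.access_token=<YOUR_GIT_ACCESS_TOKEN>

# General feedback and analysis tools
code_feedback=True
static_analysis=True
dependency_check=False

# Filters
exclude_draft_pr=True
```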

Indexing

Indexing involves breaking down a source code file into smaller chunks and converting these chunks into embeddings that can be stored in a vector database. Bito indexes your entire codebase locally (on your machine) to understand it and provide answers tailored to your code.

Learn more about Bito's AI that Understands Your Code feature.

How Bito Indexes Your Code

In the steps below, we'll show you how Bito indexes your code, ensuring that each query you have is met with precise and contextually relevant information. From breaking down code into digestible chunks to leveraging advanced AI models for nuanced understanding, Bito transforms the daunting task of code analysis into a seamless and efficient experience.

Here's how the magic happens:

Step 1: Chunk Breakdown

Dividing Code into Pieces

Bito starts by breaking down your source code files into smaller sections, known as 'chunks'. It’s like cutting up a long text into paragraphs to make it more manageable. Each chunk represents a piece of your code that can be individually indexed and analyzed.
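As a rough illustration of this step, the sketch below splits a file into fixed-size line chunks and records where each chunk starts and ends. This is a simplification invented for clarity; Bito's actual chunking logic is not documented here and is likely smarter (for example, syntax-aware).

```python
def chunk_file(path, max_lines=40):
    """Split a source file into fixed-size line chunks.

    Toy illustration only: real chunkers typically respect
    function and class boundaries rather than raw line counts.
    """
    with open(path) as f:
        lines = f.readlines()
    chunks = []
    for start in range(0, len(lines), max_lines):
        end = min(start + max_lines, len(lines))
        # Record the file, the chunk's position, and its text.
        chunks.append({"file": path, "start": start + 1, "end": end,
                       "text": "".join(lines[start:end])})
    return chunks
```

Each resulting dictionary corresponds to one indexable "chunk" as described above.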

Step 2: Indexing Each Chunk

Creating a Searchable Reference

After breaking down the file, each chunk is indexed, similar to creating a catalog entry. This step is crucial as it allows for the efficient location of the code segment later on.

Step 3: Generating Embeddings

Translating Code into Numeric Vectors

For every chunk, Bito generates a numeric vector, or "embedding". This process, which can be done using OpenAI or alternative open-source embedding models, translates the code into a mathematical representation. The idea is to create a form that can be easily compared and matched with other code chunks.

Step 4: Storing the Vectors

Saving the Essential Data

These embeddings are then stored in an index file on your machine. This index file is like a detailed directory, listing the file name, the location of the chunk within the file (start and end), and the embedding vector for each piece of code.

Step 5: Query Embedding

Understanding Your Questions

When you ask a question in Bito's chatbox, the AI checks whether it contains specific keywords like "my code", "my project", etc. If so, Bito generates a numeric vector for your query, mirroring the process used for code chunks.

The complete list of these keywords is given on our Available Keywords page.

Step 6: Finding the Nearest Neighbor

Matching Your Query with Code

Using the query's vector, Bito searches the index to find the code chunk with the closest matching embedding. This step identifies the relevant sections of your codebase that can answer your question.
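A minimal sketch of this nearest-neighbor lookup, assuming an in-memory index of the form described in Step 4 (file name, chunk location, embedding vector). Cosine similarity is a common choice for this kind of search, though not necessarily the exact metric Bito uses.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_chunk(query_vec, index):
    """Return the index entry whose embedding best matches the query.

    `index` is a list of dicts with 'file', 'start', 'end', 'vector' keys,
    mirroring the index file layout described in Step 4.
    """
    return max(index, key=lambda entry: cosine_similarity(query_vec, entry["vector"]))
```

In practice, large indexes use approximate nearest-neighbor structures instead of this linear scan.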

Step 7: Contextualization

Building a Bigger Picture

Identifying chunks is just part of the process. Bito ensures that these chunks make sense in the broader context of your code. If necessary, it expands the search to include complete functions or related code segments, creating a fuller, more accurate context.

Step 8: Leveraging Language Models

Consulting the AI Experts

With the context in hand, Bito consults language models, either basic (GPT-4o mini and similar models) or advanced (GPT-4o, Claude 3.5 Sonnet, and other best-in-class AI models), to interpret the code within the context and provide an accurate response to your query.

Step 9: Session Privacy

Keeping Your Data Local

All the indexing and querying happens on your local machine. The index files are stored in the user’s home folder; on Windows, for example, the path will be something like the C:\Users\Furqan\.bito\localcodesearch folder. This ensures that your code and session history remain private and secure.

Step 10: Safeguarding Data

Ensuring Confidentiality

Bito is committed to privacy. All LLM accounts it uses are under strict agreements to prevent your data from being used for training, recorded, or logged.

Step 11: Handling Hallucination

Reducing AI Fabrication

Bito is designed to minimize AI 'hallucinations' or fabrications, ensuring the answers you receive are based on your actual code. Although completely eliminating hallucination isn't feasible (it sometimes helps a model generalize beyond the data it has seen), Bito strives to keep it in check, especially when dealing with your local code.

With these steps, Bito provides a robust and privacy-conscious method for indexing and understanding your code, simplifying navigation and enhancing productivity in your development projects.

Managing user access levels

Understanding User Roles in Bito Workspaces

A Bito Workspace represents your organization. It is the highest level of organization in Bito.

In a Bito Workspace, different user types play distinct roles in managing and collaborating within the workspace. Here is an overview of the three user types: Owner, Admin, and User. Understanding these roles will help you effectively manage your workspace and optimize team collaboration.

Owner: The Owner holds the highest level of authority within the workspace.

Admin: Admins play a significant role in managing the workspace alongside the Owner.

User: Users have access to the workspace with limited administrative privileges.

Here's a table summarizing the roles of the different user types in a Bito Workspace:

| Action | Owner | Admin | Member |
| --- | --- | --- | --- |
| Make or remove other Owner | Yes | No | No |
| Promote another user to Admin or remove Admin | Yes | Yes | No |
| Manage subscriptions and billing | Yes | Yes | No |
| Manage overage limits | Yes | Yes | No |
| Add member by e-mail | Yes | Yes | No |
| Access and share the join-workspace link | Yes | Yes | Yes |
| Deactivate member | Yes | Yes | No |
| Edit workspace settings (name, discovery) | Yes | Yes | No |
| Approve member (when joining from the "Invite Workspace" web link) | Yes | Yes | No |
| Force reauthentication | Yes | Yes | No |

Install or uninstall

Learn how to setup Bito CLI on your device (Mac, Linux, and Windows)

Installing Bito CLI (Recommended)

We recommend you use the following methods to install Bito CLI.

Mac and Linux

sudo curl https://alpha.bito.ai/downloads/cli/install.sh -fsSL | bash

Note: curl will always download the latest version.

Arch Linux

Arch and Arch-based distro users can install it from the AUR:

yay -S bito-cli

or

paru -S bito-cli

Note for Mac users: You might face verification-related issues, for which you will have to manually follow the steps given here (we are working on fixing this as soon as possible).

Windows

  • In the Bito CLI GitHub repo, open the folder that has the latest version number.

  • From here, download the MSI file called Bito CLI.exe and then install Bito CLI using this installer.

  • On Windows 11, you might get a notification related to publisher verification. Click on "Show more" or "More info" and then click "Run anyway" (we are working on fixing this as soon as possible).

Once the installation is complete, start a new command prompt and run the bito command to get started.

Installing with Manual Binary Download (Not Recommended)

While it's not recommended, you can download the Bito CLI binary from our repository and install it manually. The binary is available for Windows, Linux, and macOS (x86 and ARM architectures).

Mac and Linux

  1. In the Bito CLI GitHub repo, open the folder that has the latest version number.

  2. From here, download the Bito CLI binary specific to your OS platform.

  3. Open the terminal, go to the location where you downloaded the binary, and rename the downloaded file to bito (in the command below, use the bito-* filename you downloaded):

    mv bito-<os>-<arch> bito

  4. Make the file executable using the following command: chmod +x ./bito

  5. Copy the binary to /usr/local/bin using the following command: sudo cp ./bito /usr/local/bin

  6. Set the PATH variable so that Bito CLI is always accessible: PATH=$PATH:/usr/local/bin

  7. Run Bito CLI with the bito command. If the PATH variable is not set, you will need to run the command with the complete or relative path to the Bito executable binary.

Windows

  1. In the Bito CLI GitHub repo, open the folder that has the latest version number.

  2. From here, download the Bito CLI binary for Windows called bito.exe.

  3. To use Bito CLI, always change to the directory containing Bito CLI before running it.

  4. Set PATH variable so that Bito CLI is always accessible.

    1. Follow the instructions given at this link.

    2. Edit the "Path" variable and add a new path of the location where Bito CLI is installed on your machine.

Uninstalling Bito CLI

Mac and Linux

sudo curl https://alpha.bito.ai/downloads/cli/uninstall.sh -fsSL | bash

Note: This will completely uninstall Bito CLI and all of its components.

Windows

For Windows, you can uninstall Bito CLI just like you uninstall any other software from the control panel. You can follow these steps:

  1. Click on the Windows Start button and type "control panel" in the search box, and then open the Control Panel app.

  2. Under the "Programs" option, click on "Uninstall a program".

  3. Find "Bito CLI" in the list of installed programs and click on it.

  4. Click on the "Uninstall" button (given at the top) to start the uninstallation process.

  5. Follow the instructions provided by the uninstall wizard to complete the uninstallation process.

After completing these steps, Bito CLI should be completely removed from your Windows machine.

Enable/disable settings

Learn how to Enable or Disable AI Code Completions

AI Code Completions are disabled by default. To enable them, follow the steps below.

VS Code

  1. Click the gear icon at the bottom left of the VS Code window. Then select “Settings” to open the main settings page.

  2. From the settings sidebar, click on "Text Editor" and then select "Suggestions". Now, on the right side, tick the checkbox in front of the "Inline Suggest: Enabled" option. Please note that if this option is disabled, AI Code Completions will not work.

  3. In the search bar, type "bito" and then, from the sidebar, click on "Bito" under "Extensions" to access the Bito extension settings.

  4. Here you will see three options that can be configured. These are:

    1. Enable Auto Completion: Tick this checkbox to enable inline code suggestions in the editor. Uncheck it to disable this feature.

    2. Enable Comment to Code: Tick this checkbox to enable generating code from comments in the editor. Uncheck it to disable this feature.

    3. Set Auto Completion Trigger Logic: Decide how fast Bito makes suggestions by setting your preferred pause time, in milliseconds. Lower values make suggestions appear more often. The minimum and default value is 250 milliseconds.

We recommend setting the time between 250 and 400 milliseconds.

JetBrains IDEs (e.g. IntelliJ IDEA, WebStorm, PyCharm, etc.)

  1. Click the gear icon at the top right of the JetBrains IDE window, then select "Settings" to open the main settings window.

  2. Now, in the sidebar click on "Tools" and then click "Bito" to access the Bito extension settings.

  3. Here you will see three options that can be configured. These are:

    1. Enable Auto Completion: Tick this checkbox to enable inline code suggestions in the editor. Uncheck it to disable this feature.

    2. Enable Comment to Code: Tick this checkbox to enable generating code from comments in the editor. Uncheck it to disable this feature.

    3. Set Auto Completion Trigger Logic: Decide how fast Bito makes suggestions by setting your preferred pause time, in milliseconds. Lower values make suggestions appear more often. The minimum and default value is 250 milliseconds.

We recommend setting the time between 250 and 400 milliseconds.

After completing the steps above, you must click the "Apply" and then "OK" buttons to save your changes. Otherwise, your modifications will be lost.



LLM tokens

At the heart of every LLM, from GPT-3.5 Turbo to the latest GPT-4o, are tokens. These are not your arcade game coins but the fundamental units of language that these models understand and process. Imagine tokens as the DNA of digital language—their sequence dictates how an LLM interprets and responds to text.

A token is created when we break down a massive text corpus into digestible bits. Think of it like slicing a cake into pieces; each slice, or token, can vary from a single word to a punctuation mark or even a part of a word. The process of creating tokens, known as tokenization, simplifies complex input text, making it manageable for LLMs to analyze.

Here’s a quick reference to understand token equivalents:

  • 1 token ≈ 4 characters in English

  • 1 token ≈ ¾ of a word

  • 100 tokens ≈ 75 words or about 1–2 sentences
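Using the rule of thumb above, you can ballpark a token count without running a real tokenizer. The helper below is only an estimate invented for illustration; exact counts depend on the specific model's tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb."""
    # Ballpark only: real counts depend on the model's tokenizer.
    return max(1, round(len(text) / 4))
```

For example, a 400-character English paragraph comes out to roughly 100 tokens under this heuristic.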

Tokenization Methods

Imagine you have a sentence: "The quick brown fox jumps over the lazy dog." An LLM would use tokenization to chop this sentence into manageable pieces. Depending on the chosen method (we’ve discussed it in the next section below), this could result in a variety of tokens, such as:

  • Word-level: ["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]

  • Subword-level: ["The", "quick", "brown", "fox", "jumps", "over", "the", "la", "zy", "dog"]

  • Character-level: ["T", "h", "e", " ", "q", "u", "i", "c", "k", " ", ...]
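The word-level and character-level variants above can be sketched in a few lines (subword methods like BPE require learned merge rules, so they are omitted here). Note that real tokenizers handle punctuation and whitespace more carefully than this toy version.

```python
def word_tokens(text: str) -> list[str]:
    # Word-level: split on whitespace (punctuation stays attached to words).
    return text.split()

def char_tokens(text: str) -> list[str]:
    # Character-level: every character, including spaces, becomes a token.
    return list(text)

sentence = "The quick brown fox jumps over the lazy dog."
```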

Each method has its own advantages and disadvantages.

Word-level tokenization is straightforward and aligns with the way humans naturally read and write text. It is effective for languages with clear word boundaries and for tasks where the meaning is heavily dependent on the use of specific words. However, this method can lead to very large vocabularies, especially in languages with rich morphology or in cases where the text contains a lot of different proper nouns or technical terms. This large vocabulary can become a problem when trying to manage memory and computational efficiency.

Subword-level tokenization, often implemented through methods like Byte Pair Encoding (BPE) or SentencePiece, addresses some of the issues of word-level tokenization. By breaking down words into more frequently occurring subunits, this method allows the model to handle rare or out-of-vocabulary (OOV) words more gracefully. It balances the vocabulary size and the ability to represent the full range of text seen during training. It can also be more effective for agglutinative languages (like Turkish or Finnish), where you can combine many suffixes with a base word, leading to an explosion of possible word forms if using word-level tokenization.

Character-level tokenization has the advantage of the smallest possible vocabulary. Since it deals with characters, it is very robust to misspellings and OOV words. However, because it operates at such a fine-grained level, it may require more complex models to understand higher-level abstractions in the text. Models may need to be larger or more complex to learn the same concepts that could be learned with fewer parameters at higher levels of tokenization.

Beyond these, there are other tokenization methods such as:

  • Byte-level: Similar to character-level, but treats the text as a sequence of bytes, which can be useful for handling multilingual text uniformly.

  • Morpheme-level: Breaks words down into morphemes, which are the smallest meaningful units of language. This can be useful for capturing linguistic nuances but requires sophisticated algorithms to implement effectively.

  • Hybrid approaches: Some models use a combination of the above methods, often starting with a larger unit and then falling back to smaller units when the first approach does not work.

The choice of tokenization can affect not just the performance of an LLM but also its understanding of the text. For example, using a subword tokenizer that never breaks down "dog" into smaller pieces ensures that the model always considers "dog" as a semantic unit. In contrast, if "dog" could be broken down into "d" and "og", the model might lose the understanding that "dog" represents an animal.

Tokens and Model Costs

The complexity and number of tokens directly impact the computational horsepower needed to run AI models. More tokens generally mean more memory and processing power, which translates to higher costs.

When you use services like OpenAI's GPT models, you're charged based on the number of tokens processed. With different rates for different models (like Davinci or Ada), budgeting for AI usage can get tricky. This makes the choice of tokenization method not just a technical decision but also a financial one.

Overcoming the Token Limit Challenge

A crucial point about LLMs is that they can only handle a limited number of tokens at once—this is their token limit. The more tokens they can process, the more complex the tasks they can handle.

Imagine asking an AI to write a novel in one go. If the token limit is low, it might only manage a chapter. If it's high, you could get a full book, but it might take ages to write. It's all about finding the balance between performance and practicality.

Here’s the token limits chart of popular LLMs.

| Model Name | Context Window | Max Output Tokens |
| --- | --- | --- |
| GPT-3.5 Turbo | 16,385 tokens | 4,096 tokens |
| GPT-3.5 Turbo Instruct | 4,096 tokens | 4,096 tokens |
| GPT-4 | 8,192 tokens | 8,192 tokens |
| GPT-4o | 128,000 tokens | 4,096 tokens |
| GPT-4o mini | 128,000 tokens | 16,384 tokens |
| Claude 3.5 Sonnet | 200,000 tokens | 8,192 tokens |

But what happens when you have more to say than the token limit allows?

5 Strategies to Beat Token Limits

  1. Truncation: The most straightforward approach is to cut the text down until it fits the token budget. However, this is like trimming a picture; you lose some of the scenes.

  2. Chunk Processing: Break your text into smaller pieces, process each chunk separately, and stitch the results together. It's like watching a series of short clips instead of a full movie.

  3. Summarization: Distill your text to its essence. For example, "It's sunny today. What will the weather be like tomorrow?" can be shortened to "Tell me tomorrow's weather."

  4. Remove Redundant Terms: Cut out the fluff—words that don't add significant meaning (like "the" or "and"). This streamlines the text but beware, over-pruning can alter the message.

  5. Fine-Tuning Language Models: Custom-train your model on specific data to get better results with fewer tokens. It’s like prepping a chef to make a dish they can cook blindfolded.
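Strategy 2 (chunk processing) can be sketched in a few lines. The example below greedily packs words into chunks that stay under a token budget, using the rough 4-characters-per-token heuristic from earlier in this article; production code would count tokens with the target model's actual tokenizer instead.

```python
def chunk_text(text, max_tokens=100, chars_per_token=4):
    """Greedily split text into chunks that fit a token budget.

    Uses the ~4-characters-per-token heuristic as a stand-in for a
    real tokenizer; chunk boundaries fall between words.
    """
    budget = max_tokens * chars_per_token
    chunks, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > budget and current:
            # Current chunk is full; start a new one with this word.
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be processed separately and the results stitched together, as described above.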

Conclusion

Tokens are much more than jargon—they're central to how language models process and understand our queries and commands.

Understanding tokens and their role in AI language processing is fundamental for anyone looking to leverage the power of LLMs in their work or business. By grasping the basics of tokenization and its impact on computational requirements and costs, users can make informed decisions to balance performance with budget.

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is a paradigm-shifting methodology within natural language processing that bridges the divide between information retrieval and language synthesis. By enabling AI systems to draw from an external corpus of data in real-time, RAG models promise a leap towards a more informed and contextually aware generation of text.

RAG fuses in-depth data retrieval with creative language synthesis in AI. It's like having an incredibly knowledgeable friend who can not only recall factual information but also weave it into a story seamlessly, in real-time.

The Mechanics of RAG

To understand RAG, let's break it down:

  • Retrieval: Before generating any new text, the RAG model retrieves information from a large dataset or database. This could be anything from a simple database of facts to an extensive library of books and articles.

  • Augmented: The retrieved information is then fed into a generative model to "augment" its knowledge. This means the generative model doesn't have to rely solely on what it has been trained on; it can access external data for a more informative output.

  • Generation: Finally, the model generates text using both its pre-trained knowledge and the newly retrieved information, leading to more accurate, detailed, and relevant responses.

The Components of a RAG Model

A RAG model typically involves two major components:

  1. Document Retriever: This is a neural network or an algorithm designed to sift through the database and retrieve the most relevant documents based on the query it receives.

  2. Sequence-to-Sequence Model: After retrieval, a Seq2Seq model, often a transformer-based model like BERT or GPT, takes the retrieved documents and the initial query to generate a coherent and relevant piece of text.

How to Build a RAG

Let's imagine we want to build a RAG model that, when given a prompt about a historical figure or event, can generate a detailed and accurate paragraph.

Step 1: Choose Your Data Source

First, you need a database from which the model can retrieve information. For historical facts, this could be a curated dataset like Wikipedia articles, historical texts, or a database of historical records.

Step 2: Index Your Data Source

Before you can retrieve information, you need to index your data source to make it searchable. You can use software like Elasticsearch for efficient indexing and searching of text documents.

Step 3: Set Up the Retriever

You then need a retrieval model that can take a query and find the most relevant documents in your database. This could be a simple TF-IDF (Term Frequency-Inverse Document Frequency) retriever or a more sophisticated neural network-based approach like a Dense Retriever that maps text to embeddings.
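To illustrate the simple end of that spectrum, here is a tiny TF-IDF retriever in pure Python. It is a toy sketch, not a production design: real systems would use an inverted index or dense embeddings, and this scoring ignores query-side term weighting.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight) for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: how many documents contain each term.
    df = Counter(term for doc in tokenized for term in set(doc))
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def retrieve(query, docs, vectors):
    """Return the document whose TF-IDF weights best cover the query terms."""
    terms = query.lower().split()
    score = lambda vec: sum(vec.get(t, 0.0) for t in terms)
    best = max(range(len(docs)), key=lambda i: score(vectors[i]))
    return docs[best]
```

A Dense Retriever would replace these sparse vectors with learned embeddings, but the retrieve-then-rank shape stays the same.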

Step 4: Integrate with a Generative AI Model

The retrieved documents are then fed into a generative AI model, like GPT-4o or BERT. This model is responsible for synthesizing the information from the documents with the original query to generate coherent text.

Step 5: Training Your RAG Model

If you're training a RAG model from scratch, you need to fine-tune your generative AI model on a task-specific dataset. To do so:

  • Provide pairs of queries and the correct responses.

  • Allow the model to retrieve documents during training and learn which documents help it generate the best responses.

Step 6: Iterative Refinement

After initial training, you can refine your model through further iterations, improving the retriever or the generator based on the quality of outputs and user feedback.

Building such a RAG system would be a significant engineering effort, requiring expertise in machine learning, NLP, and software engineering.

Why RAG is a Game-Changer

RAG significantly enhances the relevance and factual accuracy of text generated by AI systems. This is due to its ability to access current databases, allowing the AI to provide information that is not only accurate but also reflects the latest updates.

Moreover, RAG reduces the amount of training data needed for language models. By leveraging external databases for knowledge, these models do not need to be fed as much initial data to become functional.

RAG also offers the capability to tailor responses more specifically, as the source of the retrieved data can be customized to suit the particular information requirement. This functionality signifies a leap forward in making AI interactions more precise and valuable for users seeking information.

Practical Applications of RAG

The applications of RAG are vast and varied. Here are a few examples:

  • Customer Support: RAG can pull up customer data or FAQs to provide personalized and accurate support.

  • Content Creation: Journalists and writers can use RAG to automatically gather information on a topic and generate a draft article.

  • Educational Tools: RAG can be used to create tutoring systems that provide students with detailed explanations and up-to-date knowledge.

Challenges and Considerations

Despite its advantages, RAG also comes with its set of challenges:

  • Quality of Data: The retrieved information is only as good as the database it comes from. Inaccurate or biased data sources can lead to flawed outputs.

  • Latency: Retrieval from large databases can be time-consuming, leading to slower response times.

  • Complexity: Combining retrieval and generation systems requires sophisticated machinery and expertise, making it complex to implement.

Conclusion

Retrieval Augmented Generation is a significant step forward in the NLP field. By allowing machines to access a vast array of information and create something meaningful from it, RAG opens up a world of possibilities for AI applications.

Whether you're a developer looking to build smarter AI systems, a business aiming to improve customer experience, or just an AI enthusiast, understanding RAG is crucial for advancing in the dynamic field of artificial intelligence.

Chat Session History

Explain Code

Explains what the code does and how it works.

Generate Comment

Generates a comment for the selected code.

Performance Check

Checks the code for performance issues and rewrites it with suggested optimizations.

Security Check

Checks the code for basic security issues and rewrites it with suggested fixes.

Style Check

Checks the code for common style issues and rewrites it with suggested fixes.

Improve Readability

Refactors the code for better readability.

Clean Code

Removes debug statements.

Generate Unit Tests

Generates unit tests for the selected code.


Guide for GitHub

Integrate the AI Code Review Agent into your GitHub workflow.

Speed up code reviews by configuring the AI Code Review Agent with your GitHub repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Get a 14-day FREE trial of Bito's AI Code Review Agent.

Video tutorial

Coming soon...

Installation and configuration steps

Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

Step 1: Log in to Bito

Log in to Bito Cloud and select a workspace to get started.

Step 2: Open the Code Review Agents setup

Click Repositories under the CODE REVIEW section in the sidebar.

Step 3: Select your Git provider

Bito supports integration with the following Git providers:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

Since we are setting up the Agent for GitHub, select GitHub to proceed.

This will redirect you to GitHub.

Step 4: Install the Bito app for GitHub

To enable pull request reviews, you need to install and authorize Bito's AI Code Review Agent app.

On GitHub, select where you want to install the app.

Grant Bito access to your repositories:

  • Choose All repositories to enable Bito for every repository in your account.

  • Or, select Only select repositories and pick specific repositories using the dropdown menu.

The Bito app uses these permissions:

  • Read access to code and metadata

  • Read and write access to issues and pull requests

  • Read access to organization members

Click Install & Authorize to proceed. Once completed, you will be redirected to Bito.

Step 5: Enable AI Code Review Agent on repositories

After connecting Bito to your GitHub account, you need to enable the AI Code Review Agent for your repositories.

Click the Go to repository list button to view all repositories Bito can access in your GitHub account.

Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

Learn more

Step 6: Automated and manual pull request reviews

Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

  1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

  2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

Note: To enhance efficiency, the automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews. Users can still trigger reviews manually via the /review command.

The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

Step 7: Specialized commands for code reviews

Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

  • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

  • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

  • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

For more details, refer to Available Commands.

Step 8: Chat with AI Code Review Agent

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

Screenshots

Screenshot # 1

AI-generated pull request (PR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a pull request.

Changelist in AI Code Review Agent's feedback.

Screenshot # 3

AI code review feedback posted as comments on the pull request.

Custom prompt templates

🤯 Sick of typing out long prompts every time? 😩 Bito's got your back! Now, create custom prompt templates for all your frequently used prompts and save yourself some stress.

With "Create Prompt Template," you can create and save custom prompt templates for use in your IDE. By defining a custom template with a template name and prompt, Bito can execute the prompt as is on the selected code. With this feature, you can save time and streamline your workflow by quickly executing frequently used prompts without inputting them manually each time.

The custom prompt templates feature and standard prompt templates are located below the chatbox.

Bito Custom Prompt Template

Here is a quick overview of the Custom Prompt Templates in Bito

Creating a Custom Prompt Template

  1. Open Bito Plugin in your IDE

  2. Below the chatbox, click on "New Template".

Create Prompt Template Feature
  3. Enter the "Template Name" and "Prompt" for your custom template. You can use {{%code%}} as a macro to insert the selected code in your prompt. If this macro is not used, Bito will insert the selected code at the end of your prompt. Next, select the "Output Format". You currently have two options:

    1. Display in Bito panel (Default)

    2. Output in diff view

  4. Click on "Create Template" to save your new custom template. All custom templates will appear below the chatbox alongside standard templates. You can create up to four custom templates.

My custom templates

Tips: Creating a good AI prompt is an iterative process. We recommend you iterate various options and check the output to ensure your satisfaction. Here are some helpful hints:

  1. Add instruction in the template to enclose any code output in triple backticks (```). This will ensure that the code is formatted in a code block in the output.

  2. Write clear and specific instructions. If you want output in a specific format, ask to structure the output in that format.

  3. Specify the steps to complete the task. A clearly defined set of steps allows the model to think and work out a solution before generating the output.

  4. Custom templates require selecting code in the IDE. Bito replaces all instances of {{%code%}} in the template with the selected code. If no {{%code%}} macro is found, the selected code is appended to the end of the prompt.
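The macro substitution described in the tips above can be sketched in a few lines of Python. This is an illustrative assumption of the behavior (the helper name build_prompt is made up), not Bito's actual implementation:

```python
# Hypothetical sketch of the {{%code%}} macro behavior described above.
# The helper name and details are assumptions, not Bito's real implementation.

MACRO = "{{%code%}}"

def build_prompt(template: str, selected_code: str) -> str:
    """Replace every {{%code%}} in the template with the selected code.

    If the macro is absent, the selected code is appended to the end
    of the prompt instead.
    """
    if MACRO in template:
        return template.replace(MACRO, selected_code)
    return template + "\n" + selected_code

print(build_prompt("Analyze this code:\n{{%code%}}\nList any bugs.", "def f(): pass"))
```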

Example 1: OWASP Security Issue

Your task is to analyze the code given below: {{%code%}}. The code needs to be analyzed for the top 10 security vulnerabilities defined by OWASP. If no vulnerability is found, the output should be "No Issue Found". If any issue is found, identify all instances of the code. The output should include a list of all issues and the fixed code in the following format. The code should be enclosed in triple backticks.

Issues: List of Issues

Code: Fixed code enclosed in triple backticks

Editing or Deleting a Custom Template

You can edit or remove your custom templates anytime by clicking the three dots on the template you want to change. Note that you cannot edit or remove the standard templates provided by Bito.

Edit/Remove my custom templates

Using a Custom Template

  1. Select the code on which you want to execute the prompt.

  2. Run the Custom Template by clicking it in the Bito Templates panel or from the IDE context menu.

  3. Bito starts generating the output.

Here’s an example of how to execute prompts using your custom templates.

Let’s say you want to create a custom template to add a comment describing the logic behind the code. Here's how you can do it:

  1. Below the chatbox, click on "New template".

  2. Enter a name for your custom template, e.g. "Add Comment".

  3. In the "Prompt" field, enter the following: "Please add a comment describing the logic behind the code." Then click on "Create Template" to save your new custom template.

  4. Now, select the code you want to comment on and click on the "Add Comment" template.

  5. Bito adds the selected code at the end of the prompt and executes it via Bito AI.

Embeddings

Bito leverages the power of embeddings to understand your entire codebase. But what exactly are these embeddings, and how do they help Bito understand your code?

If you are curious to know, this guide is for you!

What is Embedding?

Embeddings, at their essence, are like magic translators. They convert data—whether words, images, or, in Bito's case, code—into vectors in a dense numerical space. These vectors encapsulate meaning or semantics. Basically, these vectors help computers understand and work with data more efficiently.

Imagine an embedding as a vector (list) of floating-point numbers. If two vectors are close, they're similar. If they're far apart, they're different. Simple as that!

A vector embedding looks something like this: [0.02362240, -0.01716885, 0.00493248, ..., 0.01665339]

Why Embeddings?

In this section, we'll explore the most common and impactful ways embeddings are used in everyday tech and applications.

Word Similarity & Semantics: Word embeddings, like Word2Vec, map words to vectors such that semantically similar words are closer in the vector space. This allows algorithms to discern synonyms, antonyms, and more based on their vector representations.

Sentiment Analysis: By converting text into embeddings, machine learning models can be trained to detect and classify the sentiment of a text, such as determining if a product review is positive or negative.

Recommendation Systems: Embeddings can represent items (like movies, books, or products) and users. By comparing these embeddings, recommendation systems can suggest items similar to a user's preferences. For example, by converting audio or video data into embeddings, systems can recommend content based on similarity in the embedded space, leading to personalized user recommendations.

Document Clustering & Categorization: Text documents can be turned into embeddings using models like Doc2Vec. These embeddings can then be used to cluster or categorize documents based on their content.

Translation & Language Models: Models like BERT and GPT use embeddings to understand the context within sentences. This contextual understanding aids in tasks like translation and text generation.

Image Recognition: Images can be converted into embeddings using convolutional neural networks (CNNs). These embeddings can then be used to recognize and classify objects within the images.

Anomaly Detection: By converting data points into embeddings, algorithms can identify outliers or anomalies by measuring the distance between data points in the embedded space.

Chatbots & Virtual Assistants: Conversational models turn user inputs into embeddings to understand intent and context, enabling more natural and relevant responses.

Search Engines: Text queries can be converted into embeddings, which are then used to find relevant documents or information in a database by comparing embeddings.

Let’s look at an example

Suppose you have two functions in your codebase:

Function # 1:

def add(x, y):
    return x + y

Function # 2:

def subtract(x, y):
    return x - y

Using embeddings, Bito might convert these functions into two vectors. Because these functions perform different operations, their embeddings would be at a certain distance apart. Now, if you had another function that also performed addition but with a slight variation, its embedding would be closer to the add function than the subtract function.

Let's oversimplify and imagine these embeddings visually:

Embedding for Function # 1 (add):

[0.9, 0.2, 0.1]

Embedding for Function # 2 (subtract):

[0.2, 0.9, 0.1]

Notice the numbers? The first positions in these lists are quite different: 0.9 for addition and 0.2 for subtraction. This difference signifies the varied operations these functions perform.

Now, let's add a twist. Suppose you wrote another addition function, but with an extra print statement:

Function # 3:

def add_and_print(x, y):
    result = x + y
    print(result)
    return result

Bito might give an embedding like:

[0.85, 0.3, 0.15]

If you compare, this new list is more similar to the add function's list than the subtract one, especially in the first position. But it's not exactly the same as the pure add function because of the added print operation.

This distance or difference between lists is what Bito uses to determine how similar functions or chunks of code are to one another. So, when you ask Bito about a piece of code, it quickly checks these number lists, finds the closest match, and guides you accordingly!
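To make the "distance between lists" idea concrete, here is a minimal sketch comparing the toy embeddings above with cosine similarity, a common similarity measure (whether Bito uses cosine or a different metric internally is not specified):

```python
import math

# The toy embeddings from the example above.
add_vec       = [0.9, 0.2, 0.1]    # add(x, y)
subtract_vec  = [0.2, 0.9, 0.1]    # subtract(x, y)
add_print_vec = [0.85, 0.3, 0.15]  # add_and_print(x, y)

def cosine_similarity(a, b):
    """Cosine similarity: values close to 1.0 mean very similar vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# add_and_print is far more similar to add than to subtract.
print(cosine_similarity(add_print_vec, add_vec))       # ~0.99
print(cosine_similarity(add_print_vec, subtract_vec))  # ~0.54
```

The two addition functions score close to 1.0, while the subtraction function scores much lower, which is exactly the "closest match" behavior described above.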

How Bito Uses Embeddings

When you ask Bito a question or seek assistance with a certain piece of code, Bito doesn't read the code the way we do. Instead, it refers to these vector representations (embeddings). By doing so, it can quickly find related pieces of code in your repository or understand the essence of your query.

For example, if you ask Bito, "Where did I implement addition logic?", Bito will convert your question into an embedding and then look for the most related (or closest) embeddings in its index. Since it already knows the add function's embedding represents addition, it can swiftly point you to that function.

Models for Generating Embeddings

When we talk about turning data into these nifty lists of numbers (embeddings), several models and techniques come into play. These models have been designed to extract meaningful patterns from vast amounts of data and represent them as compact vectors. Here are some of the standout models:

Word2Vec: One of the pioneers in the world of embeddings, this model, developed by researchers at Google, primarily focuses on words. Given a large amount of text, Word2Vec can produce a vector for each word, capturing its context and meaning.

Doc2Vec: An extension of Word2Vec, this model is designed to represent entire documents or paragraphs as vectors, making it suitable for larger chunks of text.

GloVe (Global Vectors for Word Representation): Developed by Stanford, GloVe is another method to generate word embeddings. It stands out because it combines both global statistical information and local semantic details from a text.

BERT (Bidirectional Encoder Representations from Transformers): A more recent and advanced model from Google, BERT captures context from both left and right (hence, bidirectional) of a word in all layers. This deep understanding allows for more accurate embeddings, especially in complex linguistic scenarios.

FastText: Developed by Facebook’s AI Research lab, FastText enhances Word2Vec by considering sub-words. This means it can generate embeddings even for misspelled words or words not seen during training by breaking them into smaller chunks.

ELMo (Embeddings from Language Models): This model dynamically generates embeddings based on the context in which words appear, allowing for richer representations.

Universal Sentence Encoder: This model, developed by Google, is designed to embed entire sentences, making it especially useful for tasks that deal with larger text chunks or require understanding the nuances of entire sentences.

GPT (Generative Pre-trained Transformer): Developed by OpenAI, GPT is a series of models (from GPT-1 to GPT-4o) that use the Transformer architecture to generate text. While GPT models are famous for generating text, they can also produce vector embeddings. Their latest embeddings model is text-embedding-ada-002 which can generate embeddings for text search, code search, sentence similarity, and text classification tasks.

Bito uses text-embedding-ada-002 from OpenAI, and we're also trying out some open-source embedding models for our AI that Understands Your Code feature.

These models, among many others, power a wide range of applications, from natural language processing tasks like sentiment analysis and machine translation to aiding assistants like Bito in understanding and processing code or any other form of data.

Embeddings: More Than Just Numbers

While embeddings might seem like just another technical term or a mere list of numbers, they are crucial bridges that connect human logic and machine understanding. The ability to convert complex data, be it code, images, or even human language, into such vectors, and then use the 'distance' between these vectors to find relatedness, is nothing short of magic.

In the context of Bito, embeddings aren't just a feature—it's the core that powers its deep understanding of your code, making it an indispensable tool for developers. So, the next time you think of Bito's answers as magical, remember, it's the power of embeddings at work!

Keyboard shortcuts

Effortlessly Use AI Code Completions With Your Keyboard

  • Get/Trigger suggestions manually: macOS: Option + Shift + K; Windows: Alt + Shift + K

  • Accept entire suggestion: Tab

  • Accept single word from suggestion: Coming Soon...

  • Accept single line from suggestion: Coming Soon...

  • Dismiss suggestion: Esc

  • Show next suggestion: macOS: Option + ]; Windows: Alt + ]

  • Show previous suggestion: macOS: Option + [; Windows: Alt + [

Set Your Own Keyboard Shortcuts in VS Code

In VS Code settings, you can customize the keyboard shortcuts for the AI Code Completions feature according to your preferences. To do so, follow the steps below:

  1. Click the gear icon at the bottom left of the VS Code window. Then, select “Keyboard Shortcuts” to view all the keyboard shortcuts used by VS Code and its extensions.

  2. In the search bar, type "bito" to view all the keyboard shortcuts used by the Bito extension.

  3. Find the command for which you want to change the keyboard shortcut. Then, click on the edit icon in front of it.

  4. A popup modal will appear. Enter your new key combination and press the Enter button to save it.

Reset Keyboard Shortcut to Default Value in VS Code

If you change a keyboard shortcut and want to revert to the original, just right-click on the specific command. A menu will pop up. Choose "Reset Keybinding" from this menu.

Set Your Own Keyboard Shortcuts in JetBrains

In JetBrains IDE settings, you can customize the keyboard shortcuts for the AI Code Completions feature according to your preferences. To do so, follow the steps below:

  1. Click the gear icon at the top right of the JetBrains IDE window, then select "Settings" to open the settings window.

  2. In the settings window, click on the "Keymap" button given in the left sidebar. Then, in the search bar, type "bito" to view all the keyboard shortcuts used by the Bito extension.

  3. Find the command for which you want to change the keyboard shortcut and right-click on it. Then select "Add Keyboard Shortcut".

  4. A popup modal will appear. Enter your new key combination and click the "OK" button to save it.

  5. Now you will have more than one keyboard shortcut assigned to the command. To remove the previously set shortcut, right-click on the command again, then click the "Remove [keyboard_shortcut_here]" button.

  6. After completing the steps above, click "Apply" and then "OK" to save your changes. Otherwise, your modifications will be lost.

Reset Keyboard Shortcut to Default Value in JetBrains

If you change a keyboard shortcut and want to revert to the original, just right-click on the specific command. A menu will pop up. Choose "Reset Shortcuts" from this menu.

After resetting the shortcut, click "Apply" and then "OK" to save your changes. Otherwise, your modifications will be lost.


Vector databases

Think of a huge, never-ending stream of information like photos, tweets, and songs pouring in every second. We need special storage boxes to keep all this info organized and find what we need quickly. One of the new, cool storage boxes people are talking about is called a “Vector Database”. So, what's this Vector Database thing, and why is it something you might want to know about? Let's unwrap this mystery and make it super easy to understand.

What is a Vector Database?

A vector database is designed to handle vectorized data - that is, data represented as vectors. A vector, in this context, is a mathematical construct that embeds information into a high-dimensional space, with each dimension representing a different feature of the data.

Traditionally, databases have been adept at handling structured data (like rows and columns in a spreadsheet) or even semi-structured data (like JSON documents). However, with the rise of machine learning and artificial intelligence, there is an increasing need to efficiently store and query data that isn't just numbers or text but is represented in multi-dimensional space.

Vector databases fill this gap by excelling at managing and querying data in the form of vectors. This is particularly useful for tasks that involve similarity search, like finding the most similar images, text, or even audio clips, in a process known as "nearest neighbor search".

Why are Vector Databases Important?

Imagine trying to search for a song that sounds like another song or finding images that are visually similar to a given image. These tasks are non-trivial because they involve understanding the content at a deeper, more abstract level. Vector databases allow us to convert these abstract, complex features into a mathematical space where 'similarity' can be computed and searched efficiently.

For instance, in the world of machine learning, models like neural networks can convert images or text into vectors during their processing stages. These vectors, known as embeddings, capture the essence of the data. When you query a vector database with another vector, it retrieves the most similar items based on the vector's position and distance in that high-dimensional space.
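The retrieval step described above can be sketched as a brute-force nearest neighbor search. The item names and vectors below are invented for illustration; production vector databases implement the same idea with approximate indexes (such as HNSW or IVF) so they never have to scan every vector:

```python
import math

# Brute-force nearest neighbor search over a toy "vector database".
def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query, database):
    """Return the key whose stored vector is closest to the query."""
    return min(database, key=lambda key: euclidean(query, database[key]))

db = {
    "cat photo": [0.9, 0.1, 0.0],
    "dog photo": [0.8, 0.2, 0.1],
    "pop song":  [0.1, 0.9, 0.3],
}

print(nearest([0.88, 0.12, 0.02], db))  # prints: cat photo
```

A real system would embed the query with the same model used for the stored items, then run this lookup over millions of vectors using an index rather than a linear scan.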

Key Features of Vector Databases

Efficient Similarity Search: They use specialized indexing and search algorithms to perform fast and efficient nearest neighbor searches.

Scalability: They are designed to handle large volumes of data and high-dimensional vectors without sacrificing performance.

Machine Learning Integration: They are often integrated with machine learning models and pipelines to enable real-time embedding and querying.

Language Agnosticism: Vector databases can handle any data that can be vectorized, whether it's images, text, audio, or any other form of media.

Real-World Applications

Recommendation Systems: Vector databases can power recommendation engines that suggest products, movies, or songs by finding items that are similar to a user’s past behavior.

Image Retrieval: They are used in image search engines to find images that are visually similar to a query image.

Natural Language Processing: In the field of NLP, vector databases enable searching through large corpora of text for documents or entries that are contextually similar to a given piece of text.

Fraud Detection: They can be used to detect anomalies or patterns in transaction data that signify fraudulent activity by comparing against typical transaction vectors.

Best Free, Paid, and Open-Source Vector Databases

Let's look at some top players:

Pinecone: A cloud-native, managed vector database that doesn't require infrastructure management. Pinecone offers fast data processing and quality relevance features like metadata filters and supports both sparse and dense vectors. Key offerings include duplicate detection, rank tracking, and deduplication.

Milvus: An open-source vector database tailored for AI applications and similarity search, it provides fast search capabilities across trillions of vector datasets and boasts high scalability and reliability. Its use cases span across image and chatbot applications to chemical structure search.

Chroma: Aimed at building LLM applications, Chroma is an open-source, AI-native embedding database offering features like filtering and intelligent grouping. It positions itself as a database that combines document retrieval capabilities with AI to enhance data querying processes.

Weaviate: This is a cloud-native, open-source vector database that stands out with its AI modules and ability to handle text, images, and other data conversions into searchable vectors. It offers quick neighbor search and is designed with scalability and security in mind.

Deep Lake: Designed for deep learning and LLM-based applications, Deep Lake supports a wide array of data types and integrates with various tools to facilitate model training and versioning. It emphasizes ease in deploying enterprise-grade products.

Qdrant: A versatile open-source vector search engine and database that supports payload-based storage and extensive filtering. It is well-suited for semantic matching and faceted search, with a focus on efficiency and configuration simplicity.

Elasticsearch: A highly scalable open-source analytics engine capable of handling diverse data types, Elasticsearch is part of the Elastic Stack, offering fast search, fine-tuned relevance, and sophisticated analytics.

Vespa: Vespa is an open-source data serving engine that enables machine-learned decisions on massive datasets at serving time. It's built for high-performance and high-availability use cases, facilitating a variety of complex query operations.

Vald: Focused on dense vector search, Vald is a distributed, cloud-native search engine that uses the ANN Algorithm NGT for neighbor searches. It features automatic indexing, index backup, and horizontal scaling.

ScaNN: A Google-developed method that improves search accuracy and performance for vector similarity, ScaNN is known for its effective compression techniques and support for different distance functions.

Pgvector: As a PostgreSQL extension, pgvector brings vector similarity search to the robust, feature-rich environment of PostgreSQL, enabling embeddings to be stored and searched alongside other application data.

Faiss: Developed by Facebook AI Research, Faiss is a library for efficient similarity search and clustering of dense vectors. It's versatile, supporting various distances and batch processing, and it can operate on datasets larger than available RAM.

How to Choose the Right Vector Database for Your Project

When you're picking out the perfect vector database, think about these things:

  • Do you need someone else to handle the techy database stuff, or do you have wizards in-house?

  • Got your vectors ready, or do you need the database to make them for you?

  • How fast do you need the data – right now, or can it wait?

  • How much experience does your team have with this kind of tech?

  • Is the database easy to learn, or is it going to be lots of late nights?

  • Can you trust the database to be up and running when you need it?

  • What's the price tag for setting it up and keeping it going?

  • How secure is it, and does it check all the legal boxes?

Challenges and Considerations

While vector databases are powerful, they come with challenges. The management and querying of high-dimensional data can be resource-intensive. The efficiency of a vector database often depends on the underlying infrastructure and the effectiveness of its indexing and compression algorithms.

Furthermore, security and privacy are crucial, especially when handling sensitive data. Vector databases must ensure that they incorporate robust security measures to protect against unauthorized access and data breaches.

The Future of Vector Databases

As data continues to grow in volume and complexity, the importance of vector databases will only increase. Their integration with AI and machine learning makes them a natural match for a future where almost every digital interaction may involve some form of similarity search or content-based retrieval.

Conclusion

Vector Databases are a cutting-edge solution designed to handle the complexity of modern data needs, particularly in the realm of similarity search and AI applications. Understanding and leveraging vector databases can unlock a plethora of opportunities across industries, making them an exciting area of development in the database technology landscape.

As companies and developers keep using AI more and more, the use of vector databases is expected to increase a lot. This signals the start of a new period in how we handle data, where the way we sort and keep information is as complex and varied as the data itself.



Create or customize an Agent instance

Customize the AI Code Review Agent to match your workflow needs.

Connecting your Bito workspace to GitHub, GitLab, or Bitbucket provides immediate access to the AI Code Review Agent. To get you started quickly, Bito offers a Default Agent instance—pre-configured and ready to deliver AI-powered code reviews for pull requests and code changes within supported IDEs such as VS Code and JetBrains.

While the Default Agent is ready for use right away, Bito also gives you the option to create new Agent instances or customize existing ones to suit your specific requirements. This flexibility ensures that the Agent can adapt to a range of workflows and project needs.

For example, you might configure one Agent to disable automatic code reviews for certain repositories, another to exclude specific Git branches from review, and yet another to filter out particular files or folders.

This guide will walk you through how to create or customize an Agent instance, unlocking its full potential to streamline your code reviews.

Creating or customizing AI Code Review Agents

Once Bito is connected to your GitHub/GitLab/Bitbucket account, you can easily create a new Agent or customize an existing one to suit your workflow.

  1. To create a new Agent, navigate to the Code Review > Repositories dashboard and click the New Agent button to open the Agent configuration form.

  1. If you’d like to customize an existing agent, simply go to the same Code Review > Repositories dashboard and click the Settings button next to the Agent instance you wish to modify.

Once you have selected an Agent to customize, you can modify its settings in the following areas:

1. General settings

Agent name

Assign a unique alphanumeric name to your Agent. This name acts as an identifier and allows you to invoke the Agent in supported clients using the @<agent_name> command.

2. Customization options

Bito provides five tabs for in-depth Agent customization.

These include:

  1. Review

  2. Custom Guidelines

  3. Filters

  4. Tools

  5. Chat

Let's have a look at each tab in detail.

a. Review

In this tab, you can configure how and when the Agent performs reviews:

  • Review language: Select the output language for code review feedback. Bito supports over 20 languages, including English, Hindi, Chinese, and Spanish. The AI code review feedback will be posted on the pull requests in the selected language.

  • Review feedback mode: Choose between Essential and Comprehensive review modes and tailor review request settings to fit your team's unique workflow requirements.

    • In Essential mode, only critical issues are posted as inline comments, and other issues appear in the main review summary under "Additional issues".

    • In Comprehensive mode, Bito also includes minor suggestions and potential nitpicks as inline comments.

  • Automatic review: Toggle to enable or disable automatic reviews when a pull request is created and ready for review.

  • Automatic incremental review: Toggle to enable or disable reviews for new commits added to a pull request. Only changes since the last review are assessed.

    • Batch time (hours): Set the wait time (0 to 24 hours) for batching new commits before triggering a review. Lower values result in more frequent incremental reviews.

  • Request changes comments: Enable this option to get Bito feedback as "Request changes" review comments. Depending on your organization's Git settings, you may need to resolve all comments before merging.

  • Draft pull requests: By default, the Agent excludes draft pull requests from automated reviews. Disable this toggle to include drafts.

  • Automatic summary: Toggle to enable automatic generation of AI summaries for changes, which are appended to the pull request description.

  • Change Walkthrough: Enable this option to generate a table of changes and associated files, posted as a comment on the pull request.

b. Custom Guidelines

Create, apply, and manage custom code review guidelines to align the AI agent’s reviews with your team’s specific coding standards.

The agent will follow your guidelines when reviewing pull requests.

Learn more

c. Filters

Use filters to customize which files, folders, and Git branches are reviewed when the Agent triggers automatically on pull requests:

  • Exclude Files and Folders: A list of files/folders that the AI Code Review Agent will not review if they are present in the diff. You can specify the files/folders to exclude from the review by name or glob/regex pattern. The Agent will automatically skip any files or folders that match the exclusion list. This filter applies to both manual reviews initiated through the /review command and automated reviews.

  • Include Source/Target Branches: This filter defines which pull requests trigger automated reviews based on their source or target branch, allowing you to focus on critical code and avoid unnecessary reviews or AI usage. By default, pull requests merging into the repository’s default branch are subject to review. To review additional branches, use the Include Source/Target Branches filter; Bito will review pull requests when the source or target branch matches the list. This filter applies only to automatically triggered reviews. Users can still trigger reviews manually via the /review command.

For more information and examples, see Excluding Files, Folders, or Branches with Filters.
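To illustrate how a glob-style exclusion list behaves, here is a minimal Python sketch using the standard fnmatch module. The pattern list and file paths are hypothetical examples for illustration only; this is not Bito's actual matching engine:

```python
from fnmatch import fnmatch

def is_excluded(path, exclude_patterns):
    """Return True if the path matches any exclusion pattern."""
    return any(fnmatch(path, pat) for pat in exclude_patterns)

# Hypothetical exclusion list: skip lockfiles, build output, and minified JS
EXCLUDE = ["package-lock.json", "dist/*", "docs/generated/*", "*.min.js"]

changed_files = ["src/app.js", "dist/bundle.js", "package-lock.json"]

# Only files that survive the exclusion filter would be reviewed
to_review = [f for f in changed_files if not is_excluded(f, EXCLUDE)]
```

In this sketch, only src/app.js remains in the review set; the lockfile and the build artifact are filtered out before review.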

d. Tools

Enhance the Agent’s reviews by enabling additional tools for static analysis, security checks, and secret detection:

  • Secret Scanner: Enable this tool to detect and report secrets left in code changes.

e. Chat

You can chat with the AI Code Review Agent to ask follow-up questions, request alternative solutions, or get clarification on review comments. From this tab, you can manage how the agent responds to these interactions.

  • Auto reply: Enable Bito to automatically reply to user questions posted as comments on its code review suggestions—no need to tag @bitoagent or @askbito.

If you are editing an existing agent, click Save to apply the changes.

3. Select repositories for code review

  1. If you are creating a new agent instance, click Select repositories after configuration to choose the Git repositories the agent will review.

  2. To enable code review for a specific repository, simply select its corresponding checkbox. You can also enable repositories later, after the Agent has been created. Once done, click Save and continue to save the new Agent configuration.

  3. When you save the configuration, your new Agent instance will be added and available on the Code Review > Repositories page.

Using in Visual Studio Code

AI that understands your codebase in VS Code

This feature is only available for our Team Plan. Visit the pricing page or billing documentation to learn more about our paid plans.

  1. Open your project’s folder using Visual Studio Code.

  2. Bito uses AI to create an index of your project’s codebase. It enables Bito to understand the code and provide relevant answers. There are three ways to start the indexing process:

    • When you open a new project, a popup box will appear asking whether you want to enable indexing for this project. Click the “Enable” button to start the indexing process, or skip this step by clicking the “Maybe later” button. You can always index the project later.

  • In the bottom-left of the Bito plugin pane, hover your mouse cursor over this icon. You can also enable indexing from here by clicking the “Click to enable it” text.

  • Another option is to open the "Manage Repos" tab by clicking the laptop icon in the top-right corner of the Bito plugin pane.

  • From here you can start the indexing process by clicking on the “Start Indexing” button. Here, you will also see the total indexable size of the repository. Read more about What is Indexable Size?

Bito usually takes around 12 minutes for every 10MB of code to understand your repo.

Bito will still work correctly if you don’t enable indexing of your project. However, in that case, Bito will only be able to provide general answers.

If you have previously indexed some projects using Bito then they will show in the “Other projects” section.

Index building is aborted if the user logs out or if the user's subscription is canceled (downgraded from a paid plan to a free plan).

  3. Let’s start the indexing process by using any of the above-mentioned methods.

  4. The status will now be updated to “Indexing in progress...” instead of “Not Indexed”. You will also see the real-time indexing progress for the current folder, based on the number of files indexed.

Don’t worry if you close VS Code while indexing is in progress. The indexing will be paused and will automatically continue from where it left off when you reopen VS Code. Currently, indexing resumes 5-10 minutes after reopening the IDE.

The progress indicator for the other folders is updated every 5 minutes.

  5. Once the indexing is complete, the status will be updated from “Indexing in progress...” to “Indexed”, and will look like this.

  6. Now you can ask any question regarding your codebase by adding the keyword "my code" to your AI requests in the Bito chatbox. Bito is ready to answer them!

Example: in my code explain the file apiUser.js

Additional keywords for various languages are listed on the Available Keywords page. Also, here are some Example Questions.

  7. If you ever want to delete an index, you can do so by clicking this three-dot button and then clicking the “Delete” button.

Index deletion is allowed even if the index is in progress or in a paused state.

  8. A warning popup box will open at the bottom of Bito’s plugin pane. You can either click the “Delete” button to delete the project’s index from your system or click the “Cancel” button to go back.

A Quick Example from a Real Project

For the sake of this tutorial, we’ve created a simple “Music Player using JavaScript”.

Here’s how it looks:

We have added a bunch of songs to this project. The song details, such as name, artist, image, and music file name, are stored in a file called music-list.js.

Question # 1

Let’s ask Bito to list names of all song artists used in my code

As you can see, Bito gave the correct answer by utilizing its understanding of our repository.

Similarly, we can ask any coding-related question, such as finding bugs, improving code, or adding new features.

Question # 2

Our music player is working fine, but we don’t have any option to mute/unmute the song.

Let’s ask Bito to add this feature.

Here’s the question I used:

In my code how can i add a button to mute and unmute the song? By default, set this button to unmute. Also, use the same design as existing buttons in UI.

After adding the code suggested by Bito, here’s how the music player looks when it starts (unmuted).

And when muted:

Keyboard shortcuts

Bito UI in Visual Studio Code and JetBrains IDEs is entirely keyboard accessible. You can navigate Bito UI with standard keyboard actions such as TAB, SHIFT+TAB, ENTER, and ESC keys. Additionally, you can use the following shortcuts for quick operations.

The following video demonstrates important keyboard shortcuts.

General

Command
Shortcuts

Open Bito Panel: Toggle the Bito Panel on and off in JetBrains IDEs. In Visual Studio Code, the shortcut opens the Bito panel if it is not already open.

SHIFT + CTRL + O

Put the cursor in the chatbox when the Bito panel is in focus.

SPACEBAR (Or start typing your question directly)

Execute the chat command

ENTER

Add a new line in the chatbox

CTRL + ENTER or SHIFT + ENTER

Modify the most recently executed prompt. This copies the last prompt into the chatbox for editing.

CTRL + M

Expand and collapse the "Shortcut" panel

Windows: CTRL + ⬆️ / ⬇️ macOS: CTRL + SHIFT + ⬆️ / ⬇️

Question & Answers

The following keyboard shortcuts work after the Q/A block is selected.

Command
Keyboard Shortcut

Navigate between the Questions/Answers block.

Note: You must select the Q/A container with TAB/SHIFT+TAB.

⬆️ / ⬇️

Copy the answer to the clipboard.

CTRL + C

Insert the answer in the code editor

CTRL + I

Toggle the diff view (when Diff View is applicable)

CTRL + D

Expand/Collapse the code block in the question.

Windows: CTRL + ⬆️ / ⬇️ macOS: CTRL + SHIFT + ⬆️ / ⬇️

Regenerate the answer

CTRL + L

Modify the prompt for the selected Q&A. Bito copies the prompt into the chatbox so you can modify it as needed.

CTRL + U

Change Default Keyboard Shortcuts

Bito has carefully selected its keyboard shortcuts after thorough testing. However, a Bito key combination may still conflict with IDE or other extension shortcuts. You can change Bito's default shortcut keys to avoid such conflicts.

Visual Studio Code Editor

  1. To open the Keyboard Shortcuts editor in VS Code, navigate to File > Preferences > Keyboard Shortcuts (Code > Preferences > Keyboard Shortcuts on macOS).

  2. Search for default available commands, keybindings, or Bito extension-specific commands in the VS Code Keyboard Shortcuts editor.

  3. Finding a conflict in a key binding → Search for the key and take the necessary action, e.g., Remove or Reset.

  4. Add a new key binding or map an existing Bito extension command. Provide the necessary information (Command ID) to add the new key binding.

JetBrains

JetBrains Document: https://www.jetbrains.com/help/idea/configuring-keyboard-and-mouse-shortcuts.html

  1. File > Settings > Keymaps > Configure keymaps

  2. Bito extension shortcuts can be overwritten by going to File > Settings > Keymaps > Configure keymaps and editing the action you want to assign. This will also overwrite the Bito shortcut if there are conflicts.

  3. Bito extension keyboard shortcuts can be changed from the IntelliJ settings: File > Settings > Keymaps > Configure keymaps > Plugins > Bito, then right-click the action you want to change.

  4. Bito extension keyboard shortcuts can be deleted from the IntelliJ settings: File > Settings > Keymaps > Configure keymaps > Plugins > Bito, then right-click the action you want to delete.

Implementing custom code review rules

Customize Bito’s AI Code Review Agent to enforce your coding practices.

Bito’s AI Code Review Agent offers a flexible solution for teams looking to enforce custom code review rules, standards, and guidelines tailored to their unique development practices. Whether your team follows specific coding conventions or industry best practices, you can customize the Agent to suit your needs.

We support two ways to customize AI Code Review Agent’s suggestions:

  1. Provide feedback on Bito-reported issues in pull requests, and the AI Code Review Agent automatically adapts by creating code review rules to prevent similar suggestions in the future.

  2. Create custom code review guidelines via the dashboard. Define rules through the dashboard in Bito Cloud and apply them to agent instances in your workspace.

1- Provide feedback on Bito-reported issues

AI Code Review Agent refines its suggestions based on your feedback. When you provide negative feedback on Bito-reported issues in pull requests, the Agent automatically adapts by creating custom code review rules to prevent similar suggestions in the future.

Depending on your Git platform, you can provide negative feedback in the following ways:

  • GitHub: Select the checkbox given in feedback question at the end of each Bito suggestion or leave a negative comment explaining the issue with the suggestion.

  • GitLab: React with negative emojis (e.g., thumbs down) or leave a negative comment explaining the issue with the suggestion.

  • Bitbucket: Provide manual review feedback by leaving a negative comment explaining the issue with the suggestion.

The custom code review rules are displayed on the Review Rules dashboard in Bito Cloud.

These rules are applied at the repository level for the specific programming language.

By default, newly generated custom code review rules are disabled. Once negative feedback for a specific rule reaches a threshold of 3, the rule is automatically enabled. You can also manually enable or disable these rules at any time using the toggle button in the Status column.

Note: Providing a positive reaction emoji or comment has no effect and will not generate a new code review rule.

After you provide negative feedback, Bito generates a new code review rule in your workspace. The next time the AI Code Review Agent reviews your pull requests, it will automatically filter out the unwanted suggestions.

2- Create custom code review guidelines

We understand that different development teams have unique needs. To accommodate these needs, we offer the ability to implement custom code review guidelines in Bito’s AI Code Review Agent.

Once you add guidelines, the agent will follow them when reviewing pull requests. You can manage guidelines (create, apply, and edit) entirely in the dashboard.

By enabling custom code review guidelines, Bito helps your team maintain consistency and improve code quality.

Note: Custom review guidelines are available only on the Enterprise Plan. Enabling them also upgrades your workspace to the Enterprise Plan.

How to add a guideline

Step 1: Open the Custom Guidelines tab

  • Sign in to Bito Cloud.

  • Click Custom Guidelines in the sidebar.

Step 2: Fill the form

A. Manual setup

  1. Click Add guidelines button from the top right.

  2. Fill out:

    • Guideline name

    • Language (select a specific programming language or select General if the guideline applies to all languages)

    • Custom Guidelines and Rules (enter your guidelines here)

  3. Click Create guideline.

B. Use a Template

  1. Click Add guidelines button from the top right.

  2. Choose a template from the Use template dropdown menu.

  3. Review/edit fields as needed.

  4. Click Create guideline.

Step 3: Apply to an Agent

  • After creating a guideline, you’ll see an Apply review guideline dropdown.

  • Select the Agent instance, then click Manage review guidelines to open its settings.

To apply the guideline later: go to the Repositories dashboard, find the Agent instance, click Settings, and manage guidelines there.

Step 4: Save configuration

On the Agent settings page, hit Save (top-right) to apply guideline changes.

Note: Visit the Custom Guidelines tab to edit or delete any guideline.

Managing review guidelines from agent settings

Efficiently control which custom guidelines apply to your AI Code Review Agent through the Agent settings interface.

  1. Go to the Repositories dashboard from the Bito Cloud sidebar.

  2. Click Settings next to the target agent instance.

  3. Navigate to the Custom Guidelines section. Here you can either create a new guideline or select from existing guidelines.

  4. Create a new guideline

    • If you click the Create a new guideline button, you will see the same form described earlier, where you can enter the details to create a review guideline.

  5. Or select an existing guideline

    • If you click the Select from existing guidelines button, a popup screen appears where you can select from a list of review guidelines you have already created. Use the checkboxes to enable or disable each guideline for the selected agent, then click Add selected.

  6. Once you’ve applied or adjusted guidelines, click the Save button in the top-right corner to confirm your changes.

FAQs

What types of custom code review guidelines can be implemented?

You can implement a wide range of custom code review guidelines, including:

  • Style and formatting guidelines

  • Security best practices

  • Performance optimization checks

  • Code complexity and maintainability standards

  • etc.

Is "custom code review guidelines" feature available in Team Plan?

No, this feature is available exclusively on the Enterprise Plan. Enabling the "custom code review guidelines" feature also upgrades your workspace to the Enterprise Plan.

For more details on the Enterprise Plan, visit our pricing page.

Managing Index Size

Exclude unnecessary files and folders from repo to index faster!

What is Indexable Size?

Indexable size is the size of all code files, excluding the following from the folder:

  • Directory/File based filtering

    • logs, node_modules, dist, target, bin, package-lock.json, data.json, build, .gradle, .idea, gradle, extension.js, vendor.js, ngsw.json, polyfills.js, ngsw-worker.js, runtime.js, runtime-main.js, service-worker.js, bundle.js, bundle.css

  • Extension based filtering

    • bin, exe, dll, log, aac, avif, bmp, cda, gif, mp3, mp4, mpeg, weba, webm, webp, oga, ogv, png, jpeg, jpg, wpa, tif, tiff, svg, ico, wav, mov, avi, doc, docx, ppt, pptx, xls, xlsx, ods, odp, odt, pdf, epub, rar, tar, zip, vsix, 7z, bz, bz2, gzip, jar, war, gz, tgz, woff, woff2, eot, ttf, map, apk, app, ipa, lock, tmp, logs, gmo, pt

  • Hidden files (i.e., files starting with ".") are filtered.

  • All empty files are filtered.

  • All binary files are also filtered.
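The filtering described above can be sketched in Python. This is a simplified, illustrative calculation over a list of (path, size) pairs with a small subset of the excluded directories and extensions, not Bito's exact logic:

```python
# Simplified filter sets (a small subset of the lists above, for illustration)
EXCLUDED_DIRS = {"logs", "node_modules", "dist", "target", "bin", "build"}
EXCLUDED_EXTS = {"png", "jpg", "zip", "log", "exe", "dll"}

def indexable_size(files):
    """Sum the sizes of files that survive the filters: skip files inside
    excluded or hidden directories, hidden files, excluded extensions,
    and empty files. `files` is an iterable of (relative_path, size)."""
    total = 0
    for path, size in files:
        parts = path.split("/")
        name = parts[-1]
        if any(p in EXCLUDED_DIRS or p.startswith(".") for p in parts[:-1]):
            continue  # inside an excluded or hidden directory
        if name.startswith("."):
            continue  # hidden file
        if "." in name and name.rsplit(".", 1)[-1].lower() in EXCLUDED_EXTS:
            continue  # excluded extension
        if size == 0:
            continue  # empty file
        total += size
    return total
```

For example, a source file counts toward the total, while a lockfile inside node_modules, an image, a hidden file, or an empty file does not.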

Is there any limit on repository size?

For workspaces that have upgraded to Bito's Team Plan, we have set the indexable size limit to 120MB per repo. However, once we launch the "AI that Understands Your Code" feature for our Free Plan users, they will be restricted to repositories with an indexable size limit of 10MB.

Learn more about indexable size above and see which files and folders are excluded by default.

You can reduce your repo's indexable size by excluding certain files and folders from indexing using a .bitoignore file and remain within the limit.

What if a repo hits the 120MB limit?

If a repo hits the 120MB limit, the error message below will be displayed in the "Manage repos" tab and the repo's index status will change to "Not Indexed".

Sorry, we don’t currently support repos of this size. Please use .bitoignore to reduce the size of the repo you want Bito to index.

To fix this issue, follow our instructions on how to use the .bitoignore file to reduce your repo's size and bring it under the max limit of 120MB.

After that, you must delete the index and then restart the indexing by clicking the "Start Indexing" button shown for the repo folder. You can also follow our step-by-step guides to Start Indexing in Visual Studio Code and JetBrains IDEs.

What is .bitoignore and how to use it?

A .bitoignore file is a plain text file where each line contains a pattern or rule that tells Bito which files or directories to ignore and not index. In other words, it's a way to reduce your repo's indexable size. You can also see which files/folders are excluded by default.

There are two ways to use .bitoignore file:

  1. Create a .bitoignore file inside the folder where indexes created by Bito are stored. e.g. <user-home-directory>/.bito/localcodesearch/.bitoignore

    • On Windows, this path will be something like: C:\Users\<your username>\.bito\localcodesearch\.bitoignore

    • Note: The custom ignore rules you set in this .bitoignore file will be applied to all the repositories where you have enabled indexing.

  2. Create a .bitoignore file inside your repository's root folder.

If a .gitignore file is available in your repo, Bito will also use it to exclude files and folders from the indexing process. Both .bitoignore and .gitignore files can work together without any issues.

At present, Bito considers only those .gitignore files that are placed in the project root directory, and .bitoignore files placed either in <user-home-directory>/.bito/localcodesearch or <project-root-directory>.

Changes to the .bitoignore file are taken into account at the beginning of the indexing process, not during or after the indexing itself.

Therefore, to apply changes made to the .bitoignore file, you'll need to delete the index and then restart the indexing by clicking the "Start Indexing" button shown for the repo folder. You can also follow our step-by-step guides to Start Indexing in Visual Studio Code and JetBrains IDEs.

Please note that any changes to the .bitoignore or .gitignore file will take a minimum of 3 to 5 minutes to trigger new indexing.

Common .bitoignore Patterns

Understanding these patterns/rules is crucial for effectively managing the files and directories that Bito indexes and excludes in your projects.

Sample Rule
Description

Negation ! (exclamation mark)

When a pattern starts with ! it negates the pattern, meaning it explicitly includes files or directories that would otherwise be ignored. For example, have a look at this sample .bitoignore file:

Here the !Engine/Build/BatchFiles/** pattern includes all files in the Engine/Build/BatchFiles directory and its subdirectories, even though the Engine/** pattern would ignore them.

Avoid Ambiguous Patterns: Negation patterns can become confusing when they potentially match multiple files. Target specific files or folders rather than using wildcards in negation patterns.

For example, it is better to use patterns like !Engine/Build/BatchFiles/script.bat instead of !Engine/Build/BatchFiles/**
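The last-match-wins behavior of negation rules can be sketched in Python. This is a simplified, illustrative evaluator built on the standard fnmatch module, not Bito's actual .bitoignore parser:

```python
from fnmatch import fnmatch

def is_ignored(path, rules):
    """Evaluate .bitoignore-style rules top to bottom; the last matching
    rule wins, and a leading '!' negates (re-includes) the path."""
    ignored = False
    for rule in rules:
        negate = rule.startswith("!")
        pattern = rule[1:] if negate else rule
        if fnmatch(path, pattern):
            ignored = not negate
    return ignored

# Ignore everything under Engine/, then re-include one specific file
rules = ["Engine/**", "!Engine/Build/BatchFiles/script.bat"]
```

With these rules, Engine/Core/main.cpp stays ignored, while Engine/Build/BatchFiles/script.bat is re-included by the later negation rule.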

.bitoignore Examples

Exclude Files/Folders

Exclude Everything Except Specific Files

To exempt a file, ensure that the negation pattern ! appears after the broader exclusion, thereby overriding it.

Use cases and examples

Discover Real-World Applications of AI Code Completions

Say goodbye to endless searches on Google or Stack Overflow for answers to your coding dilemmas. Discover the advantages of Bito's AI Code Completions feature outlined below, designed to streamline your coding process and boost your productivity.

Click the tabs below to discover more cool examples!

Autocomplete Repetitive Code

Comment Used:

Writing Regular Expressions (regex)

Autocomplete regex patterns as you type or generate from comment.

Comment Used:
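As an illustration of the kind of pattern a completion might produce, here is a hypothetical Python example: validating a date in YYYY-MM-DD format. The comment, pattern, and function are illustrative, not captured Bito output:

```python
import re

# Hypothetical comment a developer might write:
#   "regex to validate a date in YYYY-MM-DD format"
# A completion of this kind could look like:
DATE_PATTERN = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_valid_date(text):
    """Return True if text looks like a YYYY-MM-DD date."""
    return DATE_PATTERN.match(text) is not None
```

Note this pattern checks the format only (month 01-12, day 01-31); it does not validate month lengths or leap years.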

Writing SQL Queries

Autocomplete SQL queries for CRUD operations, table structure definitions, SQL joins, wildcard characters, etc. You can even ask Bito to write safer queries to prevent SQL injection.

Comment Used:
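To show what "safer queries to prevent SQL injection" means in practice, here is a hypothetical Python sketch using the standard sqlite3 module and a parameterized query. The table and function are illustrative examples, not captured Bito output:

```python
import sqlite3

# In-memory database with a hypothetical "questions" table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questions (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO questions (title) VALUES (?)", ("How do joins work?",))

def get_question_title(conn, question_id):
    """Fetch a question title using a parameterized query, which keeps
    user input out of the SQL text and prevents SQL injection."""
    row = conn.execute(
        "SELECT title FROM questions WHERE id = ?", (question_id,)
    ).fetchone()
    return row[0] if row else None
```

The unsafe alternative would be building the SQL string with f-strings or concatenation; the `?` placeholder lets the database driver handle the value safely.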

Internationalization (i18n)

Effortlessly translate your user interface (UI) into any widely spoken language of your choice.

Comment Used:

Generating Sample Data

Populate arrays, variables, objects, and more with dummy data to facilitate thorough testing scenarios.

Comment Used:
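A completion prompted by a comment like "create an array of arrays that contains blog posts info, such as: title, post_slug, excerpt, author name" could look along these lines in Python. The helper function and field values are hypothetical, seeded for reproducible test data:

```python
import random

def make_dummy_posts(n, seed=0):
    """Generate n dummy blog posts (title, post_slug, excerpt, author),
    mirroring the kind of sample data a completion might produce."""
    rng = random.Random(seed)  # fixed seed keeps the data reproducible
    authors = ["Ada", "Grace", "Linus"]
    return [
        {
            "title": f"Post {i}",
            "post_slug": f"post-{i}",
            "excerpt": f"A short excerpt for post {i}.",
            "author": rng.choice(authors),
        }
        for i in range(1, n + 1)
    ]
```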

Writing Custom Functions

Bito is really good at writing custom functions. Just provide your requirements in comments and watch Bito generate the entire function for you.

Comment Used:
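As a concrete illustration, the checkPasswordStrength comment shown in this guide's examples could be completed along these lines. This Python version is an illustrative sketch of such a completion, not captured Bito output:

```python
import re

def check_password_strength(password):
    """Return True if the password meets the criteria from the comment:
    at least 12 characters, 2 uppercase letters, 1 special character
    (!@#$&*), 2 numerals, and 3 lowercase letters."""
    return (
        len(password) >= 12
        and len(re.findall(r"[A-Z]", password)) >= 2
        and len(re.findall(r"[!@#$&*]", password)) >= 1
        and len(re.findall(r"[0-9]", password)) >= 2
        and len(re.findall(r"[a-z]", password)) >= 3
    )
```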

Generate OOP Boilerplate Code (Class, Properties, Methods)

Quickly generate boilerplate code for class definitions, including properties, a constructor, and getter/setter methods. You may need to provide additional comments to generate methods with custom functionality.

Here, the first screenshot displays an example of . The other two screenshots are examples of .

Comment Used:

Comment Used:
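Boilerplate of this kind can be sketched in Python. The Song class below is a hypothetical example of a generated class with a constructor and getter/setter methods, not output captured from Bito:

```python
class Song:
    """A simple data class with a constructor and getter/setter methods,
    the kind of boilerplate an AI completion might generate."""

    def __init__(self, title, artist):
        self._title = title
        self._artist = artist

    def get_title(self):
        return self._title

    def set_title(self, title):
        self._title = title

    def get_artist(self):
        return self._artist

    def set_artist(self, artist):
        self._artist = artist
```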

Writing Docstrings

Automatically generate docstrings for functions and classes.

Comment Used:
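A generated docstring typically documents the arguments, return value, and a short example. Here is a hypothetical Python function with the kind of docstring such a completion might produce (the slugify helper itself is an illustrative assumption, not from the original):

```python
import re

def slugify(title):
    """Convert a post title into a URL-friendly slug.

    Args:
        title (str): The human-readable title, e.g. "Hello World!".

    Returns:
        str: Lowercase words joined by hyphens with punctuation removed,
        e.g. "hello-world".
    """
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```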

Not Just These, There's More!

  • Generate try...catch blocks.

  • Test-driven development (TDD)

  • Writing unit tests.

  • Writing test double.

  • Generate code for Object-Relational Mapping (ORM).

  • Generate code for Object Document Mapper (ODM).

  • Autocomplete loops (for, while, do...while, foreach)

  • Autocomplete conditional statements (if...else, if...elseif...else, switch)

  • Suggest existing functions from your codebase that can be called in the current scope.

  • Autocomplete Dockerfile Commands

  • Get Code for Popular Algorithms (e.g. A*, Dijkstra, etc.)

  • etc.

Using in JetBrains IDEs

AI that understands your codebase in JetBrains IDEs (e.g., PyCharm)

This feature is only available for our Team Plan. Visit the pricing page or billing documentation to learn more about our paid plans.

  1. Open your project’s folder using a JetBrains IDE. For this guide, we are using PyCharm.

  2. Bito uses AI to create an index of your project’s codebase. It enables Bito to understand the code and provide relevant answers. There are three ways to start the indexing process:

    • When you open a new project, a popup box will appear asking whether you want to enable indexing for this project. Click the “Enable” button to start the indexing process, or skip this step by clicking the “Maybe later” button. You can always index the project later.

  • In the bottom-left of the Bito plugin pane, hover your mouse cursor over this icon. You can also enable indexing from here by clicking the “Click to enable it” text.

  • Another option is to open the "Manage Repos" tab by clicking the laptop icon in the top-right corner of the Bito plugin pane.

  • From here you can start the indexing process by clicking on the “Start Indexing” button given in front of your current project. Here, you will also see the total indexable size of the repository. Read more about What is Indexable Size?

Bito usually takes around 12 minutes for every 10MB of code to understand your repo.

Bito will still work correctly if you don’t enable indexing of your project. However, in that case, Bito will only be able to provide general answers.

If you have previously indexed some projects using Bito then they will show in the “Other projects” section.

Index building is aborted if the user logs out or if the user's subscription is canceled (downgraded from a paid plan to a free plan).

  3. Let’s start the indexing process by using any of the above-mentioned methods.

  4. The status will now be updated to “Indexing in progress...” instead of “Not Indexed”. You will also see the real-time indexing progress for the current folder, based on the number of files indexed.

Don’t worry if you close the JetBrains IDE (e.g., PyCharm) while indexing is in progress. The indexing will be paused and will automatically continue from where it left off when you reopen the IDE. Currently, indexing resumes 5-10 minutes after reopening the IDE.

The progress indicator for the other folders is updated every 5 minutes.

  5. Once the indexing is complete, the status will be updated from “Indexing in progress...” to “Indexed”, and will look like this.

  6. Now you can ask any question regarding your codebase by adding the keyword "my code" to your AI requests in the Bito chatbox. Bito is ready to answer them!

Example: in my code explain the file apiUser.js

Additional keywords for various languages are listed on the Available Keywords page. Also, here are some Example Questions.

  7. If you ever want to delete an index, you can do so by clicking this three-dot button and then clicking the “Delete” button.

Index deletion is allowed even if the index is in progress or in a paused state.

  8. A warning popup box will open at the bottom of Bito’s plugin pane. You can either click the “Delete” button to delete the project’s index from your system or click the “Cancel” button to go back.

A Quick Example from a Real Project

For the sake of this tutorial, we’ve created a clone of the popular game “Wordle” using Python.

Here’s how it looks:

We have stored the lists of words in files inside the “word_files” folder. A word is selected randomly from these files at the start of the game, and the player has to guess it.

Question # 1

Let’s ask Bito to understand my code and briefly write about what this game is all about and how to play it

Bito correctly described the game by just looking at its source code.

Question # 2

Our game (PyWordle) is working fine, but there is no countdown timer to make it a bit more challenging.

So, let’s ask Bito to add this feature.

Here’s the question I used:

suggest code for main.py "class PyWordle" to add a count down timer for this game in my code. I'm using "self" in functions and variable names, so suggest the code accordingly. The player will lose the game if the time runs out. Set the time limit to 2 minutes (format like 02:00). The timer will start when the game starts. Also reset the timer when the game restarts, such as when player closes the "you won / you lost" popup. Display this real-time count down timer on the right-side of where the player score is displayed. Use the similar design as the player score UI. Also tell me exactly where to add your code. Make sure all of this functionality is working.

Bito suggested code that looked good, but it was a bit incomplete and needed some improvements. So I asked Bito a series of follow-up questions (one by one) to fix the remaining issues.

After adding the code suggested by Bito, here's how the PyWordle game looks now. As you can see, the countdown timer appears exactly where we wanted it.

# this is a comment.

Any line that starts with a # symbol is considered a comment and will not be processed.

*

(Wildcard character) Ignores all files

**

(Wildcard character) Match any number of directories.

todo.txt

Ignores a specific file named todo.txt

*.txt

Ignores all files ending with .txt

*.*

Ignores all files with any extension.

Engine/ or Engine/**

Ignores all files in the Engine directory and its subdirectories (contents).

subdirectory1/example.html

Ignores the file named example.html, specifically located in the directory named subdirectory1.

!contacts.txt

(Negation Rule) Explicitly tracks contacts.txt, even if all .txt files are ignored.

!Engine/Batch/Builds

(Negation Rule) Tracks the Builds directory inside Engine/Batch, overriding a broader exclusion.

!Engine/Batch/Builds/**

(Negation Rule) Tracks the Builds directory and all of its subdirectories inside Engine/Batch, overriding a broader exclusion.

!*.java

(Negation Rule) Ensures that all .java files are included, overriding any previous ignore rules that might apply to them.

!subdirectory1/*.txt

(Negation Rule) Tracks files with the .txt extension located specifically in the subdirectory1 directory, even if other rules might otherwise ignore .txt files.

BitoUtil?.java

The ? (question mark) matches any single character in a filename or directory name.

Engine/**
!Engine/Build/BatchFiles/**
# Ignore specific file named "config.ini"
config.ini

# Ignore all files with a '.bak' extension
*.bak

# Ignore all files with a '.kunal' extension
*.kunal

# Exclude directories
backup
temp/
dist/vendor
# Ignore all files except C++, header and python files
*
!*.cpp
!*.h
!*.py
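The interplay of ignore and negation rules above can be illustrated with a short sketch. It assumes gitignore-style semantics (the last matching rule wins) and uses simple fnmatch globbing, so it is an approximation rather than Bito's exact matcher:

```python
import fnmatch

def is_ignored(path, rules):
    # Last matching rule wins; a leading "!" re-includes the file.
    ignored = False
    for rule in rules:
        rule = rule.strip()
        if not rule or rule.startswith("#"):
            continue  # blank lines and comments are not processed
        negate = rule.startswith("!")
        pattern = rule.lstrip("!")
        basename = path.rsplit("/", 1)[-1]
        if fnmatch.fnmatch(path, pattern) or fnmatch.fnmatch(basename, pattern):
            ignored = not negate
    return ignored
```

For example, with the rules ["*.txt", "!contacts.txt"], notes.txt is ignored while contacts.txt remains tracked.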
/*
write a function checkPasswordStrength($password)
it checks the strength of password based on below criteria:
	- Minimum password length must be 12 characters
	- 2 letters in Upper Case
	- 1 Special Character (!@#$&*)
	- 2 numerals (0-9)
	- 3 letters in Lower Case
return true if the password meets the above criteria otherwise return false
use regex for comparison
*/
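For reference, here is one way the requested function could turn out, translated to Python (the prompt above targets PHP; this is an illustrative sketch of the stated criteria, not Bito's actual output):

```python
import re

def check_password_strength(password: str) -> bool:
    # Criteria from the prompt: length >= 12, >= 2 uppercase letters,
    # >= 1 special character (!@#$&*), >= 2 numerals, >= 3 lowercase letters.
    return (
        len(password) >= 12
        and len(re.findall(r"[A-Z]", password)) >= 2
        and re.search(r"[!@#$&*]", password) is not None
        and len(re.findall(r"[0-9]", password)) >= 2
        and len(re.findall(r"[a-z]", password)) >= 3
    )
```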
# write a function get_questions(self)
# Inside it, write an SQL query that retrieves everything from questions, answers, and question_votes tables.
# Use joins to connect the tables questions.id with answers.question_id and questions.id with question_votes.question_id to get all questions with its answers and votes.
# finally, return the result
// i want to support these languages for my user interface:
// english, chinese, french, hindi, urdu
// create an array of arrays that contains blog posts info, such as: title, post_slug, excerpt, author name
# write datetime count down timer function for coming soon page
# target date is 29th Oct 2023
# The format should be like:
#   Coming Soon...
#   Remaining Time: 02 Days 6 Hours 34 Minutes 59 Seconds
# write getter/setter functions for each property
# write a function that uses the properties from this class and send email using SMTP
# suggest docstring for this Email class

Guide for GitHub (Self-Managed)

Integrate the AI Code Review Agent into your self-hosted GitHub Enterprise workflow.

Speed up code reviews by configuring the AI Code Review Agent with your self-managed GitHub Enterprise server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Get a 14-day FREE trial of Bito's AI Code Review Agent.

Video tutorial

Coming soon...

Prerequisites

Before proceeding, ensure you've completed all necessary prerequisites.

1. Create a GitHub Personal Access Token (classic):

For GitHub pull request code reviews, ensure you have a CLASSIC personal access token with repo scope. We do not support fine-grained tokens currently.

View Guide

GitHub Personal Access Token (classic)

2. Authorizing a GitHub Personal Access Token for use with SAML single sign-on:

If your GitHub organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token (classic) through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

For detailed instructions, please refer to the GitHub documentation.

Installation and configuration steps

Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

Step 1: Log in to Bito

Log in to Bito Cloud and select a workspace to get started.

Step 2: Open the Code Review Agents setup

Click Repositories under the CODE REVIEW section in the sidebar.

Step 3: Select your Git provider

Bito supports integration with the following Git providers:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

Since we are setting up the Agent for self-managed GitHub Enterprise server, select GitHub (Self-Managed) to proceed.

Supported versions:

  • GitHub Enterprise Server: 3.0 and above

Step 4: Register & install the Bito App for GitHub

To enable pull request reviews, you need to register and install Bito's AI Code Review Agent app on your self-managed GitHub Enterprise server.

If your network blocks external services from interacting with the GitHub server, whitelist all of Bito's gateway IP addresses in your firewall to ensure Bito can access your self-hosted repositories. The Agent response can come from any of these IPs.

  • List of IP addresses to whitelist:

    • 18.188.201.104

    • 3.23.173.30

    • 18.216.64.170

Enter the details for the following input fields:

  • Hosted GitHub URL: This is the domain portion of the URL where your GitHub Enterprise Server is hosted (e.g., https://yourcompany.github.com). Please check with your GitHub administrator for the correct URL.

  • Personal Access Token: Generate a Personal Access Token (classic) with “repo” scope in your GitHub (Self-Managed) account and enter it into the Personal Access Token input field. We do not support fine-grained tokens currently. For guidance, refer to the instructions in the Prerequisites section.
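To sanity-check the token and URL before clicking Validate, you can query the server's REST API yourself. GitHub Enterprise Server exposes its API under /api/v3; the domain and token below are placeholders:

```python
from urllib.request import Request

GIT_DOMAIN = "https://yourcompany.github.com"  # placeholder: your server's URL
TOKEN = "ghp_example"  # placeholder: classic PAT with repo scope

# GitHub Enterprise Server serves its REST API under /api/v3.
req = Request(f"{GIT_DOMAIN}/api/v3/user",
              headers={"Authorization": f"token {TOKEN}"})
# urllib.request.urlopen(req) would return your user profile if the token
# is valid; the network call is omitted in this sketch.
```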

Click Validate to ensure the login credentials are working correctly. If the credentials are successfully validated, click the Install Bito App for GitHub button. This will redirect you to your GitHub (Self-Managed) server.

Now select where you want to install the app:

  • Choose All repositories to enable Bito for every repository in your account.

  • Or, select Only select repositories and pick specific repositories using the dropdown menu.

Bito app uses these permissions:

  • Read access to code and metadata

  • Read and write access to issues and pull requests

  • Read access to organization members

Click Install & Authorize to proceed. Once completed, you will be redirected to Bito.

Step 5: Enable AI Code Review Agent on repositories

After connecting Bito to your self-managed GitHub Enterprise server, you need to enable the AI Code Review Agent for your repositories.

Click the Go to repository list button to view all repositories Bito can access in your self-managed GitHub Enterprise server.

Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

Learn more

Step 6: Automated and manual pull request reviews

Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

  1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

  2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

Note: To enhance efficiency, the automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews. Users should still be able to trigger reviews manually via the /review command.

The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

Step 7: Specialized commands for code reviews

Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

  • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

  • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

  • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

For more details, refer to Available Commands.
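The command variants above follow a simple pattern; a small helper (hypothetical, for illustration only) makes the shape explicit:

```python
def build_review_command(scope=None, inline_comment=True):
    # scope: None or one of "security", "performance", "scalability",
    # "codeorg", "codeoptimize" (from the list above).
    cmd = "/review"
    if scope:
        cmd += f" {scope}"
    if not inline_comment:
        cmd += " #inline_comment=False"
    return cmd
```

For example, build_review_command("security") yields "/review security", and build_review_command(inline_comment=False) yields "/review #inline_comment=False".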

Step 8: Chat with AI Code Review Agent

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

Screenshots

Screenshot # 1

AI-generated pull request (PR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a pull request.

Changelist in AI Code Review Agent's feedback.

Screenshot # 3

AI code review feedback posted as comments on the pull request.

Install/run via GitHub Actions

Seamlessly integrate automated code reviews into your GitHub Actions workflows.

Prerequisites

  • Bito Access Key: Obtain your Bito Access Key. View Guide

  • GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently. View Guide

GitHub Personal Access Token (Classic)

Installation and Configuration Steps:

  1. Enable GitHub Actions:

    • Login to your GitHub account.

    • Open your repository and click on the "Settings" tab.

    • Select "Actions" from the left sidebar, then click on "General".

    • Under "Actions permissions", choose "Allow all actions and reusable workflows" and click "Save".

  2. Set Up Environment Variables:

    • Still in the "Settings" tab, navigate to "Secrets and variables" > "Actions" from the left sidebar.

    • Configure the following under the "Secrets" tab:

      For each secret, click the New repository secret button, then enter the exact name and value of the secret in the form. Finally, click Add secret to save it.

      • Name: BITO_ACCESS_KEY

        • Secret: Enter your Bito Access Key here. Refer to the guide for obtaining your Bito Access Key.

      • Name: GIT_ACCESS_TOKEN

        • Secret: Enter your GitHub Personal Access Token (Classic) with repo access. We do not support fine-grained tokens currently. For more information, see the Prerequisites section.

Check the above "Prerequisites" section to learn more about creating the access tokens needed to configure the Agent.

  • Configure the following under the "Variables" tab:

    For each variable, click the New repository variable button, then enter the exact name and value of the variable in the form. Finally, click Add variable to save it.

    • Name: STATIC_ANALYSIS_TOOL

      • Value: Enter the following text string as value: fb_infer,astral_ruff,mypy

    • Name: GIT_DOMAIN

      • Value: Enter the domain name of your Enterprise or self-hosted GitHub deployment or skip this if you are not using Enterprise or self-hosted GitHub deployment.

      • Example of domain name: https://your.company.git.com

    • Name: EXCLUDE_BRANCHES

      • Value: Specify branches to exclude from the review by name or valid glob/regex patterns. The agent will skip the pull request review if the source or target branch matches the exclusion list.

      • Note: For more information, see Source or Target branch filter.

    • Name: EXCLUDE_FILES

      • Value: Specify files/folders to exclude from the review by name or glob/regex pattern. The agent will skip files/folders that match the exclusion list.

      • Note: For more information, see Files and folders filter.

    • Name: EXCLUDE_DRAFT_PR

      • Value: Enter True to disable automated review for draft pull requests, or False to enable it.

      • Note: For more information, see Draft pull requests filter.
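Conceptually, the EXCLUDE_BRANCHES and EXCLUDE_FILES values act as exclusion lists matched by name or pattern. A rough sketch of the glob case (the Agent's exact matching semantics, including regex support, may differ):

```python
import fnmatch

def is_excluded(name, patterns):
    # True if the branch or file name matches any exclusion pattern.
    return any(fnmatch.fnmatch(name, p.strip()) for p in patterns)
```

With EXCLUDE_BRANCHES set to release/*, a pull request from release/1.2 would be skipped, while one from main would still be reviewed.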

  3. Create the Workflow Directory:

    • In your repository, create a new directory path: .github/workflows.

  4. Add the Workflow File:

    • Download this test_cra.yml file from AI Code Review Agent's GitHub repo.

    • In your repository, upload this test_cra.yml file inside the .github/workflows directory either in your source branch of each PR or in a branch (e.g. main) from which all the source branches for PRs will be created.

    • Commit your changes.

Customizations for self-hosted GitHub

  1. Create a self-hosted Runner using Linux image and x64 architecture as described in the GitHub documentation.

  2. Create a copy of the main branch of Bito's repository gitbito/codereviewagent in your self-hosted GitHub organization (e.g., "myorg") under a name of your choice (e.g., "gitbito-bitocodereview"). In this example, the repository will be accessible as "myorg/gitbito-bitocodereview".

  3. Update test_cra.yml as below:

    • Change line from:

      • runs-on: ubuntu-latest

    • to:

      • runs-on: <label of your self-hosted GitHub runner>, e.g., self-hosted or linux

  4. Update test_cra.yml as below:

    • Replace all lines having below text:

      • uses: gitbito/codereviewagent@main

    • with:

      • uses: myorg/gitbito-bitocodereview@main

  5. Commit and push your changes in test_cra.yml.
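Steps 3 and 4 above amount to two textual substitutions in test_cra.yml. They can be scripted as a sketch (the runner label and repository name are the example values from above):

```python
def retarget_workflow(text,
                      runner_label="self-hosted",
                      repo="myorg/gitbito-bitocodereview"):
    # Apply the two edits described above to the contents of test_cra.yml.
    text = text.replace("runs-on: ubuntu-latest", f"runs-on: {runner_label}")
    return text.replace("uses: gitbito/codereviewagent@main",
                        f"uses: {repo}@main")
```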

Using the AI Code Review Agent

After configuring the GitHub Actions, you can invoke the AI Code Review Agent in the following ways:

Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please contact support.

  1. Automated Code Review: The agent will automatically review new pull requests as soon as they are created and post the review feedback as a comment within your PR.

  2. Manually Trigger Code Review: To start the process, simply type /review in the comment box on the pull request and submit it. This command prompts the agent to review the pull request and post its feedback directly in the PR as a comment.

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.

It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

Screenshots

Screenshot # 1

AI-generated pull request (PR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a pull request.

Changelist in AI Code Review Agent's feedback.

Screenshot # 3

AI code review feedback posted as comments on the pull request.

Guide for GitLab

Integrate the AI Code Review Agent into your GitLab workflow.

Speed up code reviews by configuring the AI Code Review Agent with your GitLab repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Get a 14-day FREE trial of Bito's AI Code Review Agent.

Video tutorial

Coming soon...

Prerequisites

Before proceeding, ensure you've completed all necessary prerequisites.

1. Create a GitLab Personal Access Token:

For GitLab merge request code reviews, a token with api scope is required. Make sure that the token is created by a GitLab user who has the Maintainer access role.

View Guide

We recommend setting the token expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.

Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.

GitLab Personal Access Token

2. Authorizing a GitLab Personal Access Token for use with SAML single sign-on:

If your GitLab organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

For more information, please refer to the following GitLab documentation pages:

  • https://docs.gitlab.com/ee/user/group/saml_sso/

  • https://docs.gitlab.com/ee/integration/saml.html

  • https://docs.gitlab.com/ee/integration/saml.html#password-generation-for-users-created-through-saml

Installation and configuration steps

Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

Step 1: Log in to Bito

Log in to Bito Cloud and select a workspace to get started.

Step 2: Open the Code Review Agents setup

Click Repositories under the CODE REVIEW section in the sidebar.

Step 3: Select your Git provider

Bito supports integration with the following Git providers:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

Since we are setting up the Agent for GitLab, select GitLab to proceed.

Step 4: Connect Bito to GitLab

To enable merge request reviews, you’ll need to connect your Bito workspace to your GitLab account.

You can either connect using OAuth (recommended) for a seamless, one-click setup or manually enter your Personal Access Token.

To connect via OAuth, simply click the Connect with OAuth (Recommended) button. This will redirect you to the GitLab website, where you'll need to log in. Once authenticated, you'll be redirected back to Bito, confirming a successful connection.

If you prefer not to use OAuth, you can connect manually using a Personal Access Token.

Start by generating a Personal Access Token with api scope in your GitLab account. For guidance, refer to the instructions in the Prerequisites section.

Once generated, click the Alternatively, use Personal or Group Access Token button.

Now, enter the token into the Personal Access Token input field in Bito.

Click Validate to ensure the token is functioning properly.

If you've successfully connected via OAuth or manually validated your token, you can select your GitLab Group from the dropdown menu.

Click Connect Bito to GitLab to proceed.

Step 5: Enable AI Code Review Agent on repositories

After connecting Bito to your GitLab account, you need to enable the AI Code Review Agent for your repositories.

Click the Go to repository list button to view all repositories Bito can access in your GitLab account.

Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

Learn more

Step 6: Automated and manual merge request reviews

Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

  1. Automated code review: By default, the Agent automatically reviews all new merge requests and provides detailed feedback.

  2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the merge request and submit it. This action will start the code review process.

The AI-generated code review feedback will be posted as comments directly within your merge request, making it seamless to view and address suggestions right where they matter most.

Note: To enhance efficiency, the automated code reviews are only triggered for merge requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

To review additional branches, you can use the Include Source/Target Branches filter. Bito will review merge requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews. Users should still be able to trigger reviews manually via the /review command.

The AI Code Review Agent automatically reviews code changes up to 5000 lines when a merge request is created. For larger changes, you can use the /review command.

It may take a few minutes to get the code review posted as a comment, depending on the size of the merge request.

Step 7: Specialized commands for code reviews

Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

  • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

  • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

  • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

For more details, refer to Available Commands.

Step 8: Chat with AI Code Review Agent

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

To start the conversation, type your question in the comment box within the inline suggestions on your merge request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

Screenshots

Screenshot # 1

AI-generated merge request (MR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a merge request.

Changelist in AI Code Review Agent's feedback.

Screenshot # 3

AI code review feedback posted as comments on the merge request.

Guide for Bitbucket

Integrate the AI Code Review Agent into your Bitbucket workflow.

Speed up code reviews by configuring the AI Code Review Agent with your Bitbucket repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Get a 14-day FREE trial of Bito's AI Code Review Agent.

Video tutorial

Coming soon...

Prerequisites

Before proceeding, ensure you've completed all necessary prerequisites.

1. Create a Bitbucket App Password:

For Bitbucket pull request code reviews, you’ll need to connect your Bito workspace to your Bitbucket account.

Start by creating an App Password. App Passwords allow apps like Bito to access your Bitbucket account. Make sure that the App Password is created by a Bitbucket user who has the Admin access role to the repositories.

Ensure the required permissions are checked:

  • Under Account, select Read.

  • Under Pull requests, select Write.

  • Under Webhooks, select Read and write.

Installation and configuration steps

Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

Step 1: Log in to Bito

Log in to Bito Cloud and select a workspace to get started.

Step 2: Open the Code Review Agents setup

Click Repositories under the CODE REVIEW section in the sidebar.

Step 3: Select your Git provider

Bito supports integration with the following Git providers:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

Since we are setting up the Agent for Bitbucket, select Bitbucket to proceed.

Step 4: Connect Bito to Bitbucket

To enable pull request reviews, you’ll need to connect your Bito workspace to your Bitbucket account.

Start by creating an App Password. App Passwords allow apps like Bito to access your Bitbucket account. Make sure that the App Password is created by a Bitbucket user who has the Admin access role to the repositories.

Ensure the required permissions are checked:

  • Under Account, select Read.

  • Under Pull requests, select Write.

  • Under Webhooks, select Read and write.

For guidance, refer to the instructions in the Prerequisites section.

Once generated, enter your Bitbucket username and App password into the input fields in Bito.

Click Authorize to ensure the login credentials are working correctly.

If the credentials are successfully authorized, you can select your Bitbucket workspace from the dropdown menu.

Click Connect Bito to Bitbucket to proceed.

Step 5: Enable AI Code Review Agent on repositories

After connecting Bito to your Bitbucket account, you need to enable the AI Code Review Agent for your repositories.

Click the Go to repository list button to view all repositories Bito can access in your Bitbucket account.

Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

Learn more

Step 6: Automated and manual pull request reviews

Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

  1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

  2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and click Add comment now to submit it. This action will start the code review process.

Note: After typing /review, add a space inside the comment box to ensure that /review is not highlighted as a Bitbucket slash command so that the comment can be posted correctly.

The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

Note: To enhance efficiency, the automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews. Users should still be able to trigger reviews manually via the /review command.

The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

Step 7: Specialized commands for code reviews

Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

  • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

  • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

  • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

For more details, refer to Available Commands.

Step 8: Chat with AI Code Review Agent

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

Screenshots

Screenshot # 1

AI-generated pull request (PR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a pull request.


Screenshot # 3

AI code review feedback posted as comments on the pull request.

Open Bito in a new tab or window

Learn how to customize Bito’s view by switching from a side panel to a new tab or a separate window.

Guide for GitLab (Self-Managed)

Integrate the AI Code Review Agent into your self-hosted GitLab workflow.

Speed up code reviews by configuring the AI Code Review Agent with your GitLab (Self-Managed) server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a merge request, as well as how to manually initiate reviews using available commands.

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Video tutorial

coming soon...

Prerequisites

Before proceeding, ensure you've completed all necessary prerequisites.

1. Create a GitLab Personal Access Token:

For GitLab merge request code reviews, a token with api scope is required. Make sure that the token is created by a GitLab user who has the Maintainer access role.

We recommend setting the token expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.

Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.

2. Authorize a GitLab Personal Access Token for use with SAML single sign-on:

If your GitLab organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

For more information, please refer to these GitLab documentation pages:

  • https://docs.gitlab.com/ee/user/group/saml_sso/

  • https://docs.gitlab.com/ee/integration/saml.html

  • https://docs.gitlab.com/ee/integration/saml.html#password-generation-for-users-created-through-saml

Installation and configuration steps

Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

Step 1: Log in to Bito

Log in to Bito Cloud and select a workspace to get started.

Step 2: Open the Code Review Agents setup

Click Repositories under the CODE REVIEW section in the sidebar.

Step 3: Select your Git provider

Bito supports integration with the following Git providers:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

Since we are setting up the Agent for GitLab (Self-Managed) server, select GitLab (Self-Managed) to proceed.

Supported versions:

  • GitLab (Self-Managed): 15.5 and above

Step 4: Connect Bito to GitLab

To enable merge request reviews, you’ll need to connect your Bito workspace to your GitLab (Self-Managed) server.

If your network blocks external services from interacting with the GitLab server, whitelist all of Bito's gateway IP addresses in your firewall to ensure Bito can access your self-hosted repositories. The Agent response can come from any of these IPs.

  • List of IP addresses to whitelist:

    • 18.188.201.104

    • 3.23.173.30

    • 18.216.64.170

Enter the details for the following input fields:

  • Hosted GitLab URL: This is the domain portion of the URL where your GitLab (Self-Managed) server is hosted (e.g., https://yourcompany.gitlab.com). Please check with your GitLab administrator for the correct URL.

  • Personal Access Token: Generate a GitLab Personal Access Token with api scope in your GitLab (Self-Managed) account and enter it into the Personal Access Token input field. For guidance, refer to the instructions in the Prerequisites section.

Click Validate to ensure the token is functioning properly.

If the token is successfully validated, you can select your GitLab Group from the dropdown menu.

Click Connect Bito to GitLab to proceed.

Step 5: Enable AI Code Review Agent on repositories

After connecting Bito to your GitLab self-managed server, you need to enable the AI Code Review Agent for your repositories.

Click the Go to repository list button to view all repositories Bito can access in your GitLab self-managed server.

Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

Step 6: Automated and manual merge request reviews

Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

  1. Automated code review: By default, the Agent automatically reviews all new merge requests and provides detailed feedback.

  2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the merge request and submit it. This action will start the code review process.

The AI-generated code review feedback will be posted as comments directly within your merge request, making it seamless to view and address suggestions right where they matter most.

Note: To enhance efficiency, the automated code reviews are only triggered for merge requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

To review additional branches, you can use the Include Source/Target Branches filter. Bito will review merge requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews; users can still trigger reviews manually via the /review command.

The AI Code Review Agent automatically reviews code changes up to 5000 lines when a merge request is created. For larger changes, you can use the /review command.

It may take a few minutes to get the code review posted as a comment, depending on the size of the merge request.

Step 7: Specialized commands for code reviews

Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

  • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

  • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

  • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

For more details, refer to Available Commands.

Step 8: Chat with AI Code Review Agent

Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

To start the conversation, type your question in the comment box within the inline suggestions on your merge request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

Screenshots

Screenshot # 1

AI-generated merge request (MR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a merge request.

Screenshot # 3

AI code review feedback posted as comments on the merge request.


Agent Configuration: bito-cra.properties File

Setting up your agent: understanding the bito-cra.properties file

Note: This file is only available for people who are using the self-hosted version of AI Code Review Agent.

The bito-cra.properties file offers a comprehensive range of options for configuring the AI Code Review Agent, enhancing its flexibility and adaptability to various workflow requirements.

bito-cra.properties Available Options

Property Name
Supported Values
Is Mandatory?
Description

mode

  • cli

  • server

Yes

Whether to run the Docker container in CLI mode for a one-time code review or as a webhooks service to continuously monitor for code review requests.

pr_url

Pull request URL in GitLab, GitHub and Bitbucket

Yes, if the mode is CLI.

The pull request provides files with changes and the actual code modifications. When the mode is set to server, the pr_url is received either through a webhook call or via a REST API call.

This release only supports webhook calls; other REST API calls are not yet supported.

code_feedback

  • True

  • False

No

Setting it to True activates general code review comments to identify functional issues. If set to False, general code review will not be conducted.

bito_cli.bito.access_key

A valid Bito Access Key generated through Bito's web UI.

Yes

Bito Access Key is an alternative to standard email and OTP authentication.

git.provider

  • GITLAB

  • GITHUB

  • BITBUCKET

Yes, if the mode is CLI.

The name of the Git repository provider.

git.access_token

A valid Git access token provided by GITLAB or GITHUB or BITBUCKET

Yes

You can use a personal access token in place of a password when authenticating to GitHub, GitLab, or Bitbucket in the command line or with the API.

git.domain

A URL where Git is hosted.

No

It is used to enter the custom URL of self-hosted GitHub/GitLab Enterprise.

static_analysis

  • True

  • False

No

Enable or disable static code analysis, which is used to uncover functional issues in the code.

static_analysis_tool

  • fb_infer

  • astral_ruff

  • mypy

No

Comma-separated list of static analysis tools to run (e.g., fb_infer,astral_ruff,mypy).

linters_feedback

  • True

  • False

No

Enables feedback from linters like ESLint, golangci-lint, and Astral Ruff.

secret_scanner_feedback

  • True

  • False

No

Enables detection of secrets in code. For example, passwords, API keys, sensitive information, etc.

dependency_check

  • True

  • False

No

This feature is designed to identify security vulnerabilities in open-source dependency packages, specifically for JS/TS/Node.js and Go. Without this input, reviews for security vulnerabilities will not be conducted.

dependency_check.snyk_auth_token

A valid authentication token for accessing Snyk's cloud-based security database.

No

If not provided, access to Snyk's cloud-based security database for checking security vulnerabilities in open-source dependency packages will not be available.

code_context

  • True

  • False

No

Enables enhanced code context awareness.

server_port

A valid and available TCP port number.

No

This is applicable when the mode is set to server. If not specified, the default value is 10051.

review_comments

  • 1

  • 2

No

Set the value to 1 to display the code review in a single post, or 2 to show code review as inline comments, placing suggestions directly beneath the corresponding lines in each file for clearer guidance on improvements.

The default value is 2.

review_scope

  • security

  • performance

  • scalability

  • codeorg

  • codeoptimize

No

Specialized commands to perform detailed analyses on specific aspects of your code. You can provide comma-separated values to perform multiple types of code analysis simultaneously.

include_source_branches

Glob/regex pattern.

No

Comma-separated list of branch patterns (glob/regex) to allow as pull request sources.

include_target_branches

Glob/regex pattern.

No

Comma-separated list of branch patterns (glob/regex) to allow as pull request targets.

exclude_files

Glob/regex pattern.

No

A list of files/folders that the AI Code Review Agent will not review if they are present in the diff.

By default, these files are excluded: *.xml, *.json, *.properties, .gitignore, *.yml, *.md

exclude_draft_pr

  • True

  • False

No

A binary setting that enables/disables automated review of pull requests (PR) based on the draft status. The default value is True which skips automated review of draft PR.

cra_version

  • latest

  • Any specific version tag

No

Sets the agent version to run (latest or a specific version tag).

post_as_request_changes

  • True

  • False

No

Posts feedback as 'Request changes' review comments. Depending on your organization's Git settings, you may need to resolve all comments before merging.

support_email

Email address

No

Contact email shown in error messages.

suggestion_mode

  • essential

  • comprehensive

No

Controls AI suggestion verbosity. Available options are essential and comprehensive.

In Essential mode, only critical issues are posted as inline comments, and other issues appear in the main review summary under "Additional issues".

In Comprehensive mode, Bito also includes minor suggestions and potential nitpicks as inline comments.
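As a quick reference, several of the options above can be combined into a single configuration. Below is a hypothetical minimal bito-cra.properties for running the Agent as a webhooks service against a self-hosted GitLab; the token and domain values are placeholders, not real credentials:

```properties
# Run as a webhooks service that listens for code review requests
mode=server
server_port=10051

# Authentication (placeholder value)
bito_cli.bito.access_key=<YOUR_BITO_ACCESS_KEY>

# Git provider settings (placeholder token and domain)
git.provider=GITLAB
git.access_token=<YOUR_GITLAB_PERSONAL_ACCESS_TOKEN>
git.domain=https://gitlab.yourcompany.com

# Review behavior
code_feedback=True
static_analysis=True
static_analysis_tool=fb_infer,astral_ruff
review_comments=2
suggestion_mode=essential

# Filters
exclude_files=*.xml,*.json,*.properties,.gitignore,*.yml,*.md
exclude_draft_pr=True
include_target_branches=release-*
```

Every key used here is documented in the table above; adjust values to match your environment.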


Excluding files, folders, or branches with filters

Customize which files, folders, and Git branches are reviewed when the Agent triggers automatically on pull requests.

The AI Code Review Agent offers powerful filters to exclude specific files and folders from code reviews and gives you precise control over which Git branches are included in automated reviews.

These filters can be configured at the Agent instance level, overriding the default behavior.

Exclude Files and Folders filter

A list of files/folders that the AI Code Review Agent will not review if they are present in the diff. You can specify the files/folders to exclude from the review by name or glob/regex pattern. The Agent will automatically skip any files or folders that match the exclusion list.

This filter applies to both manual reviews initiated through the /review command and automated reviews triggered via webhook.

By default, these files are excluded: *.xml, *.json, *.properties, .gitignore, *.yml, *.md

Examples

Note:

  • Patterns are case sensitive.

  • Don’t use double quotes, single quotes, or commas in patterns.

  • You can pass either type of pattern: Unix file-system glob patterns or regex.

Exclusion Rule for Files & Folders
Applicable Pattern
Matched Examples
Not Matched Examples

Exclude all properties files in all folders and subfolders

*.properties

resource/config.properties, resource/server/server.properties

resource/config.yaml, resource/config.json

Exclude all files, folders and subfolders in folder starting with resources

resources/

resources/application.properties, resources/config/config.yaml

app/resources/file.txt, config/resources/service.properties

Exclude all files, folders and subfolders in folder src/com/resources

src/com/resources/

src/com/resources/application.properties, src/com/resources/config/config.yaml

app/resources/file.txt, config/resources/service.properties

Exclude all files, folders and subfolders in subfolder resource and in parent folder src

src/*/resource/*

src/com/resource/main.html, src/com/resource/script/file.css, src/com/resource/app/script.js

src/resource/file.txt, src/com/config/file.txt, app/com/config/file.txt

Exclude non-css files from folder src/com/resource/ and subfolders

^src\\/com\\/resource\\/(?!.*\\.css$).*$

src/com/resource/main.html, src/com/resource/app/script.js

src/com/config/file.txt, src/com/resource/script/file.css

Exclude specific file controller/webhook_controller.go

controller/webhook_controller.go

controller/webhook_controller.go

controller/controller.go, controller/webhook_service.go

Exclude non-css files from folder starting with config and its subfolders

^config\\/(?!.*\\.css$).*$

config/server.yml, config/util/conf.properties

config/profile.css, config/styles/main.css

Exclude all files & folders

*

resource/file.txt, config/file.properties, app/folder/

-

Exclude all files & folders starting with name bito in module folder

module/bito*

module/bito123, module/bitofile.js, module/bito/file.js

module/filebito.js, module/file2.txt, module/util/file.txt

Exclude single-character folder names

*/?/*

src/a/file.txt, app/b/folder/file.yaml

folder/file.txt, ab/folder/file.txt

Exclude all folders, subfolders and files in those folders except folder starting with service folder

^(?!service\\/).*$

config/file.txt, resources/file.yaml

service/file.txt, service/config/file.yaml

Exclude all files in all folders except .py, .go, and .java files

^(?!.*\\.(py|go|java)$).*$

config/file.txt, app/main.js

main.py, module/service.go, test/Example.java

Exclude non-css files from folder src/com/config and its subfolders

^src\\/com\\/config\\/(?!.*\\.css$).*$

src/com/config/server.yml, src/com/config/util/conf.properties

src/com/config/profile.css, src/com/config/styles/main.css
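The matching behavior shown in the table can be sketched in a few lines of Python. This is an illustrative approximation only: the is_excluded helper, and its convention of treating patterns that start with ^ as regex and everything else as a glob, are assumptions for the sketch, not Bito's actual matching engine.

```python
import fnmatch
import re

def is_excluded(path, patterns):
    """Return True if `path` matches any exclusion pattern.

    Patterns starting with '^' are treated as regex; everything else
    is treated as a Unix-style glob (illustrative convention only).
    """
    for pattern in patterns:
        if pattern.startswith("^"):
            if re.match(pattern, path):
                return True
        elif fnmatch.fnmatch(path, pattern):
            return True
    return False

# Patterns similar to the table rows above
patterns = ["*.properties", "^config/(?!.*\\.css$).*$"]

print(is_excluded("resource/server/server.properties", patterns))  # True
print(is_excluded("config/server.yml", patterns))                  # True
print(is_excluded("config/profile.css", patterns))                 # False
```

Note that the glob `*.properties` matches across folder separators here, mirroring the table's first row.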

Include Source/Target Branches filter

This filter defines which pull requests trigger automated reviews based on their source or target branch, allowing you to focus on critical code and avoid unnecessary reviews or AI usage.

By default, pull requests merging into the repository’s default branch are subject to review. To extend review coverage, additional branches may be specified using explicit branch names or valid glob/regex patterns. When the source or target branch of a pull request matches one of the patterns on your inclusion list, Bito’s AI Code Review Agent will trigger an automated review.

This filter applies only to automatically triggered reviews; users can still trigger reviews manually via the /review command.

Watch video tutorial:

Examples

Note:

  • Patterns are case sensitive.

  • Don’t use double quotes, single quotes, or commas in patterns.

  • You can pass either type of pattern: Unix file-system glob patterns or regex.

Inclusion Rules for Branch
Pattern
Matched Examples
Not Matched Examples

Include any branch that starts with name BITO-

BITO-*

BITO-feature, BITO-123

feature-BITO, development

Include any branch that does not start with BITO-

^(?!BITO-).*

feature-123, release-v1.0

BITO-feature, BITO-123

Include any branch which is not BITO

^(?!BITO$).*

feature-BITO, development

BITO

Include branches like release/v1.0 and release/v1.0.1

release/v\\d+\\.\\d+(\\.\\d+)?

release/v1.0, release/v1.0.1

release/v1, release/v1.0.x

Include any branch ending with -test

*-test

feature-test, release-test

test-feature, release-testing

Include the branch that has keyword main

main

main, main-feature, mainline

master, development

Include the branch named main

^main$

main

main-feature, mainline, master, development

Include any branch name that does not start with feature- or release-

^(?!release-|feature-).*$

hotfix-123, development

feature-123, release-v1.0

Include branches with names containing digits

.*\\d+.*

feature-123, release-v1.0

feature-abc, main

Include branches with names ending with test or testing

.*(test|testing)$

feature-test, bugfix-testing

testing-feature, test-branch

Include branches with names containing a specific substring test

*test*

feature-test, test-branch, testing

feature, release

Include branches with names containing exactly three characters

^.{3}$

abc, 123

abcd, ab

Include branch names starting with release, hotfix, or development but not starting with Bito or feature

^(?!Bito|feature)(release|hotfix|development).*$

release-v1.0, hotfix-123, development-xyz

Bito-release, feature-hotfix, main-release

Include all branches whose names do not contain a version number like 1.0 or 1.0.1

^(?!.*\\b\\d+\\.\\d+(\\.\\d+)?\\b).*

feature-xyz, main

release-v1.0, hotfix-1.0.1

Include all branches whose names contain a special character (other than hyphens or dots)

^.*[^a-zA-Z0-9.-].*$

feature-!abc, release-@123

feature-123, release-v1.0

Include all branches whose names contain a space

.*\\s.*

feature 123, release v1.0

feature-123, release-v1.0
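To make the trigger rule concrete, here is a small Python sketch of the semantics described above. The should_auto_review helper, its default-branch shortcut, and the glob-vs-regex dispatch are all assumptions for illustration, not Bito's actual implementation.

```python
import fnmatch
import re

def should_auto_review(source_branch, target_branch, include_patterns,
                       default_branch="main"):
    """Sketch: PRs into the default branch are always reviewed;
    otherwise a PR is reviewed when its source OR target branch
    matches an inclusion pattern (glob, or regex if it starts with '^')."""
    if target_branch == default_branch:
        return True
    for pattern in include_patterns:
        for branch in (source_branch, target_branch):
            if pattern.startswith("^"):
                if re.match(pattern, branch):
                    return True
            elif fnmatch.fnmatch(branch, pattern):
                return True
    return False

print(should_auto_review("BITO-123", "develop", ["BITO-*"]))   # True
print(should_auto_review("feature-1", "develop", ["BITO-*"]))  # False
print(should_auto_review("feature-1", "main", ["BITO-*"]))     # True
```

Manual /review requests bypass this check entirely, as noted above.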

Draft pull requests filter

A binary setting that enables/disables automated review of pull requests (PR) based on the draft status. Enter True to disable automated review for draft pull requests, or False to enable it.

The default value is True which skips automated review of draft PR.

How to configure the filters?

Bito Cloud (Bito-hosted Agent)

You can configure filters using the Agent configuration page. For detailed instructions, please refer to the Install/run Using Bito Cloud documentation page.

CLI or webhooks service (self-hosted Agent)

You can configure filters using the bito-cra.properties file. See the include_source_branches, include_target_branches, exclude_files, and exclude_draft_pr options for more details.

GitHub Actions (self-hosted Agent)

You can configure filters using the GitHub Actions repository variables: EXCLUDE_BRANCHES, EXCLUDE_FILES, and EXCLUDE_DRAFT_PR. For detailed instructions, please refer to the Install/Run via GitHub Actions documentation page.


Use cases and examples

Supercharging development with AI

Bito AI Assistance can help with generating new content or help with your existing code.

Click on each use case for an example instruction and response.

  • Generate Code

  • Command Syntax

  • Learn technical concepts

  • Generate test cases

  • Explain code

  • Comment method

  • Improve code performance

  • Check security

  • Find style issues

Generate Code

Instruction

Implement a simple REST API in Go.

Response

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type user struct {
	Name string `json:"name"`
	Age  int    `json:"age"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Create user
		u := user{Name: "John Doe", Age: 25}

		// Encode user
		uj, err := json.Marshal(u)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		// Return user
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		fmt.Fprintf(w, "%s\n", uj)
	})
	http.ListenAndServe(":3000", nil)
}

Command Syntax

Instruction

How to rebase a branch in git?

Response

  1. Checkout the branch you want to rebase:

$ git checkout <branch-name>

  2. Rebase your branch against the upstream branch:

$ git rebase upstream/<branch-name>

  3. Resolve any conflicts that arise.

  4. Once all conflicts are resolved, run git status to verify that all files have been updated correctly.

  5. Finally, push the rebased branch to the remote repository:

$ git push origin <branch-name> --force

Learn Technical Concepts

Instruction

Can you explain the B+ tree with a code example?

Response

A B+ tree is a self-balancing tree data structure used in databases to store and retrieve data efficiently. It is a variation of the B-tree and is designed to minimize the number of disk accesses required for insertion, deletion, and retrieval operations. B+ trees are characterized by the number of keys stored per node, the order of the nodes, and the number of children per node.

For example, a B+ tree of order 5 can have up to 5 children per node and up to 4 keys per node. When a node becomes full, it splits into two nodes, each taking roughly half of the keys and children. Because the tree stays balanced this way, retrieval operations remain efficient.

Here is an example in Python

class Node:
    def __init__(self, order):
        self.order = order
        self.keys = []
        self.children = [None] * (order + 1)

class BPlusTree:
    def __init__(self, order):
        self.root = Node(order)
 
    def insert(self, key):
        if not self.root.keys:
            self.root.keys.append(key)
        else:
            pass # traverse the tree and insert the key
 
    def delete(self, key):
        pass # traverse the tree and delete the key
 
    def search(self, key):
        pass # traverse the tree and search for the key
 
# Create a B+ tree of order 5
bplus_tree = BPlusTree(5)
 
# Insert a key
bplus_tree.insert(10)
 
# Delete a key
bplus_tree.delete(10)

# Search for a key
bplus_tree.search(10)

Test Case

Instruction

Generate test cases for the following method.

genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
  {
    let new_ =[],old_=[],merged_=[];
    newLabelArray.forEach((lbl)=>{
      if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
      ++new_[lbl];
      ++merged_[lbl];
    });
    
    oldLabelArray.forEach((lbl)=>{
      if(old_[lbl] === undefined){ old_[lbl] =0; }
      if(merged_[lbl] === undefined){ merged_[lbl] =0; }

      ++old_[lbl];
      --merged_[lbl]
    });
    let final = [];

    merged_.forEach((value,key)=>{
      if(value!=0){ 
      //  final[key.toString()] = value;
      final.push(key+":"+value);
       }
    })
    
      return final;
  }

Response

Test Case 1:
Input: newLabelArray = [ 'Apple', 'Banana', 'Mango', 'Apple' ], oldLabelArray = [ 'Banana', 'Mango', 'Orange' ]
Expected Output: ['Apple:2', 'Orange:-1']

Test Case 2:
Input: newLabelArray = [ 'Apple', 'Banana', 'Apple', 'Orange' ], oldLabelArray = [ 'Banana', 'Mango', 'Orange' ]
Expected Output: ['Apple:2', 'Mango:-1']

Explain this Code

Instruction

What does this code do?

genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
  {
    let new_ =[],old_=[],merged_=[];
    newLabelArray.forEach((lbl)=>{
      if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
      ++new_[lbl];
      ++merged_[lbl];
    });
    
    oldLabelArray.forEach((lbl)=>{
      if(old_[lbl] === undefined){ old_[lbl] =0; }
      if(merged_[lbl] === undefined){ merged_[lbl] =0; }

      ++old_[lbl];
      --merged_[lbl]
    });
    let final = [];

    merged_.forEach((value,key)=>{
      if(value!=0){ 
      //  final[key.toString()] = value;
      final.push(key+":"+value);
       }
    })
    
      return final;
  }

Response

This code takes two arrays, newLabelArray and oldLabelArray, and generates a new array that contains the difference between the two arrays. It does this by first creating three empty arrays, new_, old_, and merged_. It then iterates through the newLabelArray and adds each label to the new_ array, as well as the merged_ array. It then iterates through the oldLabelArray and does the same for the old_ array, but also subtracts it from the merged_ array. Finally, it iterates through the merged_ array and checks for any values that are not equal to 0 and adds them to the final array. It then returns this final array.

Comment Method

Instruction

Generate a comment for this method, explaining the parameters and output

genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
  {
    let new_ =[],old_=[],merged_=[];
    newLabelArray.forEach((lbl)=>{
      if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
      ++new_[lbl];
      ++merged_[lbl];
    });
    
    oldLabelArray.forEach((lbl)=>{
      if(old_[lbl] === undefined){ old_[lbl] =0; }
      if(merged_[lbl] === undefined){ merged_[lbl] =0; }

      ++old_[lbl];
      --merged_[lbl]
    });
    let final = [];

    merged_.forEach((value,key)=>{
      if(value!=0){ 
      //  final[key.toString()] = value;
      final.push(key+":"+value);
       }
    })
    
      return final;
  }

Response

/* This method takes two arrays, newLabelArray and oldLabelArray. It uses the elements of both arrays to generate a merged array, merged_, where it counts the occurrences of each element in both arrays. It then creates a third array, final, containing the elements of merged_ where the difference is not 0. The output is the final array. */

Improve Performance

Instruction

How can I improve the performance of this code?

genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
  {
    let new_ =[],old_=[],merged_=[];
    newLabelArray.forEach((lbl)=>{
      if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
      ++new_[lbl];
      ++merged_[lbl];
    });
    
    oldLabelArray.forEach((lbl)=>{
      if(old_[lbl] === undefined){ old_[lbl] =0; }
      if(merged_[lbl] === undefined){ merged_[lbl] =0; }

      ++old_[lbl];
      --merged_[lbl]
    });
    let final = [];

    merged_.forEach((value,key)=>{
      if(value!=0){ 
      //  final[key.toString()] = value;
      final.push(key+":"+value);
       }
    })
    
      return final;
  }

Response

  1. Use ES6 syntax to simplify the code.

  2. Use a hashmap instead of looping through the arrays.

  3. Use a single loop to build the final array instead of two loops.

  4. Use array methods instead of for-each loops.

  5. Use optimized data structures like Set and Map.

Instruction

Rewrite this code with performance improvements?

genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
  {
    let new_ =[],old_=[],merged_=[];
    newLabelArray.forEach((lbl)=>{
      if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
      ++new_[lbl];
      ++merged_[lbl];
    });
    
    oldLabelArray.forEach((lbl)=>{
      if(old_[lbl] === undefined){ old_[lbl] =0; }
      if(merged_[lbl] === undefined){ merged_[lbl] =0; }

      ++old_[lbl];
      --merged_[lbl]
    });
    let final = [];

    merged_.forEach((value,key)=>{
      if(value!=0){ 
      //  final[key.toString()] = value;
      final.push(key+":"+value);
       }
    })
    
      return final;
  }

Response

// Rewritten Code
function genrateLabelDiffArray(newLabelArray, oldLabelArray = []) {
  let new_ = {},
    old_ = {},
    merged_ = {},
    final = [];

  newLabelArray.forEach((lbl) => {
    new_[lbl] = new_[lbl] ? new_[lbl] + 1 : 1;
    merged_[lbl] = merged_[lbl] ? merged_[lbl] + 1 : 1;
  });

  oldLabelArray.forEach((lbl) => {
    old_[lbl] = old_[lbl] ? old_[lbl] + 1 : 1;
    merged_[lbl] = merged_[lbl] ? merged_[lbl] - 1 : -1;
  });

  Object.entries(merged_).forEach(([key, value]) => {
    if (value !== 0) {
      final.push(key + ":" + value);
    }
  });
  
  return final;
}

Check for Security

Instruction

Find potential security issues in this code

import database
search_term = input("Enter a search term: ")
query = "SELECT * FROM users WHERE name = '" + search_term + "'"
results = database.query(query)
print(results)

Response

  1. A lack of input validation: the user-supplied search term is not being validated to ensure it is safe to use in a database query. This could leave the system open to SQL injection attacks.

  2. A lack of output encoding: the results from the database are not being encoded before they are printed. This could leave the system open to cross-site scripting attacks.

Instruction

Rewrite this code to fix the potential security issues

import database
search_term = input("Enter a search term: ")
query = "SELECT * FROM users WHERE name = '" + search_term + "'"
results = database.query(query)
print(results)

Response

import database

# Get the user-supplied search term
search_term = input("Enter a search term: ")

# Use the user-supplied search term to query the database
query = "SELECT * FROM users WHERE name = %s"
results = database.query(query, (search_term,))

# Print the results
print(results)
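The `database` module above is generic, so the snippet is not runnable as-is. Here is a self-contained sketch of the same parameterized-query fix using Python's built-in sqlite3 module (note that sqlite3 uses `?` placeholders rather than `%s`; the table and data are invented for illustration):

```python
import sqlite3

# In-memory database with sample data (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def search_users(search_term):
    # The search term is passed as a bound parameter, never concatenated
    # into the SQL string, so it cannot alter the query structure.
    query = "SELECT * FROM users WHERE name = ?"
    return conn.execute(query, (search_term,)).fetchall()

print(search_users("alice"))        # normal lookup returns the matching row
print(search_users("' OR '1'='1"))  # injection attempt matches nothing
```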

Find Style Issues

Instruction

Identify potential issues that ESLint would find in this code

genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
  {
    let new_ =[],old_=[],merged_=[];
    newLabelArray.forEach((lbl)=>{
      if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
      ++new_[lbl];
      ++merged_[lbl];
    });
    
    oldLabelArray.forEach((lbl)=>{
      if(old_[lbl] === undefined){ old_[lbl] =0; }
      if(merged_[lbl] === undefined){ merged_[lbl] =0; }

      ++old_[lbl];
      --merged_[lbl]
    });
    let final = [];

    merged_.forEach((value,key)=>{
      if(value!=0){ 
      //  final[key.toString()] = value;
      final.push(key+":"+value);
       }
    })
    
      return final;
  }

Response

//ESLint would find the following issues:

  • Missing semicolons (potential syntax errors)

  • Missing spacing after 'forEach' statements

  • Unused variables (e.g. 'old_')

  • Missing spacing between operators (e.g. '++new_[lbl]')

Install/run via webhooks service

The webhooks service is best suited for continuous, automated reviews.

Prerequisites

Minimum System Requirements

A machine with the following minimum specifications is recommended for Docker image deployment and for obtaining optimal performance of the AI Code Review Agent.

  • CPU Cores: 4

  • RAM: 8 GB

  • Hard Disk Drive: 80 GB

Supported Operating Systems

  • Windows

  • Linux

  • macOS


OS Prerequisites

Installation steps for each supported operating system (Linux, macOS, and Windows) are detailed at the end of this page.

Required Access Tokens

  • Bito Access Key: Obtain your Bito Access Key.

  • GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.

  • GitLab Personal Access Token: For GitLab PR code reviews, a token with API access is required.

  • Snyk API Token (Auth Token): For Snyk vulnerability reports, obtain a Snyk API Token.


Installation and Configuration Steps

  1. Prerequisites: Before proceeding, ensure you've completed all the prerequisites for the self-hosted AI Code Review Agent.

  2. Server Requirement: Ensure you have a server with a domain name or IP address.

  3. Start Docker: Initialize Docker on your server.

  4. Clone the repository: Clone the AI Code Review Agent GitHub repository to your server using the following command:

    • git clone https://github.com/gitbito/CodeReviewAgent.git

    • Note: It is recommended to clone the repository instead of downloading the .zip file. This approach allows you to easily update the Agent later using the git pull command.

  5. Open the repository folder:

    • Navigate to the repository folder and then to the “cra-scripts” subfolder.

  • Note the full path to the “cra-scripts” folder for later use.

  6. Open Command Line:

    • Use Bash for Linux and macOS.

    • Use PowerShell for Windows.

  7. Set Directory:

    • Change the current directory in Bash/PowerShell to the “cra-scripts” folder.

    • Example command: cd [Path to cra-scripts folder]

    • Note: Adjust the path based on where you cloned the repository on your system.

  8. Configure Properties:

    • Open the bito-cra.properties file in a text editor from the “cra-scripts” folder. Detailed information for each property is provided on the Agent Configuration: bito-cra.properties File page.

    • Set mandatory properties:

      • mode = server

      • bito_cli.bito.access_key

      • git.access_token

    • Optional properties (can be skipped or set as needed):

      • git.provider

      • git.domain

      • code_feedback

      • static_analysis

      • dependency_check

      • dependency_check.snyk_auth_token

      • server_port

      • review_scope

      • exclude_branches

      • exclude_files

      • exclude_draft_pr

Note: Valid values for git.provider are GITHUB or GITLAB.

Note: Detailed information for each property is provided on the Agent Configuration: bito-cra.properties File page.

Check the Required Access Tokens guide to learn more about creating the access tokens needed to configure the Agent.
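For reference, a minimal server-mode configuration might look like the fragment below. The property names come from the list above; the values are placeholders, and the exact value formats are documented on the Agent Configuration: bito-cra.properties File page:

```ini
# Mandatory properties
mode = server
bito_cli.bito.access_key = <YOUR_BITO_ACCESS_KEY>
git.access_token = <YOUR_GIT_ACCESS_TOKEN>

# Optional properties (illustrative values)
git.provider = GITHUB
server_port = 10051
```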

  9. Run the Agent:

    • On Linux/macOS in Bash:

      • Run ./bito-cra.sh service start bito-cra.properties

      • Note: It will provide the Git Webhook secret in encrypted format.

    • On Windows in PowerShell:

      • Install OpenSSL

        • Reference-1: https://wiki.openssl.org/index.php/Binaries

        • Reference-2: https://slproweb.com/products/Win32OpenSSL.html

      • Run ./bito-cra.ps1 service start bito-cra.properties

      • Note: It will provide the Git Webhook secret in encrypted format.

This step might take time initially as it pulls the Docker image and performs the code review.

  10. Provide Missing Property Values: The script may prompt for values of mandatory/optional properties if they are not preconfigured.

  2. Copy Webhook Secret: During the script execution, a webhook secret is generated and displayed in the shell. Copy the secret displayed under "Use below as Gitlab and Github Webhook secret:" for use in GitHub or GitLab when setting up the webhook.

Webhook Setup Guide

GitHub:

  • Login to your account.

  • Navigate to the main page of the repository. Under your repository name, click Settings.

  • In the left sidebar, click Webhooks.

  • Click Add webhook.

  • Under Payload URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.

    • Note: The GitHub Payload URL should follow this format: https://<domain name/ip-address>/api/v1/github_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/github_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.

    • For example, a typical webhook URL would be https://cra.example.com/api/v1/github_webhooks

  • Select the Content type “application/json” for JSON payloads.

  • In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.

  • Click on Let me select individual events to select the events that you want to trigger the webhook. For code review select these:

    • Issue comments - To enable Code Review on-demand by issuing a command in the PR comment.

    • Pull requests - To auto-trigger Code Review when a pull request is created.

    • Pull request review comments - so you can share feedback on the review quality by answering the feedback question in the code review comment.

  • To make the webhook active immediately after adding the configuration, select Active.

  • Click Add webhook.
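The secret is what lets the receiving server reject forged payloads: GitHub signs each delivery with an HMAC-SHA256 of the raw request body and sends the digest in the X-Hub-Signature-256 header. The Agent performs this validation for you; the sketch below only illustrates the mechanism, with made-up secret and payload values:

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    # GitHub sends: X-Hub-Signature-256: sha256=<hex HMAC-SHA256 of the raw body>
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest performs a constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(expected, signature_header)

secret = b"my-webhook-secret"          # hypothetical webhook secret
body = b'{"action": "opened"}'         # hypothetical delivery payload
good = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_github_signature(secret, body, good))          # True
print(verify_github_signature(secret, body, "sha256=bad"))  # False
```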

GitLab:

  • Login to your account.

  • Select the repository where the webhook needs to be configured.

  • On the left sidebar, select Settings > Webhooks.

  • Select Add new webhook.

  • In URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.

    • Note: The GitLab webhook URL should follow this format: https://<domain name/ip-address>/api/v1/gitlab_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/gitlab_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.

    • For example, a typical webhook URL would be https://cra.example.com/api/v1/gitlab_webhooks

  • In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.

  • In the Trigger section, select the events to trigger the webhook. For code review select these:

    • Comments - for on-demand code review.

    • Merge request events - for automatic code review when a merge request is created.

    • Emoji events - so you can share feedback on the review quality using emoji reactions.

  • Select Add webhook.

BitBucket:

  • Login to your account.

  • Navigate to the main page of the repository. Under your repository name, click Repository Settings.

  • In the left sidebar, click Webhooks.

  • Click Add webhook.

  • Under URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.

    • Note: The BitBucket Payload URL should follow this format: https://<domain name/ip-address>/api/v1/bitbucket_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/bitbucket_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.

    • For example, a typical webhook URL would be https://cra.example.com/api/v1/bitbucket_webhooks

  • In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.

  • In the Triggers section, select the events to trigger the webhook. For code review select these:

    • Pull Request > Comment created - for on-demand code review.

    • Pull Request > Created - for automatic code review when a pull request is created.

  • Select Save.


Using the AI Code Review Agent

After configuring the webhook, you can invoke the AI Code Review Agent in the following ways:

Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please contact support.

  1. Automated Code Review: If the webhook is configured to be triggered on the Pull requests event (for GitHub) or Merge request event (for GitLab), the agent will automatically review new pull requests as soon as they are created and post the review feedback as a comment within your PR.

  2. Manually Trigger Code Review: To start the process, simply type /review in the comment box on the pull request and submit it. If the webhook is configured to be triggered on the Issue comments event (for GitHub) or Comments event (for GitLab), this action will initiate the code review process. The /review command prompts the agent to review the pull request and post its feedback directly in the PR as a comment.

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.

It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

Screenshots

Screenshot # 1

AI-generated pull request (PR) summary

Screenshot # 2

Changelist showing key changes and impacted files in a pull request.

Screenshot # 3

AI code review feedback posted as comments on the pull request.


How to update the self-hosted AI Code Review Agent

Please follow these steps:

  1. Update the Agent's repository:

    • Pull the latest changes from the repository by running the following command in your terminal, ensuring you are inside the repository folder:

    • git pull origin main

  2. Restart the Docker container:

    • To restart the Docker container running as a service, use the command below.

    • On Linux/macOS in Bash: Run ./bito-cra.sh service restart bito-cra.properties

    • On Windows in PowerShell: Run ./bito-cra.ps1 service restart bito-cra.properties


Stop Docker Container

To stop the Docker container running as a service, use the command below.

  • On Linux/macOS in Bash: Run ./bito-cra.sh service stop

  • On Windows in PowerShell: Run ./bito-cra.ps1 service stop


Check Status

To check the status of the Docker container running as a service, use the command below.

  • On Linux/macOS in Bash: Run ./bito-cra.sh service status

  • On Windows in PowerShell: Run ./bito-cra.ps1 service status


Linux

You will need:

  1. Bash (minimum version 4.x)

    • For Debian and Ubuntu systems

      sudo apt-get install bash

    • For CentOS and other RPM-based systems

      sudo yum install bash

  2. Docker (minimum version 20.x)

    • View Guide

macOS

You will need:

  1. Bash (minimum version 4.x)

    brew install bash

  2. Docker (minimum version 20.x)

    • View Guide

Windows

You will need:

  1. PowerShell (minimum version 5.x)

    • View Guide

    • Note: In PowerShell version 7.x, run the Set-ExecutionPolicy Unrestricted command. It allows the execution of scripts without any constraints, which is essential for running scripts that are otherwise blocked by default security settings.

  2. Docker (minimum version 20.x)

    • View Guide


Supported programming languages and tools

Supports key languages & tools, including Facebook Infer, OWASP Dependency-Check, and Snyk.

Supported Programming Languages

AI Code Review

The AI Code Review Agent understands code changes in pull requests by analyzing relevant context from your entire repository, resulting in more accurate and helpful code reviews. The agent provides either Basic Code Understanding or Advanced Code Understanding based on the programming languages used in the code diff. Learn more about all the supported languages in the table below.

Basic Code Understanding provides the surrounding code for the diff, helping the AI better understand the context of the change.

Advanced Code Understanding provides the LLM with detailed, holistic information about the changes a diff makes: the global variables, libraries, and frameworks in use (e.g., Lombok in Java, React for JS/TS, or Angular for TS), the specific functions/methods and classes the diff is part of, and the upstream and downstream impact of the change. Using advanced code traversal and understanding techniques such as symbol indexes, embeddings, and abstract syntax trees, Bito tries to deeply understand what your changes are about and how they impact and relate to the greater codebase, much as a senior engineer does during code review. Read more here about our approach.

For requests to add support for specific programming languages, please reach out to us at [email protected]

| Languages | AI Code Review | Basic Code Understanding | Advanced Code Understanding | File extensions supported |
| --- | --- | --- | --- | --- |
| C | YES | YES | YES | .c, .h |
| C++ | YES | YES | YES | .cpp, .hpp |
| C# | YES | YES | YES | .cs |
| Go | YES | YES | YES | .go |
| HTML/CSS | YES | YES | YES | .html, .css |
| SCSS | YES | YES | YES | .scss |
| Java | YES | YES | YES | .java |
| JavaScript | YES | YES | YES | .js |
| JavaScript Framework | YES | YES | YES | .jsx |
| Kotlin | YES | YES | YES | .kt |
| PHP | YES | YES | YES | .php |
| Python | YES | YES | YES | .py |
| Ruby | YES | YES | YES | .rb |
| Rust | YES | YES | YES | .rs |
| Scala | YES | YES | YES | .scala, .sc |
| Swift | YES | YES | YES | .swift |
| Terraform | YES | YES | YES | .tf |
| TypeScript | YES | YES | YES | .ts |
| TypeScript Framework | YES | YES | YES | .tsx |
| Vue.js | YES | YES | YES | .vue |
| SQL | YES | YES | Coming soon | Coming soon |
| Bash/Shell | YES | YES | Coming soon | Coming soon |
| PowerShell | YES | YES | Coming soon | Coming soon |
| Dart | YES | YES | Coming soon | Coming soon |
| Lua | YES | YES | Coming soon | Coming soon |
| Visual Basic .NET | YES | YES | Coming soon | Coming soon |
| R | YES | YES | Coming soon | Coming soon |
| Assembly | YES | YES | Coming soon | Coming soon |
| Groovy | YES | YES | Coming soon | Coming soon |
| Delphi | YES | YES | Coming soon | Coming soon |
| Objective-C | YES | YES | Coming soon | Coming soon |
| Others | YES | YES | Coming soon | Coming soon |


Static Code Analysis and Open Source Vulnerabilities Check

For custom SAST tools configuration to support specific languages in the AI Code Review Agent, please reach out to us at [email protected]

| Languages | Static Code Analysis / Linters | Open Source Vulnerabilities Check |
| --- | --- | --- |
| C | YES (using Facebook Infer) | NO |
| C++ | YES (using Facebook Infer) | NO |
| C# | NO | NO |
| Go | YES (using golangci-lint) | YES |
| HTML/CSS | NO | NO |
| SCSS | NO | NO |
| Java | YES (using Facebook Infer) | NO |
| JavaScript | YES (using ESLint) | YES |
| Kotlin | NO | NO |
| PHP | NO | NO |
| Python | YES (using Astral Ruff and Mypy) | NO |
| Ruby | NO | NO |
| Rust | NO | NO |
| Scala | NO | NO |
| Swift | NO | NO |
| Terraform | NO | NO |
| TypeScript | YES (using ESLint) | YES |
| Vue.js | NO | NO |
| SQL | NO | NO |
| Bash/Shell | NO | NO |
| PowerShell | NO | NO |
| Dart | NO | NO |
| Lua | NO | NO |
| Visual Basic .NET | NO | NO |
| R | NO | NO |
| Assembly | NO | NO |
| Groovy | NO | NO |
| Delphi | NO | NO |
| Objective-C | YES (using Facebook Infer) | NO |
| Others | NO | NO |


Supported Tools and Platforms

| Tool | Type | Supported/Integrated |
| --- | --- | --- |
| Facebook Infer | Static Code Analysis for Java, C, C++, and Objective-C | YES |
| ESLint | Linter for JavaScript and TypeScript | YES |
| golangci-lint | Linter for Go | YES |
| Astral Ruff | Linter for Python | YES |
| Mypy | Static Type Checker for Python | YES |
| OWASP Dependency-Check | Security | YES |
| Snyk | Security | YES |
| Whispers | Secrets scanner (e.g., passwords, API keys, sensitive information) | YES |
| detect-secrets | Secrets scanner (e.g., passwords, API keys, sensitive information) | YES |
| GitHub cloud | Code Repository | YES |
| GitHub (Self-Managed) | Code Repository | YES, supports version 3.0 and above. |
| GitLab cloud | Code Repository | YES |
| GitLab (Self-Managed) | Code Repository | YES, supports version 15.5 and above. |
| Bitbucket | Code Repository | YES |
| Azure DevOps | Code Repository | Coming soon |


Supported output languages for code review feedback

Bito supports posting code review feedback in over 20 languages. You can choose your preferred language in the agent settings. Supported languages include the following:

  1. Arabic (عربي)

  2. Bulgarian (български)

  3. Chinese (Simplified) (简体中文)

  4. Chinese (Traditional) (繁體中文)

  5. Czech (čeština)

  6. Dutch (Nederlands)

  7. English (English)

  8. French (français)

  9. German (Deutsch)

  10. Hebrew (עִברִית)

  11. Hindi (हिंदी)

  12. Hungarian (magyar)

  13. Italian (italiano)

  14. Japanese (日本語)

  15. Korean (한국어)

  16. Malay (Melayu)

  17. Polish (polski)

  18. Portuguese (português)

  19. Russian (русский)

  20. Spanish (español)

  21. Turkish (Türkçe)

  22. Vietnamese (Tiếng Việt)