
Try Bito's AI code review

Coming soon...

Getting started

Deploy the AI Code Review Agent in Bito Cloud or opt for the self-hosted service.

The AI Code Review Agent supports two deployment options:

  • Bito Cloud (fully managed)

  • Self-hosted service (run on your own infrastructure)

Each option comes with its own set of benefits and considerations.

This guide walks you through both options to help you determine which deployment model best fits your team’s needs.

Bito Cloud

Bito Cloud provides a managed environment for running the AI Code Review Agent, offering a seamless, hassle-free experience. This option is ideal for teams looking for quick deployment and minimal operational overhead.

Pros:

  • Simplicity: Enjoy a straightforward setup with a single-click installation process, making it easy to get started without technical hurdles.

  • Maintenance-Free: Bito Cloud takes care of all necessary updates and maintenance, ensuring your Agent always operates on the latest software version without any effort on your part.

  • Scalability: The platform is designed to easily scale, accommodating project growth effortlessly and ensuring reliable performance under varying loads.

Cons:

  • Handling of Pull Request Diffs: For analysis purposes, diffs from pull requests are temporarily stored on our servers.


Self-hosted service

The self-hosted AI Code Review Agent offers a higher degree of control and customization, suited to organizations with specific requirements or those that prefer to manage their own infrastructure.

Pros:

  • Full Control: Self-hosting provides complete control over the deployment environment, allowing for extensive customization and the ability to integrate with existing systems as needed.

  • Privacy and Security: Keeping the AI Code Review Agent within your own infrastructure can enhance data security and privacy, as all information remains under your direct control.

Cons:

  • Setup Complexity: Establishing a self-hosted environment requires technical know-how and can be more complex than using a managed service, potentially leading to longer setup times.

  • Maintenance Responsibility: The responsibility of maintaining and updating the software falls entirely on your team, which includes ensuring the system is scaled appropriately to handle demand.

  • Bito Cloud: Install/run using Bito Cloud

  • Self-hosted: Install/run as a self-hosted service

Guide for Junie (JetBrains)

Integrate Junie (JetBrains) with AI Architect for more accurate, codebase-aware AI assistance.

Coming soon...

Welcome to Bito

Bito is an AI-powered code review tool that helps you catch bugs, security vulnerabilities, code smells, and other issues in your pull requests and code editors. By understanding your entire codebase, Bito provides context-aware, actionable suggestions that improve code quality and security.

It includes real-time recommendations from dev tools you already use, such as static code analysis, open source vulnerability scanners, linters, and secrets scanning tools (e.g., for passwords, API keys, and other sensitive information).

Supported platforms

AI Code Reviews in Git: GitHub, GitLab, Bitbucket

AI Code Reviews in IDE: VS Code, Cursor, Windsurf, JetBrains

See AI Code Review Agent in action

Quickstart guide

1

Sign up for Bito

Create your account at alpha.bito.ai to get started.

2

Helpful resources

Feature guides

Video library

Need help?

If you have any questions, feel free to email us at [email protected].

Privacy and security

Bito doesn't read or store your code. Nor do we use your code for AI model training.

This document explains some of Bito's privacy and security practices. Our Trust Center outlines our various accreditations (SOC 2 Type II) and our security policies. You can read our full Privacy Policy at https://bito.ai/privacy-policy/.

Bito AI

Security is top of mind at Bito, especially when it comes to your code. A fundamental approach we have taken is that we do not store any code, code snippets, indexes, or embedding vectors on Bito's servers unless you expressly allow it. You decide where to store your code: locally on your machine, in your cloud, or in Bito's cloud. Importantly, our AI partners do not store any of this information.

All requests are transmitted over HTTPS and are fully encrypted.

None of your code or AI requests is used for AI model training, and none is stored by our AI partners. Our AI model partners are OpenAI, Anthropic, and Google. Here are their policies, in which they state that they do not store or train on data related to API access (we access all AI models via APIs):

  1. OpenAI: https://openai.com/enterprise-privacy/

  2. Anthropic: https://www.anthropic.com/uk-government-internal-ai-safety-policy-response/data-input-controls-and-audit

  3. Google Cloud: https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-unveils-ai-and-ml-privacy-commitment (5th paragraph)

AI requests, including any code snippets you send to Bito, are sent to Bito's servers for processing so that we can respond with an answer.

Interactions with Bito AI are auto-moderated and managed for toxicity and harmful inputs and outputs.

Any response generated by the Bito IDE AI Assistant is stored locally on your machine to show the history in Bito UI. You can clear the history anytime you want from the Bito UI.

SOC 2 Type II Compliance

Bito is SOC 2 Type II compliant. This certification reinforces our commitment to safeguarding user data by adhering to strict security, availability, and confidentiality standards. SOC 2 Type II compliance is an independent, rigorous audit that evaluates how well an organization implements and follows these security practices over time.

Our SOC 2 Type II compliance means:

  • Enhanced Data Security: We consistently implement robust controls to protect your data from unauthorized access and ensure it remains secure.

  • Operational Excellence: Our processes are designed to maintain high availability and reliability, ensuring uninterrupted service.

  • Regular Monitoring and Testing: We conduct continuous monitoring and regular internal reviews to uphold the highest security standards.

This certification is an assurance that Bito operates with a high level of trust and transparency, providing you with a secure environment for your code and data.

For any further questions regarding our SOC 2 Type II compliance or to request a copy of the audit report, please reach out to [email protected]

Code Flow through Bito’s System

AI Code Review Agent

When you use the self-hosted/Docker version that you have set up in your VPC, the Docker image checks out the diff and clones the repo for static analysis and to determine the relevant code context for the review. This context and the diff are passed to Bito's system. The request is then sent to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito; no code is retained by the LLM. Bito then receives the response, processes it (such as formatting), and returns it to your self-hosted Docker instance, which posts it to your Git provider. The original query is not retained, nor are the results. After each code review is completed, the diff and the checked-out repo are deleted.

If you use Bito Cloud to run the AI Code Review Agent, it runs similarly to the self-hosted version. Bito ephemerally checks out the diff and clones the repo for static analysis and to determine the relevant code context for the review. This context and the diff are passed to Bito's system. The request is then sent by Bito to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito; no code is retained by the LLM. Bito then receives the response, processes it (such as formatting), and posts it to your Git provider. The original query is not retained, nor are the results. After each code review is completed, the diff and the checked-out repo are deleted.

AI Chat and Code Completions

When we receive an AI request from a user, it is processed by Bito's system (for example, adding relevant context and determining which Large Language Model (LLM) to use); the original query is not retained. The request is then sent to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito. Bito then receives the response, processes it (such as formatting), and returns it to the user's machine.

For enterprises, we have the ability to connect to your own private LLM accounts, including but not limited to OpenAI, Google Cloud, and Anthropic, or third-party services such as AWS Bedrock and Azure OpenAI. This way all data goes through your own accounts or Virtual Private Cloud (VPC), ensuring enhanced control and security.

Data and Business Privacy Policy

In line with Bito's commitment to transparency and adherence to data privacy standards, our comprehensive data and business privacy policy is integrated into our practices. Our complete Terms of Use, including the Privacy Policy, are available at https://bito.ai/terms-of-use/, with our principal licensing information detailed at https://bito.ai/terms-of-service/.

Data Retention Policy

Our data retention policy is carefully designed to comply with legal standards and to respect our customers' privacy concerns. The policy is categorized into four levels of data:

  1. Relationship and Usage Meta Data: This includes all data related to the customer's interaction with Bito, such as address, billing amounts, user account data (name and email), and usage metrics (number of queries made, time of day, length of query, etc.). This category of data is retained indefinitely for ongoing service improvement and customer support.

  2. Bito Business Data: Includes customer-created templates and settings. This data is deleted 90 days after the end of the business relationship with Bito.

  3. Confidential Customer Business Data: This includes code, code artifacts, and other organization-owned data such as Jira, Confluence, etc. This data is either stored on-prem/locally on the customer’s machines, or, if in the cloud, is deleted at the end of the business relationship with Bito.

  4. AI Requests: Data in an AI request to Bito’s AI system. AI requests are neither retained nor viewed by Bito. We ensure the confidentiality of your AI queries; Bito and our LLM partners do not store your code, and none of your data is used for model training. All requests are transmitted via HTTPS and are fully encrypted.

Sub-processor

Bito uses the following third-party services for infrastructure, support, and functional capabilities: Amazon AWS, Anthropic, Clearbit, GitHub, Google Analytics, Google Cloud, HelpScout, HubSpot, Microsoft Azure, Mixpanel, OpenAI, SendGrid, SiteGround, and Slack.

Personal Data

Bito follows industry-standard practices for protecting your email address and other personal details. Our passwordless login process, which requires a one-time passcode sent to your email for every login, keeps your account secure.

If you have any questions about our security and privacy, please email [email protected]


Guide for JetBrains AI Assistant

Integrate JetBrains AI Assistant with AI Architect for more accurate, codebase-aware AI assistance.

Coming soon...

Connect your Git provider

Select your preferred Git platform and follow the guided setup to install the agent:

  • GitHub

  • GitHub (Self-Managed)

  • GitLab

  • GitLab (Self-Managed)

  • Bitbucket

  • Bitbucket (Self-Managed)

Once installed, the agent will be linked to your repositories and ready to assist.

3

Review pull requests

The AI agent will automatically review new pull requests and leave inline comments with suggestions. You can also manually trigger a review by commenting /review on any pull request.

See full list of available commands

4

Chat with the agent

You can reply to comments posted by the Bito AI agent in a pull request to ask follow-up questions or request clarification. The agent will respond with context-aware answers to help you understand the feedback better.

Learn more

5

Configure agent settings

To customize your agent, go to Repositories and click the Settings button next to the relevant agent. From there, you can choose the review feedback mode, enable or disable automatic reviews, define custom guidelines to align with your team’s standards, and more.

Learn more


Install/run as a self-hosted service

Deploy the AI Code Review Agent on your machine.

The self-hosted AI Code Review Agent offers a more private and customizable option for teams looking to enhance their code review processes within their own infrastructure, while maintaining complete control over their data. This approach is ideal for organizations with specific compliance, security, or customization requirements.

Understanding CLI vs webhooks service

When setting up the AI Code Review Agent, you have the flexibility to choose between two primary modes of operation: CLI and webhooks service.

  • CLI allows developers to manually initiate code reviews directly from the terminal. This mode is ideal for quick, on-demand code reviews without the need for continuous monitoring or integration.

  • Webhooks service transforms the Agent into a persistent service that automatically triggers code reviews based on specific events, such as pull requests or comments on pull requests. This mode is suitable for teams looking to automate their code review processes.

For more details, visit the CLI vs webhooks service page.

Deployment Options

Based on your needs and the desired integration level with your development workflow, choose one of the following options to install and run the AI Code Review Agent:

Before proceeding, ensure you've completed all the necessary prerequisites for the self-hosted AI Code Review Agent.

  1. Install/run via CLI: Ideal for developers seeking a simple, interactive way to conduct code reviews from the command line.

  2. Install/run via webhooks service: Perfect for teams looking to automate code reviews through external events, enhancing their CI/CD workflow.

  3. Install/run via GitHub Actions: A great option for GitHub users to seamlessly integrate automated code reviews into their GitHub Actions workflows.

Install/run using Bito Cloud

Deploy the AI Code Review Agent in Bito Cloud.

Bito Cloud offers a single-click solution for using the AI Code Review Agent, eliminating the need for any downloads on your machine. You can create multiple instances of the Agent, allowing each to be used with a different repository on a Git provider such as GitHub, GitLab, or Bitbucket.

We also support GitHub (Self-Managed), GitLab (Self-Managed), and Bitbucket (Self-Managed).

The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

Integrate the AI Code Review Agent into the CI/CD pipeline

Automate code reviews in your Continuous Integration/Continuous Deployment (CI/CD) pipeline—compatible with all CI/CD tools, including Jenkins, Argo CD, GitLab CI/CD, and more.

The bito-actions.sh script lets you integrate the AI Code Review Agent into your CI/CD pipeline for automated code reviews. This document provides a step-by-step guide to help you configure and run the script successfully.

Installation and Configuration Steps

  1. Select the appropriate Git provider guide based on your Git provider, and follow the step-by-step instructions to install the AI Code Review Agent using Bito Cloud.

Example Questions

What Types of Questions Can be Asked?

You can try asking any question you may have in mind regarding your codebase. In most cases, Bito will give you an accurate answer. Bito uses AI to determine if you are asking about something in your codebase.

However, if you want to ask a question about your code no matter what, then you can use our pre-defined keywords such as "my code", "my repo", "my project", "my workspace", etc., in your question.

The complete list of these keywords is given on our Available Keywords page.

Here are some popular use cases (with example questions):

Upgrading Bito plugin

How to Update Bito Plugin on VS Code and JetBrains IDEs

Keeping your Bito plugin up to date ensures you have access to the latest features and improvements. In this article, we will guide you through the steps to update the Bito plugin on both VS Code and JetBrains IDEs. Let's dive in!

Updating Bito Plugin on VS Code

  1. Open your VS Code IDE

  2. Navigate to the Extensions view by clicking on the square icon in the left sidebar

Available Keywords

Keywords to invoke AI that understands your code

Here is the list of keywords in different languages to ask questions regarding your entire codebase. Use any of these keywords in your prompts inside Bito chatbox.

English:

  • my code

How it Works?

Bito indexes your code locally using AI

When you open a project in Visual Studio Code or a JetBrains IDE, Bito lets you enable the indexing of code files from that project’s folder. This indexing mechanism leverages our new AI Stack, which enables Bito to understand your entire codebase and answer any questions regarding it.

The index is stored locally on your system to provide better performance while maintaining the security/privacy of your private code.

It takes roughly 12 minutes for each 10 MB of code to understand your repo, as the index is built locally; a 50 MB repository, for example, would take about an hour.

Open Bito in a new tab or window

Learn how to customize Bito’s view by switching from a side panel to a new tab or a separate window.

FAQs

Answers to popular questions

Enabling Unicode for Windows 10 and below

Unicode characters (used by other languages) might not be readily supported in Command Prompt on Windows 10 or below. You can run the command chcp 936 in cmd before using bito to support Unicode characters on Windows 10 or below.

Chat session history

Bito automatically saves your chat session history. The session history is stored locally on your computer. You can return to any chat session and continue the AI conversation from where you left off. Bito automatically maintains and restores the memory of the loaded chat session.

You can "Delete" any saved chat session or share a permalink to the session with your coworkers.

Here is the video overview of accessing and managing the session history.

AI that Understands Your Code

Work on your code with AI that knows your code!

  • my repo

  • my project

  • my workspace

  • Chinese:

    • 我的代码

    • 我的仓库

    • 我的代码库

    • 我的项目

    • 我的文件夹

  • Chinese Traditional:

    • 我的程式碼

    • 我的倉庫

    • 我的項目

    • 我的工作區

  • Spanish:

    • Mi código

    • Mi repo

    • Mi proyecto

    • Mi espacio de trabajo

  • Japanese:

    • 私のコード

    • 私のリポ

    • 私のプロジェクト

    • 私のワークスペース

  • Portuguese:

    • Meu código

    • Meu repo

    • Meu projeto

    • Meu espaço de trabalho

  • Polish:

    • Mój obszar roboczy

    • moje miejsce pracy

    • mój obszar roboczy

    • moj kod

    • mój kod

    • moim kodzie

    • moje repo

    • moje repozytorium

    • moim repo

    • moj projekt

    • mój projekt

    • moim projekcie

    Code Explanation
    • What a particular code file does

      • In my code what does code in sendgrid/sendemail.sh do?

    • What a particular function in my code does

      • In my repo explain what function message_tokens do

    Code Translation

    • In my project rewrite the code of signup.php file in nodejs

    Code Refactoring

    • In my workspace suggest code refactoring for api.py and mention all other files that need to be updated accordingly

    Fix Bugs

    • In my code find runtime error possibilities in script.js

    • Find logical errors in scraper.py in my code

    Detect Code Smells

    • In my code detect code smells in /app/cart.php and give solution

    Generate Documentation

    • Generate documentation for search.ts in my workspace in markdown format

    Generate Unit tests

    • In my code write unit tests for index.php

    • In my code generate test code for code coverage of cache.c

    Summarize Recent Code Changes

    • summarize recent code changes in my code

    Code Search using natural language

    • Any function to compute tokens in my project?

    • Any code or script to send emails in my workspace?

    • In my repo list all the line numbers where $alexa array is used in index.php.

    Give details of making modifications

    • In my code list all the files and code changes needed to add column desc in table raw_data in dailyReport DB.


LLM parameters

Parameters are the individual elements of a Large Language Model that are learned from the training data. Think of them as the synapses in a human brain: tiny connections that store learned information.

How Parameters Work in LLMs

Each parameter in an LLM holds a tiny piece of information about the language patterns the model has seen during training. Parameters are the fundamental elements that determine the behavior of the model when it generates text.

For example, imagine teaching a child what a cat is by showing them pictures of different cats. Each picture tweaks the child's understanding and definition of a cat. In LLMs, each training example tweaks the parameters to better understand and generate language.

The Role of Parameters in Understanding and Generating Language

Parameters are crucial because they allow the model to perform tasks such as translating text, writing articles, and even generating source code. When you ask an AI a question, the parameters work together to sift through the learned patterns and generate a response that makes sense based on the training the model received.

For instance, if you ask an AI to write a poem, the parameters will determine how to structure the poem, what words to use, and how to create rhyme or rhythm, all based on the data the model was trained on.

The Scale of LLM Parameters: Just How Large Are We Talking?

When we say "Large" in LLM, we're not kidding. The size of a language model is directly related to the number of parameters it has.

Take GPT-4, for example, with its 1.76 trillion parameters. That's like 1.76 trillion different dials the model can tweak to get language just right. Each parameter holds a piece of information that can contribute to understanding a sentence's structure, the meaning of a word, or even the tone of a text.

Earlier models had significantly fewer parameters. GPT-1, for instance, had only 117 million parameters. With each new generation, the number of parameters has grown exponentially, leading to more sophisticated and nuanced language generation.
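To get a feel for what 1.76 trillion parameters means in storage terms, here is a rough back-of-the-envelope calculation. It assumes each parameter is stored as a 16-bit (2-byte) value; actual storage formats vary by model and are not stated in the text, so treat this as an order-of-magnitude sketch only:

```latex
1.76 \times 10^{12}\ \text{parameters} \times 2\ \tfrac{\text{bytes}}{\text{parameter}} \approx 3.5 \times 10^{12}\ \text{bytes} \approx 3.5\ \text{TB}
```

By the same arithmetic, GPT-1's 117 million parameters would occupy only about 234 MB, which illustrates how steep the growth between generations has been.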

Training LLMs: How Parameters Learn

Training an LLM involves a process called "backpropagation," in which the model makes predictions, checks how far off it is, and adjusts its parameters accordingly.

Let's say we're training an LLM to recognize the sentiment of a sentence. We show it the sentence "I love sunny days!" tagged as positive sentiment. The LLM predicts positive but isn't very confident. During backpropagation, it adjusts the parameters to increase the confidence for future similar sentences.

This process is repeated millions of times with millions of examples, gradually fine-tuning the parameters so that the model's predictions become more accurate over time.
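The adjustment step described above is usually written as a gradient-descent update. The formula below is the generic textbook form, not something specific to any particular model: each parameter vector \(\theta\) is nudged against the gradient of the loss \(L\), scaled by a learning rate \(\eta\):

```latex
\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)
```

Repeating this small update over millions of examples is what "fine-tuning the parameters" means in practice: each step slightly reduces the model's error on the example it has just seen.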

Parameters' Impact on AI Performance and Limitations

The number of parameters is one of the key factors influencing an AI model's performance. However, more parameters mean the model requires more computational power and data to train effectively, which can lead to increased costs and longer training times.

More parameters also bring a greater chance of mistakes: the model may start seeing patterns where there aren't any, a phenomenon known as "overfitting," in which the model performs well on training data but poorly on new, unseen data.

The Future of Parameters in LLMs

The future of LLMs might not just be about adding more parameters, but also about making better use of them. Innovations in how parameters are structured and how they learn are ongoing.

AI researchers are exploring ways to make LLMs more parameter-efficient, meaning they can achieve the same or better performance with fewer parameters. Techniques like "parameter sharing" and "sparse activation" are part of this cutting-edge research.

Conclusion

Parameters in LLMs are the core elements that allow these models to understand and generate human-like text. While the sheer number of parameters can be overwhelming, it's their intricate training and fine-tuning that empower AI to interact with us in increasingly complex ways.

As AI continues to evolve, the focus is shifting from simply ramping up parameters to refining how they're used, ensuring that the future of AI is not just smarter but also more efficient and accessible.

Installation guide

Get a 14-day FREE trial of Bito's AI Code Review Agent.

Connect Bito to your Git provider

Select your Git provider from the options below and follow the step-by-step installation guide to seamlessly set up your AI Code Review Agent.

Be sure to review the prerequisites and the installation/configuration steps provided in the documentation.
  • Download the bito-action-script folder from GitHub, which includes a shell script (bito-actions.sh) and a configuration file (bito_action.properties).

  • You can integrate the AI Code Review Agent into your CI/CD pipeline in two ways, depending on your preference:

    • Option 1: Using the bito_action.properties File

      • Configure the following properties in the bito_action.properties file located in the downloaded bito-action-script folder.

        • agent_instance_url: The URL of the Agent instance, provided after configuring the AI Code Review Agent with Bito Cloud.

        • agent_instance_secret: The secret key for the Agent instance, obtained after configuring the AI Code Review Agent with Bito Cloud.

        • pr_url: The URL of your pull request on GitLab, GitHub, or Bitbucket.

    • Run the following command:

      • ./bito_actions.sh bito_action.properties

      • Note: When using the properties file, make sure to provide all three parameters in the .properties file.
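As a concrete sketch of Option 1, the steps above might look like this in a shell session. All values below are placeholders (a hypothetical instance URL, a dummy secret, and an example PR), not real credentials:

```shell
# Create the bito_action.properties file with the three required keys.
# Every value below is a placeholder -- substitute your own.
cat > bito_action.properties <<'EOF'
agent_instance_url=https://example.bito.ai/agent/instance-123
agent_instance_secret=REPLACE_WITH_YOUR_SECRET
pr_url=https://github.com/your-org/your-repo/pull/42
EOF

# The review would then be triggered with the downloaded script:
#   ./bito_actions.sh bito_action.properties
echo "Wrote $(grep -c '=' bito_action.properties) properties"
```

Note that the script itself is not invoked here; the snippet only prepares the configuration file that ./bito_actions.sh consumes.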

    • Option 2: Using Runtime Values

      • Provide all necessary values directly on the command line:

        • ./bito_actions.sh agent_instance_url=<agent_instance_url> agent_instance_secret=<secret> pr_url=<pr_url>

        • Replace <agent_instance_url>, <secret>, and <pr_url> with your specific values.

      • Note: You can also override the values given in the .properties file or provide values that are not included in the file. For example, you can configure agent_instance_url and agent_instance_secret in the bito_action.properties file, and only pass pr_url on the command line during runtime.

        • ./bito_actions.sh bito_action.properties pr_url=<pr_url>

    2. Incorporate the AI Code Review Agent into your CI/CD pipeline by adding the appropriate commands to your build or deployment scripts. This integration automatically triggers code reviews as part of the pipeline, enhancing your development workflow by enforcing code quality checks with every change.


  3. In the search bar, type "Bito" to locate the Bito plugin

  4. Once you locate the Bito plugin, click on the update button to initiate the update

Pro Tip 💡: Enable auto-update for the Bito plugin in VS Code (as shown in the video)

    Updating Bito Plugin on JetBrains IDEs

    1. Open your JetBrains IDE (e.g., IntelliJ IDEA, PyCharm, etc.)

    2. Go to Settings by clicking on "File" in the menu bar (Windows/Linux) or by clicking on "IntelliJ IDEA" in the menu bar (macOS).

    3. In the Settings window, navigate to the "Plugins" section

    4. Switch to the "Installed" tab to view the list of installed plugins

    5. Locate the Bito plugin in the list and click on the update button to initiate the update

    How to Ask Questions?

    Once indexing is complete, you can ask any question in the Bito chatbox. Bito uses AI to determine if you are asking about something in your codebase. If Bito is confident, it grabs the relevant parts of your code from our index and feeds them to the Large Language Models (LLMs) for accurate answers. But if it's unsure, Bito will ask you to confirm before proceeding.

If you ask a general question (not related to your codebase), Bito will send your request directly to our LLM without first looking for relevant local context.

    However, if you want to ask a question about your code no matter what, then you can use specific keywords such as "my code", "my repo", "my project", "my workspace", etc., in your question.

    The complete list of these keywords is given on our Available Keywords page.

    Once Bito sees any input containing these keywords, it will use the index to identify relevant portions of code or content in your folder and use it for processing your question, query, or task.

    Security of your code

    As always, security is top of mind at Bito, especially when it comes to your code. Our fundamental approach is to keep all code on your machine: we do not store any code, code snippets, indexes, or embedding vectors on Bito’s servers or with our API partners. In addition, none of your code is used for AI model training.

    Learn more about Bito’s Privacy and Security Practices.


    If you are on Windows 11, you shouldn't encounter any such issues.

    Using Homebrew for Bito CLI

    1. Before using Homebrew, please make sure that you uninstall any previously installed versions of Bito CLI using the uninstall guide provided here.

    2. Once the above is done, you can use the following commands to install Bito CLI using Homebrew:

      1. First, tap the CLI repo using the brew tap gitbito/bitocli command. This is a one-time action and is not required every time.

      2. Now you can install Bito CLI using the following command:

        • brew install bito-cli - this should install Bito CLI based upon your machine architecture.

      3. To update Bito CLI to the latest version, use the following commands:

        1. brew update - always run this first to update all the required packages and avoid upgrade errors.

        2. brew upgrade bito-cli - this will upgrade Bito CLI to the latest version.

      4. To uninstall Bito CLI, you can either use the uninstall guide or use the following command:

        • brew uninstall bito-cli - this should uninstall Bito CLI completely from your system.
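    Taken together, a typical Homebrew session for Bito CLI might look like the sketch below. Note that the brew upgrade bito-cli step is the standard Homebrew upgrade command and is an assumption here; check the official docs if your setup differs.

    ```shell
    # One-time setup: tap the Bito CLI repository
    brew tap gitbito/bitocli

    # Install Bito CLI for your machine's architecture
    brew install bito-cli

    # Update package definitions first, then upgrade
    # (brew upgrade bito-cli is assumed standard Homebrew usage)
    brew update
    brew upgrade bito-cli

    # Remove Bito CLI completely when no longer needed
    brew uninstall bito-cli
    ```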


    Overview

    On-demand, context-aware AI code reviews for GitHub, GitLab, and Bitbucket.

    Get a 14-day FREE trial of Bito's AI Code Review Agent.

    Bito’s AI Code Review Agent is the first agent built with Bito’s AI Agent framework and engine. It is an automated AI assistant (powered by Anthropic’s Claude Sonnet 3.7) that will review your team’s code; it spots bugs, issues, code smells, and security vulnerabilities in Pull/Merge Requests (PR/MR) and provides high-quality suggestions to fix them.

    It seamlessly integrates with Git providers such as GitHub, GitLab, and Bitbucket, automatically posting recommendations directly as comments within the corresponding Pull Request. It includes real-time recommendations from static code analysis and OSS vulnerability tools such as fbinfer and Dependency-Check, and can include high-severity suggestions from other third-party tools you use, such as Snyk.

    We also support GitHub (Self-Managed) and GitLab (Self-Managed).

    The AI Code Review Agent acts as a set of specialized engineers, each analyzing a different aspect of your PR, such as Performance, Code Structure, Security, Optimization, and Scalability. By combining and filtering their results, the Agent provides much more detailed and insightful code reviews, improving review quality and helping you save time.

    The AI Code Review Agent helps engineering teams merge code faster while also keeping the code clean and up to standard, making sure it runs smoothly and follows best practices.

    It ensures a secure and confidential experience without compromising on reliability. Bito neither reads nor stores your code, and none of your code is used for AI model training. Learn more about our privacy and security practices.

    By accessing Bito's feature, the AI Code Review Agent can analyze relevant context from your entire repository, providing better context-aware analysis and suggestions. This tailored approach ensures a more personalized and contextually relevant code review experience.

    To comprehend your code and its dependencies, we use Symbol Indexing, Abstract Syntax Trees (AST), and Embeddings. Each step feeds into the next, starting from locating specific code snippets with Symbol Indexing, getting their structural context with AST parsing, and then leveraging embedding vectors for broader semantic insights. This approach ensures a detailed understanding of the code's functionality and its dependencies. For more information, see

    The AI Code Review Agent is built using Bito Dev Agents, an open framework and engine for building custom AI Agents for software developers. These agents understand code, can connect to your organization’s data and tools, and can be discovered and shared via a global registry.

    Why use an AI Agent for code review?

    In many organizations, senior developers spend approximately half of their time reviewing code changes in PRs to find potential issues. The AI Code Review Agent can help save this valuable time.

    AI Code Review Agent speeds up PR merges by 89%, reduces regressions by 34%, and delivers 87% human-grade feedback.

    However, it's important to remember that the AI Code Review Agent is designed to assist, not replace, senior software engineers. It takes care of many of the more mundane issues involved in code review, so senior engineers can focus on the business logic and how new development is aligned with your organization’s business goals.

    Pricing details

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our pricing page.

    Learn more

    Configuration

    Manage Bito CLI settings

    bito config [flags]

    • run bito config -l or bito config --list to list all config variables and values.

    • run bito config -e or bito config --edit to open the config file in default editor.
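    For quick reference, the two commands above can be run as follows:

    ```shell
    # List all config variables and their current values
    bito config --list

    # Open the config file in your default editor
    bito config --edit
    ```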

    Sample Configuration
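    The exact contents of the config file can vary by Bito CLI version; the sketch below is illustrative only, using the two settings documented on this page (access_key and preferred_ai_model) with placeholder values:

    ```yaml
    # Bito CLI config file (open it with: bito config -e)
    access_key: "YOUR_ACCESS_KEY_HERE"   # alternate authentication to Email & OTP
    preferred_ai_model: ADVANCED         # BASIC or ADVANCED (case-insensitive)
    ```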

    What is an Access Key and How to Get it?

    An Access Key is an alternate authentication mechanism to Email & OTP based authentication. You can use an Access Key in Bito CLI to access various functionalities such as Bito AI Chat. Here’s a guide on . After creating the Access Key, add it to the config file mentioned above, for example: access_key: “YOUR_ACCESS_KEY_HERE”

    The Access Key can be persisted in Bito CLI by adding it to the config file using bito config -e. A persisted Access Key can be overridden for a transient session (one that lasts only a short time) by running bito -k <access-key> or bito --key <access-key>.

    Preferred AI Model Type

    By default, the AI Model Type is set to ADVANCED; it can be overridden for the current session by running bito -m <BASIC/ADVANCED>. The model type can be set to BASIC or ADVANCED and is case-insensitive.

    "ADVANCED" refers to best-in-class AI models like GPT-4o and Claude Sonnet 3.5, while "BASIC" refers to AI models like GPT-4o mini and similar models.

    When using Basic AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with Advanced AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that Advanced models can process your entire code files, leading to more accurate answers.

    If you are seeking the best results for complex tasks, then choose Advanced AI models.

    Access to Advanced AI models is only available in Bito's paid plans. However, Basic AI models can be used by both free and paid users.

    To see how many Advanced AI requests you have left, please visit the page. There, you can also set limits to control usage of Advanced AI model requests for your workspace and avoid unexpected expenses.

    Also note that even if you have set preferred_ai_model: ADVANCED in the Bito CLI config, once your Advanced AI model request quota is exhausted (or your self-imposed limit is reached), Bito CLI will fall back to Basic AI models.

    Bito CLI

    Command Line Interface (Powered by Bito AI Chat) to Automate Your Tasks

    How to use?

    Learn how to work with Bito CLI (including examples)

    Prerequisites

    Terminal

    • Bash (for Mac and Linux)

    • CMD (for Windows)

    Using Bito CLI

    Before you can use Bito CLI, you need to install and configure it. Once the setup is done, follow the steps below:

    • Execute Chat: Run the bito command in your terminal to get started. Ask anything you want help with, such as awk command to print first and last column.

    • Note: Bito CLI supports long prompts through multiline input. To complete and submit the prompt, press Ctrl+D; the Enter/Return key adds a new line to the input.

    Here is the complete list of .

    Getting Started

    Check out the video below to get started with Bito CLI.

    Examples

    Here are two examples for you to see My Prompt in action:

    1. How to Create Git Commit Messages and Markdown Documentation with Ease using Bito CLI My Prompt:

    2. How to generate test data using Bito CLI My Prompt:

    Overview

    Bito CLI (Command Line Interface)

    Bito CLI (Command Line Interface) is an innovative tool that harnesses the power of Bito AI chat functionality to automate software development workflows. It can automate repetitive tasks like software documentation, test case generation, pull request review, release notes generation, writing commit message or pull request description, and much more.

    For example, you can run a command like bito -p writedocprompt.txt -f mycode.js for non-interactive mode in Bito CLI (where writedocprompt.txt contains your prompt text, such as Explain the code below in brief, and mycode.js contains the actual code on which the action is to be performed).
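    As a sketch, the non-interactive flow above can be scripted as follows (the file names and prompt text are the ones used in the example):

    ```shell
    # Write the prompt text into a file
    echo "Explain the code below in brief" > writedocprompt.txt

    # Run Bito CLI in non-interactive mode against a code file
    bito -p writedocprompt.txt -f mycode.js
    ```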

    Here is the complete list of .

    Download Bito CLI from GitHub:

    With support for 50+ programming languages (Python, JavaScript, SQL, etc.) and 50+ spoken languages (English, German, Chinese, etc.), Bito CLI is versatile and adaptable to different project needs. Furthermore, it's designed to be compatible across multiple operating systems, including Windows, Mac, and Linux, ensuring a wide range of usability.

    You can either use "ADVANCED" AI models like GPT-4o, Claude Sonnet 3.5, and best in class AI models, or "BASIC" AI models like GPT-4o mini and similar models inside Bito CLI.

    When using Basic AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with Advanced AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that Advanced models can process your entire code files, leading to more accurate answers.

    If you are seeking the best results for complex tasks, then choose Advanced AI models.

    Access to Advanced AI models is only available in Bito's paid plans. However, Basic AI models can be used by both free and paid users.

    Bito CLI is an invaluable asset for developers looking to increase efficiency and productivity in their workflows. It allows developers to save time and focus on more complex and creative aspects of their work. Additionally, Bito CLI plays a crucial role in supporting continuous integration and deployment (CI/CD) processes. Explore some automations we've created using Bito CLI, which you can implement in your projects right now. These automations showcase the powerful capabilities of Bito CLI.

    To get started, check out our guide on , ensuring you make the most out of it.

    Share chat session

    Let your friends see what you and Bito are creating together.

    Easily share insights from any AI Chat session by creating a unique shareable link directly from the Bito extension in VS Code or JetBrains IDEs.

    Whether you need to share AI-generated code suggestions, explanations, or any other chat insights, this feature allows you to create a public link that others can access. The link will remain active for 15 days and can be viewed by anyone with access to the URL, making collaboration and knowledge sharing seamless.

    Additionally, you can quickly share your AI Chat session through a pre-written Tweet or an Email.

    Note:

    • The link will expire in 15 days.

    • The session link will be publicly accessible by anyone with the link.

    • Recipients can access the chat session in any web browser by using the unique URL.

    Let's see how it is done:

    1. Open Bito in Visual Studio Code or any JetBrains IDE.

    2. Start a conversation in Bito’s AI Chat user interface.

    3. Locate the share button on the top right of the Bito extension side-panel.

    4. Click the share button to open a menu with options, including X (Twitter), Email, and Link.

    Appearance settings

    The IDE customization settings are accessible through the new toolbar dropdown menu titled "Extension Settings".

    Light and Dark Themes

    In Visual Studio Code and JetBrains IDEs, you can choose between a light or dark theme for the Bito panel to match your coding environment preference. For VS Code users, Bito also offers an adaptive theme mode in which the Bito panel and font colors automatically adjust based on your selected VS Code theme, creating a seamless visual experience.

    You can set the desired theme through the Theme dropdown.

    Theme Screenshots

    “Always Light” Theme

    “Always Dark” Theme

    “Light” or “Dark” Theme - Matching IDE

    “Adaptive” Theme

    Theme adapted from “Noctis Lux”:

    Theme adapted from “Solarized Light”:

    Theme adapted from “Tomorrow Night Blue”:

    Theme adapted from “barn-cat”:


    Font Size Control

    Take control of your code readability! Within the Bito extension settings, you can now adjust the font size for a comfortable viewing experience.

    You can set the desired font size through the Font Size text field. However, if you check the Font Size (Match with IDE Font) checkbox, it will override the set font size with the Editor font size.

    Bito's AI stack

    Learn About AI Technologies & Concepts Powering Bito

    How does Bito Understand My Code?

    Sneak Peek into the Inner Workings of Bito

    Bito deploys a Vector Database locally on the user’s machine, bundled as part of the Bito IDE plug-in. This database uses Embeddings (vectors with over 1,000 dimensions) to represent text, function names, objects, etc. from the codebase, transforming them into a multi-dimensional vector space.

    Then when you give it a function name or ask it a question, that query is converted into a vector and is compared to other vectors nearby. This returns the relevant search results. So, it's a way to perform search not on keywords, but on meaning. Vector Databases are able to do this kind of search very quickly.

    Learn more about how Bito indexes your code so that it can understand it.

    Bito also uses an Agent Selection Framework that acts like an autonomous entity capable of perceiving its environment, making decisions, and taking actions to achieve certain goals. It figures out whether it needs to run an embeddings comparison on your codebase, perform an action against Jira, or do something else.

    Finally, Bito utilizes Large Language Models (LLMs) from OpenAI, Anthropic, and others that actually provide the answer to the question by leveraging the context provided by the Agent Selection Framework and the embeddings.

    This is what makes us stand out from other AI tools like ChatGPT, GitHub Copilot, etc. that do not understand your entire codebase.

    We’re making significant innovations in our AI stack to simplify coding for everyone. To learn more about this, head over to .

    Delete unused Agent instances

    Easily delete Agent instances you no longer need.

    If you no longer need an AI Code Review Agent instance, you can delete it to keep your workspace organized. Follow the steps below to quickly remove any unused Agents.

    1. Log in to Bito Cloud and select a workspace to get started.

    2. From the left sidebar, select Code Review Agents.

      If your Bito workspace is connected to your GitHub/GitLab/Bitbucket account, a list of AI Code Review Agent instances configured in your workspace will appear.

    3. Before deleting an Agent, ensure that any repositories currently using it are reassigned to another Agent; otherwise, a warning popup will appear.

    4. Locate the Agent you wish to delete and click the Delete button next to it.

    Note: The Default Agent (provided by Bito) cannot be deleted.

    Clone an Agent instance

    Easily duplicate Agent configurations for faster setup.

    Save time and effort by quickly creating a new AI Code Review Agent instance using the configuration settings of an existing one. It’s a fast and simple way to set up multiple Agent instances without having to reconfigure each one.

    Follow the steps below to get started:

    1. Log in to Bito Cloud and select a workspace to get started.

    2. From the left sidebar, select Code Review Agents.

    3. If your Bito workspace is connected to your GitHub/GitLab/Bitbucket account, a list of AI Code Review Agent instances configured in your workspace will appear. Locate the instance you wish to duplicate and click the Clone button next to it.

    4. An Agent configuration form will open, pre-populated with the existing instance’s configuration values. You can edit these values as needed.

    5. Click Select repositories to choose Git repositories for the new Agent.

    6. To enable code review for a specific repository, simply select its corresponding checkbox. You can also enable repositories later, after the Agent has been created. Once done, click Save and continue to save the new Agent configuration.

    7. When you save the configuration, your new Agent instance will be added and available on the page.

    Creating a Bito account

    Try Advanced AI Coding Assistant for Free

    You need to create an account with your email to use Bito. You can sign up for Bito directly from the IDE extension or the Bito web interface at https://alpha.bito.ai/.

    1. After you install the Bito extension, click the "Sign up or Sign-in" button on the Bito sign-up flow screen.

    2. On the next screen, enter your work email address and verify it through a six-digit code sent to your email address.

    3. Once your email is verified, you will get an option to create your profile. Enter your full name and set the AI output language. Bito uses this setting to generate output in that language, regardless of the prompt language.

    Now, let's learn to start using Bito.

    Account and settings

    Manage your Bito workspace, members and the personal settings

    Prerequisites

    Key requirements for self-hosting the AI Code Review Agent.

    Minimum System Requirements

    A machine with the following minimum specifications is recommended for Docker image deployment and for obtaining optimal performance of the AI Code Review Agent.

    Requirement
    Minimum Specification

    Start free trial

    Unlock premium Bito features with our 14-day free trial.

    The Bito free trial gives you access to premium features for 14 days, allowing you to experience the full capabilities of Bito's AI-powered coding assistant.

    You can start your free trial directly from the Bito IDE extension using any of the three methods below.

    How to start your free trial

    Request changes comments

    Block merges until code issues are fixed.

    Bito’s Request changes comments feature helps enforce code quality by blocking merges until all AI-generated review comments are resolved—fully supported in GitHub, GitLab, and Bitbucket.

    When enabled, Bito identifies actionable issues in pull requests and posts them as formal “Request changes” review comments. If your repository uses branch protection rules that require all review conversations to be resolved before merging, Bito’s flagged comments will automatically block the pull request until addressed.

    This ensures developers don’t accidentally merge incomplete or unreviewed code.

    How it works

    Chat with AI Code Review Agent

    Ask questions about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    Real-time collaboration with the AI Code Review Agent accelerates your development cycle. By delivering immediate, actionable insights, it eliminates the delays typically experienced with human reviews. Developers can engage directly with the Agent to clarify recommendations on the spot, ensuring that any issues are addressed swiftly and accurately.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    How to chat?

    Large Language Models (LLM)

    Large Language Models (LLMs) are advanced AI algorithms trained to understand, generate, and sometimes translate human language. They are called “large” for a good reason: they consist of millions or even billions of parameters, which are the fundamental data points the model uses to make predictions and decisions.

    How Do LLMs Work?

    Imagine teaching a child language by reading every book you can find. That’s essentially what LLMs go through. They are fed vast amounts of text data and use statistical methods to find patterns and learn from context. Through a process known as machine learning, these models become adept at predicting the next word in a sentence, answering questions, summarizing texts, and more.

    CLI vs webhooks service

    From one-time reviews to continuous automated reviews.

    On your machine or in a Private Cloud, you can run the AI Code Review Agent via either CLI or webhooks service. This guide will teach you about the key differences between CLI and webhooks service and when to use each mode.

    Difference Between CLI and webhooks service

    The main difference between the CLI and the webhooks service lies in their operational approach and purpose. In CLI mode, the Docker container is used for a one-time code review. This mode is ideal for isolated, single-instance analyses where a quick, direct review of the code is needed.

    On the other hand, the webhooks service is designed for continuous operation. When set to webhooks service mode, the AI Code Review Agent remains online and active at a specified URL. This continuous operation allows it to respond automatically whenever a pull request is opened in a repository. In this scenario, the Git provider notifies the server, triggering the AI Code Review Agent to analyze the pull request and post its review as a comment directly on it.

    Prompts

    A prompt, in the simplest terms, is the initial input or instruction given to an AI model to elicit a response or generate content. It's the human touchpoint for machine intelligence, a cue that sets the AI's gears in motion.

    Prompts are more than mere commands; they are the seeds from which vast trees of potential conversations and content grow. Think of them as the opening line of a story, the question in a quiz, or the problem statement in a mathematical conundrum – the prompt is the genesis of the AI's creative or analytical output.

    For example, when you ask GPT-4o "What's the best way to learn a new language?" you've given it a prompt. The AI then processes this and generates advice based on its training data.

    The Art of Prompt Engineering

    Overview

    AI that Understands Your Code

    Bito has created the ability for our AI to understand your codebase, which produces dramatically better results that are personalized to you. This can help you write code, refactor code, explain code, debug, and generate test cases – all with the benefits of AI knowing your entire code base.

    Bito AI automatically figures out if you're asking about something in your code. If it's confident, it grabs the relevant parts of your code from our index and feeds them to the Large Language Models (LLMs) for accurate answers. But if it's unsure, Bito will ask you to confirm before proceeding.

    To specifically ask questions related to your codebase, add the keyword "my code" in English, Cantonese, Japanese, Mandarin, Spanish, or Portuguese (more languages coming soon) to your questions in the Bito chatbox.

    Setting AI output language

    Communicate in Your Preferred Language

    Bito users come from all over the world, and Bito makes it super easy to set the AI output language. Bito will automatically generate text output in the language set in your user profile, regardless of the prompt input language.

    Bito allows setting this language when creating an account, as described in .

    You can also set or change this setting anytime by going to in Bito Cloud. Here is a quick video walkthrough.

    Supported Languages:

    Bito offers 20+ languages for you to choose from. Here is the list of currently supported languages:

    1. English (Default Language)

    AI Chat in Bito

    Bito AI chat is the most versatile and flexible way to use AI assistance. You can type any technical question to generate the best possible response. Check out these to understand all you can do with Bito.

    To use AI Chat, type the question in the chat box and press 'Enter' to send. You can add a new line in the question with 'SHIFT+ENTER'.

    Bito starts streaming answers within a few seconds, depending on the size and complexity of the prompt.

    Note: Team Plan users receive

    Replace <pr_url> with your specific value.

    The Training Regime

    Data, Data, and More Data: LLMs are the heavyweight champions of the data world. They are trained on diverse datasets comprising encyclopedias, books, articles, and websites to learn a wide range of language patterns and concepts.

    Supervised and Unsupervised Learning: Some LLMs learn through supervised learning, meaning they learn from datasets that have been labeled or corrected by humans. Others use unsupervised learning, meaning they infer patterns and rules from raw data without human annotation.

    Fine-Tuning: After the initial training, LLMs can be fine-tuned for specific tasks, like legal document analysis or medical diagnosis, by training them further on specialized data.

    Applications of LLMs

    Writing Assistance: Grammarly or the autocomplete in your email are powered by LLMs. They predict what you’re trying to say and help you say it better.

    Chatbots: If you've ever chatted with Bito and noticed that it sounds almost like a real person, that's because it is powered by several state-of-the-art Large Language Models.

    Translation Services: Services like Google Translate use LLMs to convert text from one language to another, learning from vast amounts of bilingual text to improve their accuracy.

    The Magic Behind the Scenes

    Neural Networks: The core technology behind LLMs is artificial neural networks, particularly a type called Transformer models. These mimic some aspects of human brain function and are particularly good at handling sequential data like text.

    Training Hardware: Training LLMs requires significant computational power, often involving hundreds of GPUs or specialized TPUs that work in tandem for weeks or months.

    Continuous Learning: LLMs don’t stop learning after their initial training. They can continue to learn from new data, improving their performance over time.

    Examples of Large Language Models

    GPT Series by OpenAI

    The GPT series by OpenAI has been a trailblazer in the field of LLMs. Each version of the Generative Pre-trained Transformer has been more powerful than the last, with GPT-4o representing a staggering leap forward. Boasting over 200 billion parameters, this model is not just about size; it’s about the nuanced understanding and generation of human-like text. GPT-4o can craft essays that are indistinguishable from those written by humans, compose complex poetry, and even generate functional computer code across several languages, which is a testament to its versatility.

    GPT-4o's influence extends across industries. For instance, it can simulate conversations, create educational content, and even assist programmers by converting natural language descriptions into code snippets. Its advanced capabilities are being integrated into various software applications and tools, enhancing productivity and sparking creative new approaches to problem-solving.

    BERT by Google

    BERT stands for Bidirectional Encoder Representations from Transformers. It's a complicated name, but really, it's just Google's method for making search engines smarter. Unlike earlier models, BERT examines the context of a word in both directions (left and right of the word) within a sentence, leading to a far more nuanced interpretation of the query. This ability means that BERT can grasp the full intent behind your searches, so the results you get are closer to what you actually need.

    Since its integration into Google's search engine, BERT has significantly improved the relevance of results for millions of queries every day. For users, this often translates to finding answers more quickly and accurately, sometimes in subtle ways that may go unnoticed but are nonetheless powerful. Beyond search, BERT is also revolutionizing natural language processing tasks such as language translation, question answering, and text summarization.

    In summary, both the GPT series and BERT are not just steps but giant leaps forward in our ability to interface with machines in a more natural, intuitive way. They are redefining what's possible in the realm of AI and continuing to pave the way for smarter, more responsive technology.

    Ethical Considerations and Challenges

    Bias in AI: Since LLMs learn from existing data, they can perpetuate and amplify biases present in that data. It’s an ongoing challenge to ensure that LLMs are fair and unbiased.

    Privacy: Training LLMs on personal data raises privacy concerns. Ensuring data used is anonymized and secure is paramount.

    Environmental Impact: The energy consumption of training and running LLMs is significant. Researchers are working on more efficient models to mitigate this.

    The Future of LLMs

    Evolving Intelligence: LLMs are getting more sophisticated, with future models expected to handle more complex tasks and exhibit greater understanding of human language.

    Interdisciplinary Integration: LLMs are beginning to intersect with other fields, such as bioinformatics and climatology, providing unique insights and accelerating research.

    Democratization of AI: As LLMs become more user-friendly, their use is expanding beyond tech companies to schools, small businesses, and individual creators.

    Conclusion

    Large Language Models are transforming how we interact with machines, making them more human-like than ever. They're a blend of colossal data, computing power, and intelligent algorithms, pushing the boundaries of what machines can understand and accomplish. As they evolve, LLMs will continue to shape our digital landscape in unpredictable and exciting ways. Just remember, the next time you type out a sentence and your phone suggests the end of it, there’s a little bit of LLM magic at work.

    When to Use CLI and When to Use webhooks service

    Selecting the appropriate mode for code review with the AI Code Review Agent depends largely on the nature and frequency of your code review needs.

    CLI: Ideal for Specific, One-Time Reviews

    CLI mode is best suited for scenarios requiring immediate, one-time code reviews. It's particularly effective for:

    • Conducting quick assessments of specific pull requests.

    • Performing periodic, scheduled code analyses.

    • Reviewing code in environments with limited or no continuous integration support.

    • Integrating with batch processing scripts for ad-hoc analysis.

    • Using in educational settings to demonstrate code review practices.

    • Experimenting with different code review configurations.

    • Reviewing code on local setups or for personal projects.

    • Performing a final check before pushing code to a repository.

    CLI mode stands out for its simplicity and is perfect for standalone tasks where a single, direct execution of the code review process is all that's needed.
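    The batch-processing use case above can be sketched as a small shell wrapper. Note that the PR URLs and the commented-out agent invocation below are illustrative placeholders, not Bito's actual CLI syntax; consult the CLI documentation for the real command.

    ```shell
    #!/bin/sh
    # Minimal sketch of ad-hoc batch reviews. The PR list is an example;
    # the agent invocation is left as a placeholder comment.
    printf '%s\n' \
      'https://github.com/acme/app/pull/42' \
      'https://github.com/acme/app/pull/43' > prs.txt

    while read -r pr_url; do
      echo "Queueing review for $pr_url"
      # <invoke the AI Code Review Agent CLI for "$pr_url" here>
    done < prs.txt
    ```

    A wrapper like this can also be run from cron to cover the periodic, scheduled analyses mentioned above.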

    Webhooks service: For Continuous, Automated Reviews

    Webhooks service, on the other hand, is the go-to choice for continuous code review processes. It excels in:

    • Continuously monitoring all pull requests in a repository.

    • Providing instant feedback in collaborative projects.

    • Seamlessly integrating with CI/CD pipelines for automated reviews.

    • Performing automated code quality checks in team environments.

    • Conducting real-time security scans on new pull requests.

    • Ensuring adherence to coding standards in every pull request.

    • Streamlining the code review process in large-scale projects.

    • Maintaining consistency in code review across multiple projects.

    • Enhancing workflows in remote or distributed development teams.

    • Offering prompt feedback in agile development settings.

    Webhooks service is indispensable in active development environments where consistent monitoring and immediate feedback are critical. It automates the code review process, integrating seamlessly into the workflow and eliminating the need for manual initiation of code reviews.

    Prompt engineering is a discipline in itself, evolving as an art and science within AI communities. Crafting effective prompts is akin to programming without code; it's about phrasing and framing your request to the AI in a way that maximizes the quality and precision of its output.

    Good prompt engineering can involve:

    • Being specific: Clearly defining what you want the AI to do.

    • Setting the tone: Informing the AI of the style or mood of the content you expect.

    • Contextualizing: Providing background information to guide the AI's responses.

    Example: Instead of saying, "Tell me about France," a well-engineered prompt would be, "Write a short travel guide for first-time visitors to France, highlighting top attractions, cultural etiquette, and local cuisine."

    The Role of Prompts in Generative AI

    Generative AI, which includes everything from text to image generation models, relies heavily on prompts to determine the direction of content creation. Prompts for generative AI act as a blueprint from which the model can conjure up entirely new pieces of content – whether that's an article, a poem, a piece of art, or a musical composition.

    Prompts tell the AI not just what to create, but can also suggest how to create it, influencing creativity, tone, structure, and detail. As generative AI grows more sophisticated, the potential for complex and nuanced prompts increases, allowing for more customized and high-fidelity outputs.

    Example: Prompting an AI with "Create a poem in the style of Edgar Allan Poe about the sea" instructs the model to adopt a specific literary voice and thematic focus.

    Challenges and Considerations

    Crafting the perfect prompt isn't always straightforward. One of the challenges lies in the AI's interpretation of the prompt. Ambiguity can lead to unexpected or unwanted results, while overly restrictive prompts may stifle the AI's creative capabilities.

    Moreover, ethical considerations arise when prompts are designed to elicit biased or harmful content. The AI's response is contingent upon its training data, and if that data includes prejudiced or false information, the output may reflect those biases. Responsible prompt engineering thus also involves an awareness of potential harm and the implementation of safeguards against it.

    Example: To avoid bias in AI-generated news summaries, prompts should be engineered to require neutrality and fact-checking.

    Conclusion

    Prompts are the simple commands or questions we use to kickstart a conversation with AI, guiding it to understand and generate the responses or content we seek. They're like the steering wheel for the AI's capabilities, crucial for navigating the vast landscape of information and creativity the AI models offer.

    As we continue to interact with and shape AI technology, mastering the use of prompts becomes our way of ensuring that the conversation flows in the right direction. Simply put, the better we become at asking, the better AI gets at answering.

    So, the next time you interact with a language model, remember that the quality of the output is often a direct reflection of your input - your prompt is the key.

    Guide for GitHub

    Guide for GitHub (Self-Managed)

    Guide for GitLab

    Guide for GitLab (Self-Managed)

    Guide for Bitbucket

    Guide for Bitbucket (Self-Managed)

    Once the above is done, run brew upgrade bito-cli to update Bito CLI to the latest version.

    uninstall guide from here
    Large Language Models (LLMs)
    AI Stack
    Bito’s AI Stack documentation

    Embeddings
    Vector databases
    Indexing
    Generative AI
    Large Language Models (LLM)
    LLM tokens
    LLM parameters
    Retrieval Augmented Generation (RAG)
    Prompts
    Prompt engineering

    Learn how to sign up or log in to Bito

    Learn how to create, join, or change workspace

    Invite coworkers and manage their workspace membership

    Personalize Bito to speak your language

    Learn about different access levels and permissions

    An alternative to standard email and OTP authentication

    Creating a Bito account
    Workspace
    Managing workspace members
    Setting AI output language
    Managing user access levels
    Access key

    Share on X (Twitter):

    1. Click on X (Twitter) from the menu, and a dialogue window will appear, asking whether you want to open the external site.

    2. Simply click "Open" to proceed.

    3. You will be redirected to the X (Twitter) website, with a pre-written tweet containing a link to your Chat Session ready to be published.

    4. Click the "Post" button to send the tweet.

  • Share Through Email:

    1. Click on Email from the menu, and you will be redirected to your email application.

    2. Select your email account if needed.

    3. The email will be pre-filled with all the necessary information, including the link to your Chat Session.

    4. Add the receiver(s) of this email using the "To" input field.

    5. Click the "Send" button to send the email.

  • Share the Link:

    1. Click on Link from the menu.

    2. A confirmation popup will appear. Click Share session to generate a unique URL for your chat session, which will automatically be copied to your clipboard for easy sharing.

    3. Feel free to share this link with anyone you'd like to.

  • Theme adapted from “Noctis Lux”
    Theme adapted from “Solarized Light”
    Theme adapted from “Tomorrow Night Blue”
    Theme adapted from “barn-cat”
    Code Review Agents
    how to create a new workspace or join an existing one
    Privacy & Security practices
    AI that Understands Your Code
    How does Bito’s “AI that understands your code” work?
    Get a 14-day FREE trial of Bito's AI Code Review Agent.

    Getting Started

    Key Features

    Supported Programming Languages and Tools

    Agent Configuration: bito-cra.properties File

    FAQs

    Access Key
    how to create an Access Key
    Team Plan
    Requests Usage
    hard and soft limits
    hard limit
    bito:
     access_key: ""
     email: [email protected]
     
     preferred_ai_model: ADVANCED
    settings:
     auto_update: true
     max_context_entries: 20
    Exit Bito CLI: To quit/exit from Bito CLI, type quit and press Ctrl+D.
  • Terminate: Press Ctrl+C to terminate Bito CLI.

  • install
    configure
    available commands for Bito CLI

    System Requirements

    • CPU Cores: 4

    • RAM: 8 GB

    • Hard Disk Drive: 80 GB


    Supported Operating Systems

    • Windows

    • Linux

    • macOS


    OS Prerequisites

    Operating System
    Installation Steps

    Linux

    You will need:

    1. Bash (minimum version 4.x)

      • For Debian and Ubuntu systems

        sudo apt-get install bash

    macOS

    You will need:

    1. Bash (minimum version 4.x)

      brew install bash

    2. Docker (minimum version 20.x)

    Windows

    You will need:

    1. PowerShell (minimum version 5.x)

      • Note: In PowerShell version 7.x, run Set-ExecutionPolicy Unrestricted
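    On macOS or Linux, you can quickly confirm the Bash and Docker prerequisites from a terminal. This check is a convenience sketch, not part of the official setup steps; it degrades gracefully if Docker is not installed.

    ```shell
    # Print the installed Bash version; expect 4.x or newer.
    bash --version | head -n 1

    # Print the Docker version if Docker is installed; expect 20.x or newer.
    if command -v docker >/dev/null 2>&1; then
      docker --version
    else
      echo "docker not found"
    fi
    ```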


    Required Access Tokens

    • Bito Access Key: Obtain your Bito Access Key. View Guide

    • GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently. View Guide

    GitHub Personal Access Token (Classic)
    • GitLab Personal Access Token: For GitLab PR code reviews, a token with API access is required. View Guide

    GitLab Personal Access Token
    • Snyk API Token (Auth Token): For Snyk vulnerability reports, obtain a Snyk API Token. View Guide


    Method 1: Using Bito AI chat

    The easiest way to start your trial is through natural interaction:

    1. Type a message in the Bito chat box and send it.

    2. Look for the popup that appears after sending your message.

    3. Click Try for free in the popup notification.

    4. Complete signup in the browser window that opens.

    5. Select Start Trial to activate your free trial.

    Method 2: Click upgrade button

    For a direct approach to upgrading:

    1. Click the UPGRADE button at the top of the chat window.

    2. Complete signup in the browser window that opens.

    3. Select Start Trial to activate your free trial.

    Method 3: Quick trial activation

    The fastest way to start your free trial:

    1. Hover over Include my code (located above the Bito chat box).

    2. In the popup, select Click for 14 day free trial to immediately activate your trial.

    💡 Pro tip: Method 3 is the quickest option as it starts your trial instantly without opening any external windows.

    Available features in free trial

    During your free trial, you'll have access to all the features of the Bito Team Plan, as listed on our Pricing page, including:

    • AI Code Reviews in Git

    • AI Code Reviews in IDE

    • AI Chat

    • AI that understands your code

    • and more.

    Visit our Pricing page

    Need Help?

    If you encounter any issues while starting your free trial:

    • Check your internet connection.

    • Ensure your Bito extension is up to date.

    • Contact [email protected] if the trial doesn't activate properly.

    Next steps

    Once your free trial is active, explore all the premium features available to you. Consider upgrading to a paid plan before your trial expires to continue enjoying advanced functionality.

    1. Enable comment resolution rules in your Git provider

    GitHub:

    • Go to your repository → Settings → Branches

    • Create or edit a branch protection rule (e.g., for main)

    • Enable:

      • ✅ Require a pull request before merging

      • ✅ Require conversation resolution before merging
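    If you prefer to script this, the same two GitHub settings can also be applied through GitHub's branch-protection REST API. This is a sketch (not from the Bito setup steps) using the gh CLI; OWNER/REPO and the branch name main are placeholders, and the actual API call is left commented out.

    ```shell
    # Branch-protection payload: require a PR and conversation resolution.
    cat > protection.json <<'EOF'
    {
      "required_status_checks": null,
      "enforce_admins": false,
      "required_pull_request_reviews": {},
      "restrictions": null,
      "required_conversation_resolution": true
    }
    EOF

    # Apply it (uncomment and substitute OWNER/REPO):
    # gh api -X PUT "repos/OWNER/REPO/branches/main/protection" --input protection.json
    ```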

    GitLab:

    • Go to your project → Settings → Merge requests

    • Under Merge checks, enable:

      • ✅ All threads must be resolved

    • Click Save changes button.
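    GitLab exposes the same merge check as a project attribute (only_allow_merge_if_all_discussions_are_resolved) in its REST API, so it can be scripted as well. This is a sketch rather than an official Bito step; the project ID and token are placeholders, and the API call is commented out.

    ```shell
    # Build the API URL for enabling "All threads must be resolved".
    base="https://gitlab.com/api/v4/projects"
    project_id="12345"   # placeholder: your numeric project ID
    url="${base}/${project_id}?only_allow_merge_if_all_discussions_are_resolved=true"
    echo "$url"

    # Apply it (uncomment and supply a token with API scope):
    # curl --request PUT --header "PRIVATE-TOKEN: <your_token>" "$url"
    ```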

    Bitbucket:

    • Go to your repository → Repository settings → Branch restrictions

    • Click Add a branch restriction button.

    • Under Select branches, define the target branches where this restriction should apply. Pull requests merging into these branches will be blocked until all "Request changes" comments are resolved. You can choose one of two options:

      • By branch name or pattern: Enter a specific branch name (e.g., main) or use a wildcard pattern to cover multiple branches. For example, using an asterisk * applies the restriction across all branches, while release/* applies it to every release branch.

      • By branch type: Select a branch type (e.g., development, release) from the dropdown menu.

    • Switch to Merge settings tab.

    • Under Merge checks, enable:

      • ✅ No changes are requested

    • Under Merge conditions, enable:

      • ✅ Prevent a merge with unresolved merge checks

      • Note: This setting is only available if your organization uses Bitbucket Cloud Premium. It will block anyone from merging the PR if there are unresolved "request change" comments. On Standard Bitbucket Cloud, this option is unavailable; users will see a warning if they attempt to merge with unresolved "request change" comments, but the merge will still be allowed.

    • Click Save button.

    Note: Request change comments usually have to be resolved by the person who posted them. Since here these comments are posted by Bito, the user must comment /resolve in the pull request to resolve them.

    2. Turn on “Request changes comments” in Bito

    • Go to Repositories in the Bito dashboard.

    • Click on Settings for your desired AI Code Review Agent instance.

    • Enable the toggle: “Request changes comments”

    • Save changes

    When this is on, Bito will flag actionable AI feedback as formal review comments requiring resolution. Informational or minor suggestions will remain as regular comments.

    3. What happens in a pull request

    • Bito runs an AI code review on your pull request or merge request.

    • Actionable issues are posted as change requests.

    • Your Git provider treats these comments according to your configured merge rules.

    • If comment resolution is required, the merge is blocked until the flagged issues are resolved.

    Example workflow

    1. Developer opens a pull request or merge request.

    2. Bito reviews the code and posts a “request change” comment on a problematic line.

    3. The Git provider blocks the merge due to unresolved comments or threads.

    4. Developer fixes the issue and marks the thread as resolved.

    5. Merge becomes possible once all conditions are met.

    Why use this feature?

    • Enforces follow-up on critical AI-detected issues.

    • Works natively with GitHub, GitLab, and Bitbucket workflows.

    • Ensures only reviewed and clean code gets merged.

    • Helps maintain consistent code quality at scale.

    To start a conversation, type your question directly as a reply to the Agent’s code review comment.

    The AI Code Review Agent will analyze your comment and determine if it’s a valid and relevant question.

    • If the agent decides it’s a valid question, it will respond with helpful insights.

    • If the agent determines it’s unclear, off-topic, or not related to its feedback, it will not respond.

    To help the agent recognize your question faster, you can also tag your comment with @bitoagent or @askbito. Tagging informs the Agent that your message is intended as a question. However, tagging does not guarantee a reply. The agent will still analyze your comment and decide whether it is a valid question worth responding to.

    Bito usually responds within about 10 seconds.

    • On GitHub and Bitbucket, you may need to manually refresh the page to see the response.

    • On GitLab, updates happen automatically.

    Note: The AI Code Review Agent will only respond to questions posted as a reply to its own comments. It will not reply to questions added on threads that it didn’t start.

    What you can ask about

    When chatting with the AI Code Review Agent, you can ask questions to better understand or improve the code feedback it provided. Here are examples of what you can ask:

    • Clarifications about a highlighted issue Ask the AI to explain why it flagged a certain line of code or why something might cause a problem.

    • Request for alternative solutions Request different ways to fix or improve the code beyond what was originally suggested.

    • Deeper explanations If you want to understand the technical reasoning behind a suggestion (e.g., security concerns, performance impacts, best practices), you can ask for more detailed explanations.

    • Request for examples Ask the AI to provide an example snippet showing the corrected or improved code.

    • Trade-off discussions Ask the AI about pros and cons of different approaches it may have suggested (e.g., performance vs. readability).

    • Best practices guidance Request advice on best practices related to the specific code snippet — such as naming conventions, error handling, optimization tips, or design patterns.

    • Language-specific advice If you’re working in a particular language (e.g., JavaScript, Python, Java), you can ask for language-specific guidance related to the comment.

    • Request for more context If the suggestion feels too "short" or "surface level," you can ask the AI to explain more about the broader coding or architectural concept behind its feedback.

    • Security and safety questions If a suggestion touches on security (like input validation, authentication, or encryption), you can ask for further security-related advice.

    • Testing and validation Ask the AI if it recommends writing any tests based on its code suggestions and what those tests might look like.

    Tip: Feel free to ask your question in your preferred language! Bito supports over 20 languages, including English, Hindi, Chinese, and Spanish.

    What you cannot ask about

    The AI can only answer questions related to its own code review comments.

    • You cannot ask general questions about the repository or unrelated topics.

    • You cannot start a new thread independently — your question must be a reply to a comment made by Bito’s AI Code Review Agent.

    If your comment is not linked to a Bito review comment, the AI will not respond.

    Example: in my code explain the file apiUser.js

    Additional keywords for various languages are listed on the Available Keywords page.

    For now, this feature is only available on our Team Plan, which costs $15 per user per month. We plan to release it for our Free Plan soon, but it will be limited to repos of 10 MB indexable size.

    Recent breakthroughs in Generative AI and Large Language Models (LLMs) have helped make many AI Coding Assistant tools available, including Bito, to help you develop software faster.

    The major issue with these AI assistants, though, is that they have no idea about your entire codebase. Some tools take context from currently opened files in your IDE, while others enable you to manually enter code snippets in a chat-like interface and then ask questions about them.

    But with Bito’s AI that understands your entire repository, this is a whole new capability. For example, what if you could ask questions like:

    • how can I add a button to mute and unmute the song to my code in my music player? By default, set this button to unmute. Also, use the same design as existing buttons in UI.

    • In my code list all the files and code changes needed to add column desc in table raw_data in dailyReport DB.

    • In my code suggest code refactoring for api.py and mention all other files that need to be updated accordingly

    • Please write the frontend and backend code to take a user’s credentials, and authenticate the user. Use the authentication service in my code

    This will definitely improve the way you build software.

  • Bulgarian (български)

  • Chinese (Simplified) (简体中文)

  • Chinese (Traditional) (繁體中文)

  • Czech (čeština)

  • French (français)

  • German (Deutsch)

  • Hungarian (magyar)

  • Italian (italiano)

  • Japanese (日本語)

  • Korean (한국어)

  • Polish (polski)

  • Portuguese (português)

  • Russian (русский)

  • Spanish (español)

  • Turkish (Türkçe)

  • Vietnamese (Tiếng Việt)

  • Dutch (Nederlands)

  • Hebrew (עִברִית)

  • Arabic (عربي)

  • Malay (Melayu)

  • Hindi (हिंदी)

  • Using the Language Support Feature

    Once you have selected your preferred language, Bito will communicate with you in your selected language. Take full advantage of this feature by:

    • Asking questions or giving commands to Bito in your selected language

    • Receiving responses and outputs from Bito in the language you've selected

    Note: All responses from Bito will appear in the selected language, regardless of the input language

    Enjoy the convenience of conversing with Bito in your native language and take your coding experience to a new level!

    Creating a Bito account
    Settings > Profile settings
    50 AI Chat requests per user per day in the Bito IDE extension, while Free Trial users are limited to 20 AI Chat requests per day.

    Bito makes it super easy to use the answer generated by AI, and take a number of actions.

    Copy Answer

    Copy the answer to the clipboard.

    Regenerate Answer

    AI may not give the best answer on the first attempt. You can ask Bito AI to regenerate the answer by clicking the "Regenerate" button next to the answer.

    Rate Response

    Vote a response "Up" or "Down". This feedback helps Bito improve prompt handling.

    Modify Last Prompt

    Many of these commands can be executed with keyboard shortcuts documented here: Keyboard shortcuts

    Use cases and examples

    Bito CLI (Command Line Interface)

    Learn how to setup Bito CLI on your device (Mac, Linux, and Windows)

    Manage Bito CLI settings

    Learn how to work with Bito CLI (including examples)

    Learn about all the powerful commands to use Bito CLI

    Answers to popular questions

    Overview
    Install or uninstall
    Configuration
    How to use?
    Available commands
    FAQs
    available commands for Bito CLI
    Team Plan
    intelligent AI automations
    how to use Bito CLI

    Available commands

    Invoke the AI Code Review Agent manually or within a workflow.

    The AI Code Review Agent offers a suite of commands tailored to developers' needs. You can manually trigger a code review by entering any of these commands in the comment box below a pull/merge request on GitHub, GitLab, or Bitbucket and submitting the comment. Alternatively, if you are using the self-hosted version, you can configure these commands in the bito-cra.properties file for automated code reviews.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull/merge request.

    /review

    This command provides a broad overview of your code changes, offering suggestions for improvement across various aspects, but without diving deep into secure coding, performance optimization, or scalability. This makes it ideal for catching general code quality issues that might not necessarily be critical blockers but can enhance readability, maintainability, and overall code health.

    Think of it as a first-pass review to identify potential areas for improvement before delving into more specialized analyses.

    Review Scope

    Five specialized commands are available to perform detailed analyses on specific aspects of your code. Details for each command are given below.

    1. /review security

    2. /review performance

    3. /review scalability

    4. /review codeorg

    5. /review codeoptimize

    You can provide comma-separated values to perform multiple types of code analysis simultaneously.

    Example: /review performance,security,codeoptimize

    Combining general feedback with specialized review scopes

    If you'd like to receive general code quality feedback alongside specialized analyses, include the general keyword in your review command.

    For example, to receive feedback on general code quality, performance, and security, use:

    • Example: /review general,performance,security

    This ensures a holistic review encompassing both general code quality and specific areas of concern.

    /review security

    This command performs an in-depth analysis of your code to identify vulnerabilities that could allow attackers to steal data, gain unauthorized access, or disrupt your application. This includes checking for weaknesses in input validation, output encoding, authentication, authorization, and session management. It also looks for proper encryption of sensitive data, secure coding practices, and potential misconfigurations that could expose your system.

    /review performance

    This command evaluates the current performance of the code by pinpointing slow or resource-intensive areas and identifying potential bottlenecks. It helps developers understand where the code may be underperforming against expected benchmarks or standards. It is particularly useful for identifying slow processes that could benefit from further investigation and refinement.

    This includes checking how well your code accesses data and manages tasks like database interactions and memory usage.

    /review scalability

    This command analyzes your code to identify potential roadblocks to handling increased usage or data. It checks how well the codebase supports horizontal scaling and whether it is compatible with load balancing strategies. It also ensures the code can handle concurrent requests efficiently and avoids bottlenecks from single points of failure. The command further examines error handling and retry mechanisms to promote system resilience under pressure.

    /review codeorg

    This command scans your code for readability, maintainability, and overall clarity. This includes checking for consistent formatting, clear comments, well-defined functions, and efficient use of data structures. It also looks for opportunities to reduce code duplication, improve error handling, and ensure the code is written for future growth and maintainability.

    /review codeoptimize

    This command helps identify specific parts of the code that can be made more efficient through optimization techniques. It suggests refactoring opportunities, algorithmic improvements, and areas where resource usage can be minimized. This command is essential for enhancing the overall efficiency of the code, making it faster and less resource-heavy.

    Control code review workflow

    These commands allow you to manage the AI Code Review Agent's behavior directly within your pull requests across GitHub, GitLab, and Bitbucket.

    /pause

    Pauses automatic AI reviews on the current pull request.

    Use case: Useful when significant changes are underway, and you want to prevent the AI from reviewing incomplete code.

    Example: Add a comment with /pause to the pull request.

    /resume

    Resumes the automatic AI reviews that were previously paused on the pull request.

    Use case: Once your code changes are ready for review, use this command to re-enable the AI's automatic analysis.

    Example: Add a comment with /resume to the pull request.

    /resolve

    Marks all Bito-posted review comments as resolved.

    Use case: After addressing the issues highlighted by the AI, use this command to clean up the comment threads.

    Example: Add a comment with /resolve to the pull request.

    Note: The /resolve command is currently supported in GitLab and Bitbucket.

    /abort

    Cancels all in-progress AI code reviews on the current pull request.

    Use case: If an AI review is no longer needed or was initiated by mistake, this command stops the process.

    Example: Add a comment with /abort to the pull request.

    Display Code Review in a Single Post

    By default, the /review command generates inline comments, placing code suggestions directly beneath the corresponding lines in each file for clearer guidance on improvements. If you prefer a single consolidated code review instead of separate inline comments, use the #inline_comment parameter and set its value to False.

    Example: /review #inline_comment=False

    Example: /review scalability #inline_comment=False

    Note: The /review command defaults to #inline_comment=True, so you can omit this parameter when its value is True.

    Templates

    Instantly improve code performance, security, and readability with AI suggestions.

    Templates help you improve your code quality instantly with AI-powered analysis. Get automated suggestions for performance optimization, security fixes, style improvements, and code cleanup without leaving your editor. Each template provides actionable feedback and ready-to-use code improvements that you can review and apply with a single click.

    Available templates

    1. Performance Check: Optimize code performance and efficiency

    2. Security Check: Identify and fix security vulnerabilities

    3. Style Check: Apply coding style and formatting standards

    4. Improve Readability: Enhance code clarity and organization

    5. Clean Code: Remove debugging and logging statements

    How to use templates

    Prerequisites

    Select the code you want to analyze in your editor before using any template.

    Method 1: Click Templates button

    1. Select code in your editor

    2. Click the Templates button at the bottom of the Bito extension panel

    3. Choose the desired template from the dropdown menu

    Quick navigation: Use arrow keys, Tab, or Shift+Tab to navigate the template menu

    Method 2: Open context menu

    1. Select code in your editor

    2. Right-click in the editor window

    3. Hover over "Bito AI" in the context menu

    4. Select the desired template from the submenu

    Method 3: Using slash / command in Bito chat box

    1. Select code in your editor

    2. Type / at the start of the Bito chat box

    3. Choose the desired template from the dropdown menu

    4. Optionally, type some text after the slash / to filter templates by name

    Method 4: Command Palette (VS Code)

    1. Select code in your editor

    2. Go to View → Command Palette (or press Ctrl+Shift+P / Cmd+Shift+P)

    3. Type "bito" to see available templates

    4. Select the desired template from the list

    Applying code suggestions

    When templates provide code improvements, you'll see an Apply button above the suggested code snippet.

    1. Click the Apply button to open the diff view

    2. Review the changes highlighted in the diff:

      • Red lines show code to be removed

      • Green lines show code to be added

    3. Choose your action:

    Tips

    • Select meaningful code blocks for better analysis results

    • Templates work best with complete functions or logical code segments

    • Review suggested changes before applying them to your codebase

    • Verify that the changes don't break existing functionality

    Install or uninstall

    Learn how to setup Bito CLI on your device (Mac, Linux, and Windows)

    Installing Bito CLI (Recommended)

    We recommend you use the following methods to install Bito CLI.

    Mac and Linux

    sudo curl https://alpha.bito.ai/downloads/cli/install.sh -fsSL | bash

    Note: curl will always download the latest version.

    Archlinux

    Arch and Arch-based distro users can install it from the AUR:

    yay -S bito-cli

    or

    paru -S bito-cli

    Note for Mac users: You might face verification issues, for which you will have to manually follow the steps from (we are working on fixing this as soon as possible).

    Windows

    • In the , open the folder that has the latest version number.

    • From here, download the MSI file called Bito CLI.exe and then install Bito CLI using this installer.

    • On Windows 11, you might get a notification related to publisher verification. Click on "Show more" or "More info" and then click "Run anyway" (we are working on fixing this as soon as possible).

    Once the installation is complete, start a new command prompt and run bito command to get started.

    Installing with Manual Binary Download (Not Recommended)

    While it's not recommended, you can download the Bito CLI binary from our repository, and install it manually. The binary is available for Windows, Linux, and Mac OS (x86 and ARM architecture).

    Mac and Linux

    1. In the Bito CLI GitHub repo, open the folder that has the latest version number.

    2. From here, download the Bito CLI binary specific to your OS platform.

    3. Open the terminal, go to the location where you downloaded the binary, and rename the downloaded file to bito (in the command below, use the bito-* filename you downloaded):

      mv bito-<os>-<arch> bito

    4. Make the file executable using the following command: chmod +x ./bito

    5. Copy the binary to /usr/local/bin using the following command: sudo cp ./bito /usr/local/bin

    6. Set the PATH variable so that Bito CLI is always accessible: PATH=$PATH:/usr/local/bin

    7. Run Bito CLI with the bito command. If the PATH variable is not set, you will need to run the command with the complete or relative path to the bito executable binary.

    Windows

    1. In the Bito CLI GitHub repo, open the folder that has the latest version number.

    2. From here, download the Bito CLI binary for Windows called bito.exe.

    3. Set the PATH variable so that Bito CLI is always accessible: edit the "Path" environment variable and add the path of the folder where Bito CLI is installed on your machine. Alternatively, always move to the directory containing Bito CLI prior to running it.

    Uninstalling Bito CLI

    Mac and Linux

    sudo curl https://alpha.bito.ai/downloads/cli/uninstall.sh -fsSL | bash

    Note: This will completely uninstall Bito CLI and all of its components.

    Windows

    For Windows, you can uninstall Bito CLI just like you uninstall any other software from the control panel. You can follow these steps:

    1. Click on the Windows Start button and type "control panel" in the search box, and then open the Control Panel app.

    2. Under the "Programs" option, click on "Uninstall a program".

    3. Find "Bito CLI" in the list of installed programs and click on it.

    4. Click on the "Uninstall" button (given at the top) to start the uninstallation process.

    5. Follow the instructions provided by the uninstall wizard to complete the uninstallation process.

    After completing these steps, Bito CLI should be completely removed from your Windows machine.

    Agent settings

    Learn how to customize the AI Code Review Agent.

    Bito's AI Code Review Agent supports different configuration methods depending on the deployment environment:

    1. Bito-hosted – The agent runs on Bito's infrastructure and is configured through the Bito web UI.

    2. Self-hosted – The agent runs on user-managed infrastructure and is configured by editing the bito-cra.properties file.

    The sections below provide configuration guidance for each setup.

    Bito-hosted agent configuration

    In the Bito-hosted AI Code Review Agent, you can configure the agent through the Bito web dashboard.

    To customize an existing agent, open the list of agent instances in the dashboard and click the Settings button next to the agent instance to be modified.

    The agent settings page allows configuration of options such as:

    • Agent name – Define a unique name for easy identification.

    • Review options – Choose the review mode (Essential or Comprehensive), set feedback language, and enable features like auto-review, incremental review, summaries, and change walkthroughs.

    • Custom guidelines – Create and apply custom code review rules tailored to your team’s standards directly from the dashboard.

    • Filters – Exclude specific files, folders, or branches from review to focus on relevant code.

    • Tools – Enable additional checks, such as secret scanning and static analysis.

    • Chat – Configure how the agent responds to follow-up questions in pull request comments and manage automatic replies.

    These settings tailor the agent’s behavior to match team workflows and project needs.

    Self-hosted agent configuration

    In self-hosted deployments, configuration is managed by editing the . This file defines how the agent operates and connects to required services.

    Key configuration options include:

    • Mode

      • mode = cli: Processes a single pull request using a manual URL input.

      • mode = server: Runs as a webhook service and listens for incoming events from Git platforms.

    Each property is documented in detail on the properties reference page.

    Installing on JetBrains IDEs

    It takes less than 2 minutes

    Get up and running with Bito in just a few steps! Bito seamlessly integrates with popular JetBrains IDEs such as IntelliJ IDEA, PyCharm, and WebStorm, providing powerful AI-driven code reviews directly within your editor. Click the button below to quickly install the Bito extension and start optimizing your development workflow with context-aware AI Chat, and more.

    Video guide

    Watch the video below to learn how to download the Bito extension on JetBrains IDEs.

    Step-by-step instructions

    1. In JetBrains IDEs such as IntelliJ, go to File -> Settings to open the Settings dialog, and click Plugins -> Marketplace tab in the settings dialog. Search for Bito.

    2. Click "Install" to install the Bito extension. We recommend you restart the IDE after the installation is complete.

    Starting with Bito version 1.3.4, the extension is only supported on JetBrains versions 2021.2.4 and higher. JetBrains version 2021.1.3 is no longer supported from Bito version 1.3.4 onward.

    3. The Bito panel will appear on the right-hand sidebar. Click it to complete the setup process. You will either need to create a new workspace, if you are the first in your company to install Bito, or join an existing workspace created by a co-worker. See the Workspace section for details.

    The menu to invoke the settings dialog may differ across IDEs of the JetBrains family. The screenshots highlighted above are for IntelliJ IDEA. You can also install the Bito extension directly from the JetBrains Marketplace.

    FAQs

    Answers to popular questions about the AI Code Review Agent.

    How do I whitelist Bito's gateway IP address for my on-premise Git platform?

    To ensure the AI Code Review Agent operates smoothly with your GitHub (Self-Managed) or GitLab (Self-Managed), please whitelist all of Bito's gateway IP addresses in your firewall to allow incoming traffic from Bito. This will enable Bito to access your self-hosted repository.

    List of IP addresses to whitelist:

    • 18.188.201.104

    • 3.23.173.30

    • 18.216.64.170

    The agent response can come from any of these IPs.

    How can I prevent the AI Code Review Agent from stopping due to token expiry?

    You should set a longer expiration period for your GitHub Personal Access Token (Classic) or GitLab Personal Access Token. We recommend setting the expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.

    Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.

    For more details on how to create tokens, follow these guides:

    • GitHub Personal Access Token (Classic):

    • GitLab Personal Access Token:

    What is "Estimated effort to review" in code review output?

    This is an estimate, on a scale of 1-5 (inclusive), of the time and effort required to review this Pull Request (PR) by an experienced and knowledgeable developer. A score of 1 means a short and easy review, while a score of 5 means a long and hard review. It takes into account the size, complexity, quality, and the needed changes of the PR code diff. The score is produced by AI.

    Why does Bito need access to my Git account?

    Bito requires certain permissions to analyze pull requests and provide AI-powered code reviews. It never stores your code and only accesses the necessary data to deliver review insights.

    What permissions does Bito need?

    Bito requires:

    1. Read access to code and metadata: To analyze PRs and suggest improvements

    2. Read and write access to issues and pull requests: To post AI-generated review comments

    3. Read access to organization members: To provide better review context

    I don’t have admin permissions. Can I still use Bito?

    If you don’t have admin access, you’ll need your administrator to install Bito on your organization’s Git account. Once installed, you can use it for PR reviews on allowed repositories. GitHub also sends a notification requesting that the organization owner install the app.

    Does Bito store my code?

    No, Bito does not store or train models on your code. It only analyzes pull request data in real-time and provides suggestions directly within the PR.

    Can I choose which repositories Bito has access to?

    Yes, after installation, you can select specific repositories instead of granting access to all. You can also manage repository access later through our web dashboard.

    What happens after I install the Bito App?

    Once installed, you’ll be redirected to Bito, where you can:

    1. Select repositories for AI-powered reviews

    2. Customize review settings to fit your workflow

    3. Open a pull request to start receiving AI-driven suggestions

    Where can I get help if I have issues installing Bito?

    Contact support for any assistance.


    Available commands

    Learn about all the powerful commands to use Bito CLI

    Help

    Run any one of the below commands.

    bito --help

    or

    bito config --help

    Retrieval Augmented Generation (RAG)

    Retrieval Augmented Generation (RAG) is a paradigm-shifting methodology within natural language processing that bridges the divide between information retrieval and language synthesis. By enabling AI systems to draw from an external corpus of data in real-time, RAG models promise a leap towards a more informed and contextually aware generation of text.

    RAG fuses in-depth data retrieval with creative language synthesis in AI. It's like having an incredibly knowledgeable friend who can not only recall factual information but also weave it into a story seamlessly, in real-time.

    The Mechanics of RAG

    To understand RAG, let's break it down:

    Indexing

    Indexing involves breaking down a source code file into smaller chunks and converting these chunks into embeddings that can be stored in a vector database. Bito indexes your entire codebase locally (on your machine) to understand it and provide answers tailored to your code.

    Learn more about Bito's AI that Understands Your Code feature.

    Prompt engineering

    Prompt Engineering is the art and science of crafting inputs (prompts) that guide AI to produce the desired outputs. It's about understanding how to communicate with an AI in a way that leverages its capabilities to the fullest. Think of it as giving directions to a supremely intelligent genie without any misunderstandings.

    In Bito’s backend, we do a lot of prompt engineering to ensure that you always receive accurate outputs.

    Installing on Visual Studio Code

    It takes less than 2 minutes

    Get up and running with Bito in just a few steps! Bito seamlessly integrates with Visual Studio Code, providing powerful AI-driven code reviews directly within your editor. Click the button below to quickly install the Bito extension and start optimizing your development workflow with context-aware AI Chat, and more.

    Video guide

    Watch the video below to learn how to download the Bito extension on VS Code.

    Managing user access levels

    Understanding User Roles in Bito Workspaces

    A Bito Workspace represents your organization. It is the highest level of organization in Bito.

    In a Bito Workspace, different user types play distinct roles in managing and collaborating within the workspace. Here is an overview of the three user types: Owner, Admin, and User. Understanding these roles will help you effectively manage your workspace and optimize team collaboration.

    Owner: The Owner holds the highest level of authority within the workspace

    Admin: Admins have a significant role in managing the workspace alongside the Owner

    User: Users have access to the workspace with limited administrative privileges

    Here's a table summarizing the roles of the different user types in a Bito Workspace:

    Generative AI

    Generative AI has been making waves across various sectors, from art to technology, leaving many people scratching their heads and wondering: WTF is Generative AI? In this guide, we'll unpack the buzzword and provide you with a clear understanding of what Generative AI is, how it works, and why it's becoming increasingly important in our digital world.

    What is Generative AI?

    At its core, Generative AI refers to the subset of artificial intelligence where the systems are designed to generate new content. It’s like giving an artist a canvas, but the artist is an algorithm that can create images, compose music, write text, generate programming source code, and much more.

    Generative AI systems are typically powered by machine learning models that have been trained on vast datasets. They learn patterns, structures, and features from this data and use this understanding to generate new, original creations that are often indistinguishable from content created by humans.

    Workspace

    Learn How to Create, Join, or Change Workspace

    A workspace is a dedicated environment or space where teams can collaborate and use Bito services. After logging into your Bito account, you can either create a new workspace or join an existing one you've been invited to.

    You can use Bito in a single-player mode for all the use cases. However, it works best when your coworkers join the Workspace for collaboration.

  • Retrieval: Before generating any new text, the RAG model retrieves information from a large dataset or database. This could be anything from a simple database of facts to an extensive library of books and articles.

  • Augmented: The retrieved information is then fed into a generative model to "augment" its knowledge. This means the generative model doesn't have to rely solely on what it has been trained on; it can access external data for a more informative output.

  • Generation: Finally, the model generates text using both its pre-trained knowledge and the newly retrieved information, leading to more accurate, detailed, and relevant responses.

    The Components of a RAG Model

    A RAG model typically involves two major components:

    1. Document Retriever: This is a neural network or an algorithm designed to sift through the database and retrieve the most relevant documents based on the query it receives.

    2. Sequence-to-Sequence Model: After retrieval, a Seq2Seq model, often a transformer-based model like T5 or GPT, takes the retrieved documents and the initial query to generate a coherent and relevant piece of text.
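    The retrieve-augment-generate loop described above can be sketched in a few lines of Python. This is a toy illustration, not any particular product's implementation: the word-overlap retriever and the generate function (which would be a real language-model call in practice) are stand-ins, and the corpus and query are invented for the example.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by how many query words they share (a toy retriever)."""
    words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def generate(query, context):
    """Stand-in for a language-model call: just stitch query and context together."""
    return f"Q: {query}\nContext: {' | '.join(context)}"

corpus = [
    "The Eiffel Tower was completed in 1889.",
    "Paris is the capital of France.",
    "Python is a programming language.",
]
query = "When was the Eiffel Tower completed?"
docs = retrieve(query, corpus)   # Retrieval
answer = generate(query, docs)   # Augmentation + Generation
print(answer)
```

    In a production RAG system, the retriever would search an embedding index and generate would call an LLM with the retrieved passages included in its prompt.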

    How to Build a RAG

    Let's imagine we want to build a RAG model that, when given a prompt about a historical figure or event, can generate a detailed and accurate paragraph.

    Step 1: Choose Your Data Source

    First, you need a database from which the model can retrieve information. For historical facts, this could be a curated dataset like Wikipedia articles, historical texts, or a database of historical records.

    Step 2: Index Your Data Source

    Before you can retrieve information, you need to index your data source to make it searchable. You can use software like Elasticsearch for efficient indexing and searching of text documents.

    Step 3: Set Up the Retriever

    You then need a retrieval model that can take a query and find the most relevant documents in your database. This could be a simple TF-IDF (Term Frequency-Inverse Document Frequency) retriever or a more sophisticated neural network-based approach like a Dense Retriever that maps text to embeddings.
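    As an illustration of the simple TF-IDF option mentioned above, here is a minimal retriever built only on the Python standard library. The documents and query are made up for the example; a real system would use an engine like Elasticsearch or a dense retriever instead.

```python
import math
from collections import Counter

def tfidf(text, df, n_docs):
    """TF-IDF weights for one document, given corpus document frequencies."""
    tokens = text.lower().split()
    tf = Counter(tokens)
    return {w: (tf[w] / len(tokens)) * math.log(n_docs / df[w]) for w in tf if df[w]}

def cosine(a, b):
    dot = sum(wt * b.get(w, 0.0) for w, wt in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Napoleon was crowned emperor of France in 1804.",
    "The printing press was invented by Gutenberg.",
    "The French Revolution began in 1789.",
]
# Document frequency: in how many documents each word appears.
df = Counter(w for d in docs for w in set(d.lower().split()))
vectors = [tfidf(d, df, len(docs)) for d in docs]
query_vec = tfidf("emperor of France", df, len(docs))
best = max(range(len(docs)), key=lambda i: cosine(query_vec, vectors[i]))
print(docs[best])
```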

    Step 4: Integrate with a Generative AI Model

    The retrieved documents are then fed into a generative AI model, like GPT-4o. This model is responsible for synthesizing the information from the documents with the original query to generate coherent text.

    Step 5: Training Your RAG Model

    If you're training a RAG model from scratch, you'd need to fine-tune your generative AI model on a task-specific dataset. You’d need to:

    • Provide pairs of queries and the correct responses.

    • Allow the model to retrieve documents during training and learn which documents help it generate the best responses.

    Step 6: Iterative Refinement

    After initial training, you can refine your model through further iterations, improving the retriever or the generator based on the quality of outputs and user feedback.

    Building such a RAG system would be a significant engineering effort, requiring expertise in machine learning, NLP, and software engineering.

    Why RAG is a Game-Changer

    RAG significantly enhances the relevance and factual accuracy of text generated by AI systems. This is due to its ability to access current databases, allowing the AI to provide information that is not only accurate but also reflects the latest updates.

    Moreover, RAG reduces the amount of training data needed for language models. By leveraging external databases for knowledge, these models do not need to be fed as much initial data to become functional.

    RAG also offers the capability to tailor responses more specifically, as the source of the retrieved data can be customized to suit the particular information requirement. This functionality signifies a leap forward in making AI interactions more precise and valuable for users seeking information.

    Practical Applications of RAG

    The applications of RAG are vast and varied. Here are a few examples:

    • Customer Support: RAG can pull up customer data or FAQs to provide personalized and accurate support.

    • Content Creation: Journalists and writers can use RAG to automatically gather information on a topic and generate a draft article.

    • Educational Tools: RAG can be used to create tutoring systems that provide students with detailed explanations and up-to-date knowledge.

    Challenges and Considerations

    Despite its advantages, RAG also comes with its set of challenges:

    • Quality of Data: The retrieved information is only as good as the database it comes from. Inaccurate or biased data sources can lead to flawed outputs.

    • Latency: Retrieval from large databases can be time-consuming, leading to slower response times.

    • Complexity: Combining retrieval and generation systems requires sophisticated machinery and expertise, making it complex to implement.

    Conclusion

    Retrieval Augmented Generation is a significant step forward in the NLP field. By allowing machines to access a vast array of information and create something meaningful from it, RAG opens up a world of possibilities for AI applications.

    Whether you're a developer looking to build smarter AI systems, a business aiming to improve customer experience, or just an AI enthusiast, understanding RAG is crucial for advancing in the dynamic field of artificial intelligence.

    Why is it Important?

    Generative AI models, like OpenAI’s GPT series, are revolutionizing industries from content creation to coding. But their utility hinges on the quality of the prompts they receive. A well-engineered prompt can yield rich, accurate, and nuanced responses, while a poor one can lead to irrelevant or even nonsensical answers.

    The Anatomy of a Good Prompt

    Clarity and Specificity

    AI models are literal. If you ask for an article, you'll get an article. If you ask for a poem about dogs in space, you’ll get exactly that. The specificity of your request can significantly alter the output.

    Example:

    • Vague Prompt: Write about health.

    • Engineered Prompt: Write a comprehensive guide on adopting a Mediterranean diet for improving heart health, tailored for beginners.

    Contextual Information

    Providing context helps the AI understand the nuance of the request. This could include tone, purpose, or background information.

    Example:

    • Without Context: Explain quantum computing.

    • With Context: Explain quantum computing in simple terms for a blog aimed at high school students interested in physics.

    Closed vs. Open Prompts

    Closed prompts lead to specific answers, while open prompts allow for more creativity. Depending on your goal, you may need one over the other.

    Example:

    • Closed Prompt: What is the capital of France?

    • Open Prompt: Describe a day in the life of a Parisian.

    The Practice of Prompt Engineering

    Prompt engineering is not a "get it right the first time" kind of task. It involves iterating prompts based on the responses received. Tweaking, refining, and even overhauling prompts based on output can lead to more accurate and relevant results.

    A significant part of prompt engineering is experimentation. By testing different prompts and studying the outputs, engineers learn the nuances of the AI's language understanding and generation capabilities.

    Keywords are the bread and butter of prompt engineering. Identifying the right keywords can steer the AI in the desired direction.

    Example:

    • Without Keyword Emphasis: Write about the internet.

    • With Keyword Emphasis: Write an article focused on the evolution of internet privacy policies.

    Advanced Techniques

    Chain of Thought Prompts

    These prompts mimic a human thought process, providing a step-by-step explanation that leads to an answer or conclusion. This can be especially useful for complex problem-solving.

    Example:

    • Chain of Thought Prompt: To calculate the gravitational force on an apple on Earth, first, we determine the mass of the apple and the distance from the center of the Earth...

    Zero-Shot and Few-Shot Learning

    In zero-shot learning, the AI is given a task without previous examples. In few-shot learning, it’s provided with a few examples to guide the response. Both techniques can be leveraged in prompt engineering for better results.

    Example:

    • Zero-Shot Prompt: What are five innovative ways to use drones in agriculture?

    • Few-Shot Prompt: Here are two ways to use drones in agriculture: 1) Crop monitoring, 2) Automated planting. List three more innovative ways.
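    The difference between the two styles can be sketched programmatically. The build_prompt helper below is hypothetical (not part of any API); it simply prepends optional examples to the task, mirroring the zero-shot and few-shot prompts above.

```python
def build_prompt(task, examples=()):
    """Compose a zero-shot (no examples) or few-shot (with examples) prompt."""
    lines = []
    if examples:
        lines.append("Here are some examples:")
        lines.extend(f"{i}) {ex}" for i, ex in enumerate(examples, 1))
    lines.append(task)
    return "\n".join(lines)

zero_shot = build_prompt("What are five innovative ways to use drones in agriculture?")
few_shot = build_prompt(
    "List three more innovative ways to use drones in agriculture.",
    examples=["Crop monitoring", "Automated planting"],
)
print(few_shot)
```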

    Ethical Considerations and Limitations

    • Bias and Sensitivity: Prompt engineers must be mindful of inherent biases and ethical considerations. This includes avoiding prompts that could lead to harmful outputs or perpetuate stereotypes.

    • Realistic Expectations: LLMs and Generative AI are powerful but not omnipotent. Understanding their limitations is crucial in setting realistic expectations for what prompt engineering can achieve.

    • Data Privacy and Security: As prompts often contain information that may be sensitive, engineers must consider data privacy and security in their designs.

    Conclusion

    Prompt engineering is more than a technical skill—it’s a new form of linguistic artistry. As we continue to integrate AI into our daily lives, becoming adept at communicating with these systems will become as essential as coding is today.

    Whether you’re a writer, a developer, or just an AI enthusiast, mastering the craft of prompt engineering will place you at the forefront of this exciting conversational frontier. So go ahead, start crafting those prompts, and unlock the full potential of your AI companions.

    How Bito Indexes Your Code

    In the steps below, we'll show you how Bito indexes your code, ensuring that each query you have is met with precise and contextually relevant information. From breaking down code into digestible chunks to leveraging advanced AI models for nuanced understanding, Bito transforms the daunting task of code analysis into a seamless and efficient experience.

    Here's how the magic happens:

    Step 1: Chunk Breakdown

    Dividing Code into Pieces

    Bito starts by breaking down your source code files into smaller sections, known as 'chunks'. It’s like cutting up a long text into paragraphs to make it more manageable. Each chunk represents a piece of your code that can be individually indexed and analyzed.
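    A minimal sketch of the idea, assuming fixed-size line-based chunks. Real chunkers, including Bito's, are typically smarter about syntactic boundaries such as functions; the 10-line stand-in file here is invented for the example.

```python
def chunk_lines(source, chunk_size=4):
    """Split source text into fixed-size line chunks with 1-based start/end lines."""
    lines = source.splitlines()
    return [
        {
            "start": i + 1,
            "end": min(i + chunk_size, len(lines)),
            "text": "\n".join(lines[i:i + chunk_size]),
        }
        for i in range(0, len(lines), chunk_size)
    ]

code = "\n".join(f"line {n}" for n in range(1, 11))  # a 10-line stand-in source file
chunks = chunk_lines(code)
print(len(chunks), chunks[0]["start"], chunks[-1]["end"])
```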

    Step 2: Indexing Each Chunk

    Creating a Searchable Reference

    After breaking down the file, each chunk is indexed, similar to creating a catalog entry. This step is crucial as it allows for the efficient location of the code segment later on.

    Step 3: Generating Embeddings

    Translating Code into Numeric Vectors

    For every chunk, Bito generates a numeric vector or “embedding”. This process, which can be done using OpenAI or alternative open-source embedding models, translates the code into a mathematical representation. The idea is to create a form that can be easily compared and matched with other code chunks.

    Step 4: Storing the Vectors

    Saving the Essential Data

    These embeddings are then stored in an index file on your machine. This index file is like a detailed directory, listing the file name, the location of the chunk within the file (start and end), and the embedding vector for each piece of code.
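    The structure of such an index entry can be sketched as follows. The file path and the three-element vectors are made-up placeholders (real embeddings have hundreds or thousands of dimensions), and the JSON round trip stands in for writing the index to disk.

```python
import json

def index_entry(path, start, end, vector):
    """One index record: where the chunk lives in the file, plus its embedding."""
    return {"file": path, "start": start, "end": end, "embedding": vector}

# Two hypothetical entries covering consecutive chunks of the same file.
index = [
    index_entry("src/app.py", 1, 40, [0.12, -0.33, 0.91]),
    index_entry("src/app.py", 41, 80, [0.78, 0.05, -0.44]),
]
serialized = json.dumps(index)   # what would be written to the index file
restored = json.loads(serialized)
print(restored[0]["file"], restored[1]["start"])
```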

    Step 5: Query Embedding

    Understanding Your Questions

    When you ask a question in Bito's chatbox, the AI checks whether it has some specific keywords like "my code", "my project", etc. If so, Bito generates a numeric vector for your query, mirroring the process used for code chunks.

    The complete list of these keywords is given on our Available Keywords page.

    Step 6: Finding the Nearest Neighbor

    Matching Your Query with Code

    Using the query's vector, Bito searches the index to find the code chunk with the closest matching embedding. This step identifies the relevant sections of your codebase that can answer your question.
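    A nearest-neighbor lookup of this kind can be sketched with cosine similarity. The two index entries and the query vector below are invented for the example; a real query embedding would come from the same model used to embed the chunks.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_chunk(query_vec, index):
    """Return the index entry whose embedding is most similar to the query."""
    return max(index, key=lambda entry: cosine_similarity(query_vec, entry["embedding"]))

index = [
    {"chunk": "def load_config(...)", "embedding": [0.9, 0.1, 0.0]},
    {"chunk": "def parse_args(...)", "embedding": [0.0, 0.2, 0.9]},
]
query_vec = [0.8, 0.2, 0.1]  # invented embedding for "where is the config loaded?"
print(nearest_chunk(query_vec, index)["chunk"])
```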

    Step 7: Contextualization

    Building a Bigger Picture

    Identifying chunks is just part of the process. Bito ensures that these chunks make sense in the broader context of your code. If necessary, it expands the search to include complete functions or related code segments, creating a fuller, more accurate context.

    Step 8: Leveraging Language Models

    Consulting the AI Experts

    With the context in hand, Bito consults with language models – either basic (GPT-4o mini and similar models) or advanced (GPT-4o, Claude Sonnet 3.5, and best in class AI models) – to interpret the code within the context and provide an accurate response to your query.

    Step 9: Session Privacy

    Keeping Your Data Local

    All the indexing and querying happens on your local machine. The index files are stored in the user’s home folder; for example, on Windows the path will be something like C:\Users\Furqan\.bito\localcodesearch. This ensures that your code and session history remain private and secure.

    Step 10: Safeguarding Data

    Ensuring Confidentiality

    Bito is committed to privacy. All LLM accounts it uses are under strict agreements to prevent your data from being used for training, recorded, or logged.

    Step 11: Handling Hallucination

    Reducing AI Fabrication

    Bito is designed to minimize AI 'hallucinations' or fabrications, ensuring the answers you receive are based on your actual code. Although complete elimination of hallucination isn't feasible, as it sometimes aids in constructing beyond seen data, Bito strives to keep it in check, especially when dealing with your local code.

    With these steps, Bito provides a robust and privacy-conscious method for indexing and understanding your code, simplifying navigation and enhancing productivity in your development projects.


    For CentOS and other RPM-based systems

    sudo yum install bash

    1. Docker (minimum version 20.x)

      • View Guide

    View Guide

    Check Bito CLI Version

    Run any one of the below commands to print the version number of Bito CLI installed currently.

    bito -v

    or

    bito --version

    Bito CLI MyPrompt (Automation using Bito CLI)

    The below commands can help you automate repetitive tasks like software documentation, test case generation, writing pull request descriptions, pull request reviews, release notes generation, writing commit messages, and much more.

    Explore some intelligent AI automations we've created using Bito CLI, which you can implement in your projects right now. These automations showcase the powerful capabilities of Bito CLI.

    1- Non-Interactive Mode with File Input

    Run the below command for non-interactive mode in Bito (where writedocprompt.txt will contain your prompt text such as Explain the code below in brief and mycode.js will contain the actual code on which the action is to be performed).

    2- Standard Input Mode

    Run the below command to read the content at standard input in Bito (where writedocprompt.txt will contain your prompt text such as Explain the code below in brief and input provided will have the actual content on which the action is to be performed).

    3- Direct File Input

    Run the below command to directly concatenate a file and pipe it to bito and get instant result for your query.

    On Mac/Linux

    On Windows

    4- Redirect Output to a File

    On Mac/Linux

    Run the below command to redirect your output directly to a file (where -p can be used along with cat to perform prompt related action on the given content).

    On Windows

    Run the below command to redirect your output directly to a file (where -p can be used along with type to perform prompt related action on the given content).

    5- Store Context/Conversation History

    Run the below command to store context/conversation history in non-interactive mode in the file runcontext.txt, to use for the next set of commands in case prior context is needed. If runcontext.txt is not present, it will be created. Please provide a new file or an existing context file created by Bito via the -c option. With the -c option, context is now supported in non-interactive mode.

    On Mac/Linux

    On Windows

    6- Instant Response for Queries

    Run the below command to instantly get response for your queries using Bito CLI.

    Using Comments

    Anything after the # symbol in your prompt file is treated as a comment by Bito CLI and won't be part of your prompt.

    You can use \# as an escape sequence to make # part of your prompt instead of a comment marker.

    A few examples of the above:

    • Give me an example of bubble sort in python # everything written here will be considered as a comment now.

    • Explain what this part of the code do: \#include<stdio.h>

      • In the example above \# can be used as an escape sequence to include # as a part of your prompt.

    • #This will be considered a comment because the line starts with #.
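The commenting rules above can be sketched in a few lines. This is a conceptual illustration of the described behavior, not Bito CLI's actual parser:

```python
def strip_comments(prompt: str) -> str:
    """Drop everything after an unescaped #; turn \\# into a literal #."""
    out_lines = []
    for line in prompt.splitlines():
        kept = []
        i = 0
        while i < len(line):
            if line[i] == "\\" and i + 1 < len(line) and line[i + 1] == "#":
                kept.append("#")  # \# is an escape for a literal hash
                i += 2
            elif line[i] == "#":
                break             # an unescaped # starts a comment
            else:
                kept.append(line[i])
                i += 1
        out_lines.append("".join(kept))
    return "\n".join(out_lines)
```

For example, `strip_comments(r"Explain this: \#include<stdio.h>")` keeps the hash as part of the prompt, while a line that begins with `#` becomes empty.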

    Using Macro

    Use {{%input%}} macro in the prompt file to refer to the contents of the file provided via -f option.

    Example: To check if a file contains JS code or not, you can create a prompt file checkifjscode.txt with following prompt:

    bito -p writedocprompt.txt -f mycode.js
    bito -p writedocprompt.txt
    cat file.txt | bito
    type file.txt | bito
    cat inventory.sql | bito -p testdataprompt.txt > testdata.sql
    type inventory.sql | bito -p testdataprompt.txt > testdata.sql
    cat inventory.sql | bito -c runcontext.txt -p testdataprompt.txt > testdata.sql
    type inventory.sql | bito -c runcontext.txt -p testdataprompt.txt > testdata.sql
    echo "give me code for bubble sort in python" | bito
    Context is provided below within contextstart and contextend
    contextstart
    {{%input%}}
    contextend
    Check if content provided in context is JS code.
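Conceptually, the {{%input%}} macro expansion works like this. The sketch below is illustrative only (expand_macro is a hypothetical helper, not part of Bito CLI):

```python
def expand_macro(prompt_text, file_contents):
    """Replace the {{%input%}} macro with the contents of the file given via -f."""
    return prompt_text.replace("{{%input%}}", file_contents)

# The checkifjscode.txt prompt from the example above
prompt = (
    "Context is provided below within contextstart and contextend\n"
    "contextstart\n"
    "{{%input%}}\n"
    "contextend\n"
    "Check if content provided in context is JS code."
)

expanded = expand_macro(prompt, "console.log('hello');")
```

The expanded prompt, with the file contents inlined between contextstart and contextend, is what ultimately gets sent to the AI.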
  • Filters – Exclude specific files, folders, or branches from review to focus on relevant code.
  • Tools – Enable additional checks, such as secret scanning and static analysis.

  • Chat – Configure how the agent responds to follow-up questions in pull request comments and manage automatic replies.

  • Authentication
    • bito_cli.bito.access_key: Required for authenticating the agent with the Bito platform.

    • git.provider, git.access_token, etc.: Required for connecting to the appropriate Git provider (e.g., GitHub, GitLab, Bitbucket).

  • General feedback settings

    • code_feedback: Enables or disables general feedback comments in reviews.

  • Analysis tools

    • static_analysis: Enables static code analysis.

    • dependency_check: Enables open-source dependency scanning.

    • dependency_check.snyk_auth_token: Required when using Snyk for vulnerability detection.

  • Review format and scope

    • review_comments: Defines output style (e.g., single post or inline comments).

    • review_scope: Limits the review focus to specific concerns such as security, performance, or style.

  • Filters

    • include_source_branches and include_target_branches: Restrict reviews to pull requests that match specified source and target branch patterns.

    • exclude_files: Skips selected files based on glob patterns.

    • exclude_draft_pr: Skips draft pull requests when enabled (default: True).

  • Bito web UI – These settings can also be configured under Code Review > Repositories when you create or customize an Agent instance, or via the bito-cra.properties file (see the bito-cra.properties file documentation).

    Step-by-step instructions
    1. In Visual Studio Code, go to the Extensions tab and search for Bito.

    2. Install the extension. We recommend restarting the IDE after the installation is complete.

    Starting with version 1.3.4, the Bito extension is only supported on VS Code 1.72 and higher; Bito does not support older VS Code versions, and earlier Bito releases do not function properly on them.

    3. After a successful install, the Bito logo appears in the Visual Studio Code pane.

    4. Click the Bito logo to launch the extension and complete the setup process. You will either need to create a new workspace, if you are the first in your company to install Bito, or join an existing workspace created by a co-worker. See Managing workspace members.

    Visual Studio Code Marketplace Link: https://marketplace.visualstudio.com/items?itemName=Bito.bito

    Setup Bito extension in VS Code running through SSH

    SSH (Secure Shell) is a network protocol that securely enables remote access, system management, and file transfer between computers over unsecured networks.

    The Visual Studio Code IDE allows developers to remotely access and collaborate on projects from any connected machine. The corresponding extension (Remote - SSH) must be installed in the host machine's Visual Studio Code IDE to use this feature.

    The Bito VS Code extension seamlessly integrates with Remote development via SSH, allowing developers to utilize Bito features and capabilities on their remote machines.

    Remote SSH connection and setup

    Please follow the instructions given in the links below:

    • https://code.visualstudio.com/docs/remote/ssh

    • https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse

    Video Guide:

    Setup Bito extension in VS Code running through WSL

    Running VS Code on WSL lets developers work in a Linux-like environment directly from Windows, taking advantage of the development experience of both operating systems.

    WSL provides access to Linux command-line tools, utilities, and applications, enhancing productivity and streamlining the development process.

    This setup ensures a consistent development environment across different systems, making it easier to develop, test, and deploy applications that will run on Linux servers.

    WSL connection and setup

    Please follow the instructions given in the links below:

    • https://code.visualstudio.com/docs/remote/wsl-tutorial

    • https://learn.microsoft.com/en-us/windows/wsl/install

    Video Guide:


    | Permission | Owner | Admin | Member |
    | --- | --- | --- | --- |
    | Make or Remove Other Owner | Yes | No | No |
    | Promote another user to admin or remove admin | Yes | Yes | No |
    | Manage Subs and Billing | Yes | Yes | No |
    | Manage Overage Limits | Yes | Yes | |

    How Does Generative AI Work?

    Generative AI works using advanced machine learning models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).

    These models involve two key components:

    1. Generative Models: These are the AI algorithms that create the new data. For example, a generative model might create new images of animals it has never seen before by learning from a dataset of animal pictures.

    2. Discriminative Models: In the case of GANs, the discriminative model evaluates the data generated by the generative model. This is like an art critic who tells the artist if their work is believable or not.

    The two models work together in a sort of AI tug-of-war, with the generative model trying to produce better and better outputs and the discriminative model trying to get better at telling the difference between generated and real data.
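This tug-of-war can be caricatured with a single-number "model" on each side. The sketch below is only an illustration of the feedback loop, not a real GAN (real GANs use neural networks trained with gradient-based losses); REAL_MEAN and the update rates are arbitrary choices:

```python
import random

random.seed(0)

REAL_MEAN = 10.0  # "real" data: numbers clustered around 10

def sample_real():
    return REAL_MEAN + random.uniform(-1, 1)

d_center = 0.0  # the discriminator's belief about where real data lives
g_value = 0.0   # the generator's single parameter (its fake output)

for _ in range(1000):
    # Discriminator step: refine its belief using a real sample
    d_center += 0.05 * (sample_real() - d_center)
    # Generator step: nudge output toward what the discriminator calls "real"
    g_value += 0.05 * (d_center - g_value)

# After training, the generator's output lands near the real data (about 10)
```

Over many rounds, the generator's output drifts toward what the discriminator accepts as real, which is the essence of the adversarial loop described above.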

    Applications of Generative AI

    Generative AI has a plethora of applications; here are a few:

    • Art: Apps like DeepArt and platforms like DALL-E generate original visuals and art based on user prompts.

    • Music: AI like OpenAI's Jukebox can generate music, complete with lyrics and melody, in various styles and genres.

    • Text: Tools like ChatGPT can write articles, poetry, and even code based on text prompts. Bito also falls in this category as an AI Coding Assistant.

    • Design: Generative AI can suggest design layouts for everything from websites to interior decorating.

    • Deepfakes: This controversial use involves generating realistic video and audio recordings that can mimic real people.

    Benefits and Challenges

    Benefits

    • Efficiency: Generative AI can produce content much faster than humans.

    • Creativity: It has the potential to create novel combinations that might not occur to human creators.

    • Personalization: AI can tailor content to individual tastes and preferences.

    Challenges

    • Ethics: Generative AI raises questions about authenticity and the ownership of AI-generated content.

    • Quality Control: Ensuring consistent quality of AI-generated content can be challenging.

    • Misuse: There’s a risk of its use in creating misleading information or deepfakes.

    Future Prospects

    The future of Generative AI is both exciting and uncertain. It could revolutionize how we create and consume content. For instance, imagine personalized movies generated in real-time to match your mood, or educational content adapted perfectly to each student's learning style.

    As technology advances, so too will the capabilities of Generative AI. It's not just about the ‘WTF’ factor; it's about recognizing the potential and preparing for the transformation it will bring about.

    Conclusion

    Generative AI is at the frontier of innovation, standing at the crossroads of creativity and computation. It is transforming the conventional processes of creation across various fields and presenting us with a future where the line between human and machine-made is increasingly blurred. While it brings with it a host of benefits, we must tread carefully to navigate the ethical considerations and harness its power for the greater good.

    As with any transformative technology, the question isn’t just 'WTF is Generative AI?' but also 'How do we responsibly integrate it into our society?' That is the real challenge and opportunity ahead.

    Create a New Workspace

    The link to create a new workspace will appear at the bottom of the sign-up flow screen. Click on "Create Workspace" to get started.

    Now, enter the name of the workspace. You can also choose to make this workspace discoverable by users whose email is on the same domain as yours. Finally, click the "Next" button to proceed with creating the new workspace.

    For example, if your email is on the mywebsite.com domain and you enable the "Workspace discovery" feature, then any other person with an email that ends in @mywebsite.com can join your workspace after they sign in.

    You can always switch this feature off later by visiting the Workspace Settings page.

    Workspace discovery feature is not available for public email addresses like @gmail.com, @outlook.com, @yahoo.com, etc.
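The discovery rule above can be sketched as follows. This is a conceptual illustration, not Bito's actual implementation, and the email addresses are hypothetical:

```python
# Public email providers are not eligible for workspace discovery
PUBLIC_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def discoverable_domain(email):
    """Return the domain used for workspace discovery, or None for public providers."""
    domain = email.rsplit("@", 1)[-1].lower()
    return None if domain in PUBLIC_DOMAINS else domain

def can_auto_join(member_email, workspace_domain):
    # A user may auto-join a discoverable workspace only when their
    # (non-public) email domain matches the workspace's domain
    return discoverable_domain(member_email) == workspace_domain
```

For instance, a colleague at mywebsite.com could auto-join a mywebsite.com workspace, while a gmail.com address never qualifies.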

    Once you complete the Workspace setup, Bito will be ready to use.

    Join an Existing Workspace

    If your email domain is allowed for the Workspace, or your coworker invited you, you will see the Workspace listed during the sign-up flow under the "Workspaces Available to Join" list.

    Simply click on the "Join" button given in front of the workspace you want to join. Joining your company or team Workspace takes less than a minute.

    Alternatively, you can join the Workspace through the Workspace link shared by your coworker.

    Change Workspace

    Follow the steps below to switch to a different workspace:

    1. First, log out of your Bito account.

    2. Then, log back in and choose the workspace you want from the available list.

    How to See Which Workspace You Are In?

    In the IDE extension, place your mouse cursor over the workspace icon. The workspace name will show up as a tooltip.

    Install on JetBrains

    JetBrains Marketplace Link: https://plugins.jetbrains.com/plugin/18289-bito

    Using in Visual Studio Code

    AI that understands your codebase in VS Code

    This feature is only available for our Team Plan. Visit the pricing page or billing documentation to learn more about our paid plans.

    1. Open your project’s folder using Visual Studio Code.

    2. Bito uses AI to create an index of your project’s codebase. This enables Bito to understand the code and provide relevant answers. There are three ways to start the indexing process:

      • When you open a new project, a popup box will appear through which Bito asks you whether you want to enable indexing of this project or not. Click on the “Enable” button to start the indexing process. You can also skip this step by clicking the “Maybe later” button. You can always index the project later if you want.

    • In the bottom-left of the Bito plug-in pane, hover your mouse cursor over this icon. You can also enable indexing from here by clicking the “Click to enable it” text.

    • Another option is to open the "Manage Repos" tab by clicking the laptop icon in the top-right corner of the Bito plugin pane.

    Bito usually takes around 12 minutes per 10MB of code to understand your repo.

    Bito will still work correctly if you don’t enable indexing of your project. However, in that case, Bito will only be able to provide general answers.

    If you have previously indexed some projects using Bito then they will show in the “Other projects” section.

    Index building is aborted if the user logs out or if the user's subscription is canceled (downgraded from a paid plan to a free plan).

    3. Let’s start the indexing process by using any of the above-mentioned methods.

    4. The status will now be updated to “Indexing in progress...” instead of “Not Indexed”. You will also see the real-time indexing progress for the current folder, based on the number of files indexed.

    If you close VS Code while indexing is in progress, don’t worry. The indexing will be paused and will automatically continue from where it left off when you reopen VS Code. Currently, the indexing will resume 5-10 minutes after reopening the IDE.

    The progress indicator for the other folders is updated every 5 minutes.

    5. Once the indexing is complete, the status will be updated from “Indexing in progress...” to “Indexed”, and will look like this.

    6. Now you can ask any question regarding your codebase by adding the keyword "my code" to your AI requests in the Bito chatbox. Bito is ready to answer them!

    Example: in my code explain the file apiUser.js

    Additional keywords for various languages are listed on the linked page, along with some examples.

    7. If you ever want to delete an index, you can do so by clicking this three-dots button and then clicking the “Delete” button.

    Index deletion is allowed even if the index is in progress or in a paused state.

    8. A warning popup box will open at the bottom of Bito’s plugin pane. You can either click the “Delete” button to delete the project’s index from your system or click the “Cancel” button to go back.

    A Quick Example from a Real Project

    For the sake of this tutorial, we’ve created a simple “Music Player using JavaScript”.

    Here’s how it looks:

    We have added a bunch of songs to this project. The song details like name, artist, image, and the music file name are stored in a file called music-list.js

    Question # 1

    Let’s ask Bito to list names of all song artists used in my code

    As you can see, Bito gave the correct answer by utilizing its understanding of our repository.

    Similarly, we can ask any coding-related question like find bugs, improve code, add new features, etc.

    Question # 2

    Our music player is working fine, but we don’t have any option to mute/unmute the song.

    Let’s ask Bito to add this feature.

    Here’s the question I used:

    In my code how can i add a button to mute and unmute the song? By default, set this button to unmute. Also, use the same design as existing buttons in UI.

    After adding the code suggested by Bito, here’s how the music player looks when it starts (unmuted).

    And when muted:

    Keyboard shortcuts

    Bito UI in Visual Studio Code and JetBrains IDEs is entirely keyboard accessible. You can navigate Bito UI with standard keyboard actions such as TAB, SHIFT+TAB, ENTER, and ESC keys. Additionally, you can use the following shortcuts for quick operations.

    The following video demonstrates important keyboard shortcuts.

    General

    Command
    Shortcuts

    Question & Answers

    The following keyboard shortcuts work after the Q/A block is selected.

    Command
    Keyboard Shortcut

    Change Default Keyboard Shortcuts

    Bito has carefully selected its keyboard shortcuts after thorough testing. However, a key combination Bito selected may still conflict with an IDE or other extension shortcut. You can change Bito's default shortcut keys to avoid such conflicts.

    Visual Studio Code Editor

    1. To open the Keyboard Shortcuts editor in VS Code, navigate to File > Preferences > Keyboard Shortcuts (Code > Preferences > Keyboard Shortcuts on macOS).

    2. Search for available commands, keybindings, or Bito extension-specific commands in the VS Code Keyboard Shortcuts editor.

    3. To find a conflict in a key binding, search for the key and take the necessary action, e.g., Remove or Reset.

    4. Add a new key binding or remap an existing Bito extension command. Provide the necessary information (Command ID) to add the new key binding.

    JetBrains

    JetBrains Document:

    1. Go to File > Settings > Keymaps > Configure Keymaps.

    2. Bito extension shortcuts can be overwritten under File > Settings > Keymaps > Configure Keymaps by assigning a shortcut to the action you want; if there is a conflict, the existing Bito shortcut is overwritten.

    3. Bito extension keyboard shortcuts can be changed from the IntelliJ settings: File > Settings > Keymaps > Configure Keymaps > Plugins > Bito > right-click the action you want to change.

    4. Bito extension keyboard shortcuts can be deleted from the IntelliJ settings: File > Settings > Keymaps > Configure Keymaps > Plugins > Bito > right-click the action you want to delete.

    Install/run via CLI

    CLI mode is best suited for immediate, one-time code reviews.

    1. Prerequisites: Before proceeding, ensure you've completed all necessary prerequisites for self-hosted AI Code Review Agent.

    2. Start Docker: Ensure Docker is running on your machine.

    3. Repository Download: Download the AI Code Review Agent GitHub repository to your machine.

    4. Extract and Navigate:

    • Extract the downloaded .zip file to a preferred location.

    • Navigate to the extracted folder and then to the “cra-scripts” subfolder.

    • Note the full path to the “cra-scripts” folder for later use.

    5. Open Command Line:

      • Use Bash for Linux and macOS.

      • Use PowerShell for Windows.

    6. Set Directory:
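The change-directory command itself is not shown above; assuming the cra-scripts path you noted earlier, it is simply (the path below is a placeholder):

```
cd /path/to/cra-scripts
```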

    7. Configure Properties:

      • Open the bito-cra.properties file in a text editor from the “cra-scripts” folder. Detailed information for each property is provided on the bito-cra.properties file documentation page.

      • Set mandatory properties:

    Note: Valid values for git.provider are GITHUB, GITLAB, or BITBUCKET.

    • Optional properties (can be skipped or set as needed):

      • git.domain

      • code_feedback

      • static_analysis

    Note: Detailed information for each property is provided on the bito-cra.properties file documentation page.

    Check the guide to learn more about creating the access tokens needed to configure the Agent.

    8. Run the Agent:

      • On Linux/macOS in Bash: Run ./bito-cra.sh bito-cra.properties

      • On Windows in PowerShell: Run ./bito-cra.ps1 bito-cra.properties

    This step might take time initially as it pulls the Docker image and performs the code review.

    9. Final Steps:

      • The script may prompt values of mandatory/optional properties if they are not preconfigured.

      • Upon completion, a code review comment is automatically posted on the Pull Request specified in the pr_url property.

    Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please .

    Screenshots

    Screenshot # 1

    AI-generated pull request (PR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a pull request.

    Screenshot # 3

    AI code review feedback posted as comments on the pull request.

    Using in JetBrains IDEs

    AI that understands your codebase in JetBrains IDEs (e.g., PyCharm)

    This feature is only available for our Team Plan. Visit the pricing page or billing documentation to learn more about our paid plans.

    1. Open your project’s folder using a JetBrains IDE. For this guide, we are using PyCharm.

    2. Bito uses AI to create an index of your project’s codebase. This enables Bito to understand the code and provide relevant answers. There are three ways to start the indexing process:

      • When you open a new project, a popup box will appear through which Bito asks you whether you want to enable indexing of this project or not. Click on the “Enable” button to start the indexing process. You can also skip this step by clicking the “Maybe later” button. You can always index the project later if you want.

    • In the bottom-left of the Bito plug-in pane, hover your mouse cursor over this icon. You can also enable indexing from here by clicking the “Click to enable it” text.

    • Another option is to open the "Manage Repos" tab by clicking the laptop icon in the top-right corner of the Bito plugin pane.

    Bito usually takes around 12 minutes per 10MB of code to understand your repo.

    Bito will still work correctly if you don’t enable indexing of your project. However, in that case, Bito will only be able to provide general answers.

    If you have previously indexed some projects using Bito then they will show in the “Other projects” section.

    Index building is aborted if the user logs out or if the user's subscription is canceled (downgraded from a paid plan to a free plan).

    3. Let’s start the indexing process by using any of the above-mentioned methods.

    4. The status will now be updated to “Indexing in progress...” instead of “Not Indexed”. You will also see the real-time indexing progress for the current folder, based on the number of files indexed.

    If you close the JetBrains IDE (e.g., PyCharm) while indexing is in progress, don’t worry. The indexing will be paused and will automatically continue from where it left off when you reopen the IDE. Currently, the indexing will resume 5-10 minutes after reopening the IDE.

    The progress indicator for the other folders is updated every 5 minutes.

    5. Once the indexing is complete, the status will be updated from “Indexing in progress...” to “Indexed”, and will look like this.

    6. Now you can ask any question regarding your codebase by adding the keyword "my code" to your AI requests in the Bito chatbox. Bito is ready to answer them!

    Example: in my code explain the file apiUser.js

    Additional keywords for various languages are listed on the linked page, along with some examples.

    7. If you ever want to delete an index, you can do so by clicking this three-dots button and then clicking the “Delete” button.

    Index deletion is allowed even if the index is in progress or in a paused state.

    8. A warning popup box will open at the bottom of Bito’s plugin pane. You can either click the “Delete” button to delete the project’s index from your system or click the “Cancel” button to go back.

    A Quick Example from a Real Project

    For the sake of this tutorial, we’ve created a clone of the popular game “Wordle” using Python.

    Here’s how it looks:

    We have stored the list of words in files inside the “word_files” folder. At the start of the game, a word is randomly selected from these files for the player to guess.

    Question # 1

    Let’s ask Bito to understand my code and briefly write about what this game is all about and how to play it

    Bito correctly described the game by just looking at its source code.

    Question # 2

    Our game (PyWordle) is working fine, but there is no countdown timer to make it a bit more challenging.

    So, let’s ask Bito to add this feature.

    Here’s the question I used:

    suggest code for main.py "class PyWordle" to add a count down timer for this game in my code. I'm using "self" in functions and variable names, so suggest the code accordingly. The player will lose the game if the time runs out. Set the time limit to 2 minutes (format like 02:00). The timer will start when the game starts. Also reset the timer when the game restarts, such as when player closes the "you won / you lost" popup. Display this real-time count down timer on the right-side of where the player score is displayed. Use the similar design as the player score UI. Also tell me exactly where to add your code. Make sure all of this functionality is working.

    Bito suggested code that looks good, but it was a bit incomplete and needed some improvements. So, I asked Bito a series of follow-up questions (one by one) to fix the remaining issues.

    After adding the code suggested by Bito, here’s how the PyWordle game looks now. As you can see, the countdown timer is added exactly where we wanted it.

    Embeddings

    Bito leverages the power of embeddings to understand your entire codebase. But WTF are these embeddings, and how do they help Bito understand your code?

    If you are curious to know, this guide is for you!

    What is Embedding?

    Embeddings, at their essence, are like magic translators. They convert data—whether words, images, or, in Bito's case, code—into vectors in a dense numerical space. These vectors encapsulate meaning or semantics. Basically, these vectors help computers understand and work with data more efficiently.

    Imagine an embedding as a vector (list) of floating-point numbers. If two vectors are close, they're similar. If they're far apart, they're different. Simple as that!

    A vector embedding looks something like this: [0.02362240, -0.01716885, 0.00493248, ..., 0.01665339]
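One common way to measure this closeness is cosine similarity. Here is a small illustration using made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: near 1 means similar direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [0.02362240, -0.01716885, 0.00493248]
v2 = [0.02360000, -0.01700000, 0.00500000]  # points in almost the same direction as v1
v3 = [-0.90000000, 0.10000000, 0.30000000]  # points in a very different direction

print(round(cosine_similarity(v1, v2), 2))  # close to 1.0 (similar)
print(round(cosine_similarity(v1, v3), 2))  # negative (dissimilar)
```

Vectors pointing in nearly the same direction score close to 1, while unrelated or opposed vectors score near zero or below.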

    Why Embeddings?

    In this section, we'll explore the most common and impactful ways embeddings are used in everyday tech and applications.

    Word Similarity & Semantics: Word embeddings, like Word2Vec, map words to vectors such that semantically similar words are closer in the vector space. This allows algorithms to discern synonyms, antonyms, and more based on their vector representations.

    Sentiment Analysis: By converting text into embeddings, machine learning models can be trained to detect and classify the sentiment of a text, such as determining if a product review is positive or negative.

    Recommendation Systems: Embeddings can represent items (like movies, books, or products) and users. By comparing these embeddings, recommendation systems can suggest items similar to a user's preferences. For example, by converting audio or video data into embeddings, systems can recommend content based on similarity in the embedded space, leading to personalized user recommendations.

    Document Clustering & Categorization: Text documents can be turned into embeddings using models like Doc2Vec. These embeddings can then be used to cluster or categorize documents based on their content.

    Translation & Language Models: Models like BERT and GPT use embeddings to understand the context within sentences. This contextual understanding aids in tasks like translation and text generation.

    Image Recognition: Images can be converted into embeddings using convolutional neural networks (CNNs). These embeddings can then be used to recognize and classify objects within the images.

    Anomaly Detection: By converting data points into embeddings, algorithms can identify outliers or anomalies by measuring the distance between data points in the embedded space.

    Chatbots & Virtual Assistants: Conversational models turn user inputs into embeddings to understand intent and context, enabling more natural and relevant responses.

    Search Engines: Text queries can be converted into embeddings, which are then used to find relevant documents or information in a database by comparing embeddings.

    Let’s look at an example

    Suppose you have two functions in your codebase:

    Function # 1:

    Function # 2:
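The original snippets aren't shown above; a minimal sketch of the two functions the discussion assumes (simple addition and subtraction helpers):

```python
def add(a, b):
    # Function #1: returns the sum of two numbers
    return a + b

def subtract(a, b):
    # Function #2: returns the difference of two numbers
    return a - b
```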

    Using embeddings, Bito might convert these functions into two vectors. Because these functions perform different operations, their embeddings would be at a certain distance apart. Now, if you had another function that also performed addition but with a slight variation, its embedding would be closer to the add function than the subtract function.

    Let's oversimplify and imagine these embeddings visually:

    Embedding for Function # 1 (add):

    [0.9, 0.2, 0.1]

    Embedding for Function # 2 (subtract):

    [0.2, 0.9, 0.1]

    Notice the numbers? The first positions in these lists are quite different: 0.9 for addition and 0.2 for subtraction. This difference signifies the varied operations these functions perform.

    Now, let's add a twist. Suppose you wrote another addition function, but with an extra print statement:

    Function # 3:
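The snippet isn't shown here; a minimal sketch of what the text describes (the same addition logic plus an extra print statement):

```python
def add_with_log(a, b):
    # Function #3: same addition logic as Function #1, with an extra print
    result = a + b
    print(f"The sum of {a} and {b} is {result}")
    return result
```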

    Bito might give an embedding like:

    [0.85, 0.3, 0.15]

    If you compare, this new list is more similar to the add function's list than the subtract one, especially in the first position. But it's not exactly the same as the pure add function because of the added print operation.

    This distance or difference between lists is what Bito uses to determine how similar functions or chunks of code are to one another. So, when you ask Bito about a piece of code, it quickly checks these number lists, finds the closest match, and guides you accordingly!
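Using the toy vectors above, this "distance between lists" can be checked with a straightforward Euclidean-distance calculation:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two vectors of equal length
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

add_vec      = [0.90, 0.20, 0.10]  # Function #1 (add)
subtract_vec = [0.20, 0.90, 0.10]  # Function #2 (subtract)
variant_vec  = [0.85, 0.30, 0.15]  # Function #3 (add with a print)

print(euclidean(variant_vec, add_vec) < euclidean(variant_vec, subtract_vec))  # True
```

The variant sits roughly 0.12 away from the add vector but almost 0.89 away from the subtract vector, which is exactly the "closest match" comparison described above.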

    How Bito Uses Embeddings

    When you ask Bito a question or seek assistance with a certain piece of code, Bito doesn't read the code the way we do. Instead, it refers to these vector representations (embeddings). By doing so, it can quickly find related pieces of code in your repository or understand the essence of your query.

    For example, if you ask Bito, "Where did I implement addition logic?", Bito will convert your question into an embedding and then look for the most related (or closest) embeddings in its index. Since it already knows the add function's embedding represents addition, it can swiftly point you to that function.

    Models for Generating Embeddings

    When we talk about turning data into these nifty lists of numbers (embeddings), several models and techniques come into play. These models have been designed to extract meaningful patterns from vast amounts of data and represent them as compact vectors. Here are some of the standout models:

    Word2Vec: One of the pioneers in the world of embeddings, this model, developed by researchers at Google, primarily focuses on words. Given a large amount of text, Word2Vec can produce a vector for each word, capturing its context and meaning.

    Doc2Vec: An extension of Word2Vec, this model is designed to represent entire documents or paragraphs as vectors, making it suitable for larger chunks of text.

    GloVe (Global Vectors for Word Representation): Developed by Stanford, GloVe is another method to generate word embeddings. It stands out because it combines both global statistical information and local semantic details from a text.

    BERT (Bidirectional Encoder Representations from Transformers): A more recent and advanced model from Google, BERT captures context from both left and right (hence, bidirectional) of a word in all layers. This deep understanding allows for more accurate embeddings, especially in complex linguistic scenarios.

    FastText: Developed by Facebook’s AI Research lab, FastText enhances Word2Vec by considering sub-words. This means it can generate embeddings even for misspelled words or words not seen during training by breaking them into smaller chunks.

    ELMo (Embeddings from Language Models): This model dynamically generates embeddings based on the context in which words appear, allowing for richer representations.

    Universal Sentence Encoder: This model, developed by Google, is designed to embed entire sentences, making it especially useful for tasks that deal with larger text chunks or require understanding the nuances of entire sentences.

    GPT (Generative Pre-trained Transformer): Developed by OpenAI, GPT is a series of models (from GPT-1 to GPT-4o) that use the Transformer architecture to generate text. While GPT models are famous for generating text, they can also produce vector embeddings. Their latest embeddings model is text-embedding-ada-002 which can generate embeddings for text search, code search, sentence similarity, and text classification tasks.

    Bito uses text-embedding-ada-002 from OpenAI, and we’re also trying out some open-source embedding models for our features.

    These models, among many others, power a wide range of applications, from natural language processing tasks like sentiment analysis and machine translation to aiding assistants like Bito in understanding and processing code or any other form of data.
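    To make FastText's sub-word idea concrete, here is a minimal sketch (not FastText's actual implementation; the function name and the 3-character window are illustrative) of how a word can be broken into character n-grams with FastText-style boundary markers:

    ```python
    def char_ngrams(word, n=3):
        """Break a word into character n-grams, FastText-style.

        Angle brackets mark the word's start and end, so prefixes and
        suffixes are distinguishable from mid-word fragments.
        """
        marked = f"<{word}>"
        return [marked[i:i + n] for i in range(len(marked) - n + 1)]

    # An out-of-vocabulary or misspelled word still yields familiar pieces
    # that overlap with n-grams seen during training:
    print(char_ngrams("where"))  # ['<wh', 'whe', 'her', 'ere', 're>']
    ```

    Because a rare or misspelled word shares most of these fragments with known words, the model can still assemble a reasonable embedding for it.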

    Embeddings: More Than Just Numbers

    While embeddings might seem like just another technical term or a mere list of numbers, they are crucial bridges that connect human logic and machine understanding. The ability to convert complex data, be it code, images, or even human language, into such vectors, and then use the 'distance' between these vectors to find relatedness, is nothing short of magic.
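    That "distance" between vectors is commonly measured with cosine similarity. Here is a minimal sketch using toy three-dimensional vectors (real embedding models produce hundreds or thousands of dimensions; the example vectors and function name are illustrative):

    ```python
    import math

    def cosine_similarity(a, b):
        """Cosine of the angle between two vectors: closer to 1.0 = more related."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Toy vectors: "cat" and "kitten" point in similar directions; "car" does not.
    cat, kitten, car = [0.9, 0.8, 0.1], [0.85, 0.75, 0.2], [0.1, 0.2, 0.9]
    print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
    ```

    The same comparison works whether the vectors encode words, documents, or code: related items land close together, unrelated items land far apart.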

    In the context of Bito, embeddings aren't just a feature—they're the core that powers its deep understanding of your code, making it an indispensable tool for developers. So, the next time you think of Bito's answers as magical, remember: it's the power of embeddings at work!

    Overview

    System intelligence for your coding agents.

    Bito’s AI Architect builds a knowledge graph of your codebase — from repos to modules to APIs — delivering deep codebase intelligence to the coding agents you already use. This fundamentally changes the game for enterprises with many microservices or large, complex codebases.

    Bito provides the AI Architect in a completely secure fashion: it is available on-premises if you desire, with no code ever being sent to Bito. No AI is trained on your code, and your code is not stored.

    Key capabilities of the AI Architect include:

    • Grounded 1-shot production-ready code — The AI Architect learns all your services, endpoints, code usage examples, and architectural patterns. It automatically feeds those to your coding agent (Claude Code, Cursor, Codex, any MCP client), providing the information it needs to quickly and efficiently create production-ready code.

    • Consistent design adherence — Generated code aligns with your architecture patterns and coding conventions.

    The AI Architect builds the knowledge graph by analyzing all your repositories (whether you have 50 or 5,000 repos) to learn about your codebase architecture, microservices, modules, API endpoints, design patterns, and more.

    How you can use AI Architect

    AI Architect is designed to be flexible and can power multiple use cases across different AI coding tools and workflows.

    • Integrate via MCP server – Use AI Architect as an MCP (Model Context Protocol) server to connect with tools like Claude Code, Cursor, Windsurf, and GitHub Copilot (VS Code). It helps connected tools understand your codebase and workflows better, resulting in more accurate and reliable suggestions.

    • On-premises deployment – Install and run AI Architect on your own infrastructure.

    Why use the AI Architect?

    Most AI coding tools struggle with accuracy in real-world codebases because they:

    1. Don’t fully understand the breadth and depth of your codebase. They read some of the code in your existing repository, but they don’t have a complete graph of your internal APIs, endpoints, libraries, etc. On top of that, if you are accessing a monorepo or many services not available locally on your machine, they have no context, or get confused trying to access them. Bito’s AI Architect builds a knowledge graph to provide this information cheaply and efficiently to your coding agent so it can accomplish the task with grounded and complete information.

    2. Don’t fully understand how all of your services and modules interact with each other when you are trying to understand your overall system rather than just one component. The AI Architect’s graph contains a mapping of all the dependencies, allowing it to provide sophisticated analysis – just as you would expect from an architect.

    How does AI Architect differ from embeddings?

    Traditional embeddings work like a search engine — they retrieve code snippets or documents similar to a given query.

    They can find related content but can’t understand how different parts of your system work together.

    The AI Architect, on the other hand:

    • Builds a knowledge graph that captures relationships between repositories, modules, APIs, and libraries.

    • Provides precise answers and implementations, not just search results.

    • Understands context and intent — how and why something is implemented in your codebase.

    • Enables system-aware reasoning, allowing AI agents to generate or review code with full architectural understanding.

    Getting started

    1. Join the beta by filling out the form.

    2. Get a demo with our team.

    3. Lastly, email [email protected] if you have any additional questions.

    Demos of different ways to use AI Architect

    Overview

    Get instant feedback on your code changes directly within your code editor.

    Unlock the power of AI-driven code reviews in VS Code, Cursor, Windsurf, and all JetBrains IDEs (including IntelliJ IDEA, PyCharm, WebStorm, and more) with Bito's AI Code Review Agent. This tool provides real-time, human-like feedback on your code changes, catching common issues before you submit a pull request.

    The AI Code Review Agent helps you improve your code as you develop, so you don't have to wait for days to get feedback. This accelerates development cycles, boosts team productivity, and ensures higher code quality.

    You can start using the Agent immediately—no setup is required!

    Install on VS Code | Install on JetBrains | Install on Cursor | Install on Windsurf

    Prerequisites

    1. Install the latest Bito IDE extension for VS Code, JetBrains, Cursor, or Windsurf.

    2. A workspace subscribed to the Bito Team Plan. Read the documentation on how to upgrade.

    3. The root of your project must use a supported Version Control System such as Git, Perforce, or SVN, and be opened in a supported IDE.

    How to use the Agent in IDE?

    1. Open the Bito IDE extension.

    2. Login to your workspace subscribed to the Bito Team Plan.

    3. Type @codereview in the chat box to open a menu and select from the following actions:

    Supported review options based on your Version Control System (VCS):

    • If your project uses Git, all five review options are available.

    • If your project uses a non-Git VCS (e.g., Perforce, SVN), only two review options are available:

    4. After that, choose between Essential and Comprehensive review modes:

      • In Essential mode, only critical issues are posted.

      • In Comprehensive mode, Bito also includes minor suggestions and potential nitpicks.

    Start code review from context menu

    You can also invoke the AI Code Review Agent directly from the context menu by right-clicking in the code editor and selecting commands under the "Bito Code Review Agent" menu.

    This provides faster, on-the-go access to code reviews right where you work.

    Reviewing the feedback

    Once the AI code review is complete, you'll receive a notification in the IDE. You can view the feedback in the Bito Panel, which includes a list of issues and their fixes.

    Each item will contain the following details:

    • Issue description: Description of the identified issue.

    • Fix description: Recommended approach or steps to resolve the issue.

    • File path: The file containing the issue.

    • Code suggestion: The AI-generated code fix for the issue.

    Each code suggestion includes an Apply button. Click it to open the diff view, where you can review the changes and choose to accept or undo them.

    Code review session history

    To view past code reviews, click the Session history icon in the top-right corner of the Bito Panel. This opens the Session history tab, which lists all your previous code review sessions.

    From the list, click any session to open it and view the complete code review details along with the AI suggestions.

    Managing workspace members

    Bring your team together

    In Bito, collaboration happens within a Workspace, where team members are assigned roles and access based on their responsibilities. In most cases, every organization would create one Workspace. Anyone can sign up on Bito, create a workspace for their team, and invite their coworkers to join the Workspace.

    The Manage Users → Members dashboard introduces a clear, flexible interface for managing user access and feature seats across your team.

    Seat management overview

    At the top of the Members dashboard, you’ll see a summary of your seat usage and assignment status:

    • Seats purchased: Displays the total number of seats your workspace has purchased and the total billing amount.

    • Seats assigned: Shows how many of those seats are assigned for:

      • IDE Code Reviews

      • Git Code Reviews

    You can switch between these modes based on your team's seat allocation preferences.

    Managing members by feature

    Below the seat overview, you'll find three tabs to manage different types of access:

    1. Git Code Review tab

    Assign or unassign seats for members specifically for the Git-based Code Review Agent feature. Each member listed here can be toggled on or off depending on whether you want to allocate a seat for this feature.

    2. IDE Code Review tab

    Similar to the Git Code Review tab, this tab lets you assign or remove access to Bito's AI Chat and code review feature in supported IDEs. You can also invite new members to join the workspace.

    3. Admin tab

    This tab is dedicated to managing administrative roles within the workspace. Only members with elevated permissions are shown here.

    This tab displays a table with the following information:

    • Name: Displays the full name and email address of the member.

    • Role: A dropdown that allows you to set or update the user’s administrative role:

      • Owner: Full control over the workspace.

      • Admin: Access to most workspace management functions.

    Additional options:

    Each admin row has a three-dot menu offering:

    • Remove from Admin members: Revoke administrative privileges.

    • Remove from workspace: Completely remove the user from the workspace.

    Inviting coworkers to the Workspace

    You can use Bito in single-player mode for all use cases. However, it works best when your coworkers join the Workspace to collaborate with Bito. There are three ways you can invite your coworkers.

    Option 1 - Allow your work e-mail domain for the Workspace. This setting is turned on by default, and all users with the same e-mail domain as yours will automatically see the Workspace under "Pending Invitations" when signing up in Bito. You can manage this setting after you create the Workspace through the "Settings" page in your Bito account.

    You may still need to notify your coworkers about Bito and share the Bito workspace URL. We don't send e-mails to your coworkers unless you invite them to the Workspace.

    Option 2 - Invite your coworkers via e-mail when you create your Workspace or later from your workspace setting.

    Option 3 - Share a web link specific to your Workspace via the channel of your choice: e-mail, Slack, or Teams. The link is automatically created and shown when creating a workspace or on the workspace settings page.

    Adding Admin members

    To add a new admin:

    1. Click the “Add members” button at the top of the Admin tab.

    2. In the popup:

      • Select an existing user from your workspace, or

      • Invite a new member by entering their email address.

    Guide for Cursor

    Integrate Cursor with AI Architect for more accurate, codebase-aware AI assistance.

    Use Bito's AI Architect with Cursor to enhance your AI-powered coding experience.

    Once connected via MCP (Model Context Protocol), Cursor can leverage AI Architect’s deep contextual understanding of your project, enabling more accurate code suggestions, explanations, and code insights.

    Prerequisites

    Interaction diagram

    Visualize code changes and their impact with automated sequence diagrams.

    The Interaction Diagram is a visual feature in Bito's AI Code Review Agent that automatically generates sequence diagrams to help you quickly understand the impact of code changes in your pull requests.

    This diagram visualizes how different components of your code interact with each other, making code reviews faster and more intuitive.

    How to enable

    Jira integration

    Bring Jira issue requirements into every pull request and get validation results back automatically.

    Note: The Jira integration is available only on the Team Plan.

    Bito integrates with Jira to automatically validate pull request code changes against linked Jira ticket requirements, helping ensure your implementations align with the specified requirements in those tickets.

    How it works

    FAQs

    Answers to Popular Questions

    How many repositories can Bito index?

    Bito can index unlimited repositories for workspaces subscribed to our Team Plan. This feature is also coming soon to our Free Plan, but it will be limited to a maximum indexable repository size of 10 MB.

    LLM tokens

    At the heart of every LLM, from GPT-3.5 Turbo to the latest GPT-4o, are tokens. These are not your arcade game coins but the fundamental units of language that these models understand and process. Imagine tokens as the DNA of digital language—their sequence dictates how an LLM interprets and responds to text.

    A token is created when we break down a massive text corpus into digestible bits. Think of it like slicing a cake into pieces; each slice, or token, can vary from a single word to a punctuation mark or even a part of a word. The process of creating tokens, known as tokenization, simplifies complex input text, making it manageable for LLMs to analyze.

    Here’s a quick reference to understand token equivalents:

    • 1 token ≈ 4 characters in English
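    As a toy illustration of tokenization (real LLM tokenizers use learned sub-word schemes such as byte-pair encoding, so actual token boundaries differ; the helper below is purely illustrative), a naive splitter might separate words and punctuation like this:

    ```python
    import re

    def naive_tokenize(text):
        """Split text into word and punctuation tokens (illustrative only)."""
        return re.findall(r"\w+|[^\w\s]", text)

    tokens = naive_tokenize("Tokens are the DNA of digital language!")
    print(tokens)
    # ['Tokens', 'are', 'the', 'DNA', 'of', 'digital', 'language', '!']
    ```

    A real tokenizer would go further, splitting rare words into sub-word pieces, which is why the ≈4-characters-per-token figure is only a rule of thumb.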

    Vector databases

    Think of a huge, never-ending stream of information like photos, tweets, and songs pouring in every second. We need special storage boxes to keep all this info organized and find what we need quickly. One of the new, cool storage boxes people are talking about is called a “Vector Database”. So, what's this Vector Database thing, and why is it something you might want to know about? Let's unwrap this mystery and make it super easy to understand.

    What is a Vector Database?

    A vector database is designed to handle vectorized data - that is, data represented as vectors. A vector, in this context, is a mathematical construct that embeds information into a high-dimensional space, with each dimension representing a different feature of the data.

    Traditionally, databases have been adept at handling structured data (like rows and columns in a spreadsheet) or even semi-structured data (like JSON documents). However, with the rise of machine learning and artificial intelligence, there is an increasing need to efficiently store and query data that isn't just numbers or text but is represented in multi-dimensional space.

    Vim/Neovim Plugin

    Vim/Neovim Plugin for Bito Using Bito CLI

    We are excited to announce that one of our users has developed a dedicated Vim and Neovim plugin for Bito, integrating it seamlessly with your favorite code editor. This plugin enhances your coding experience by leveraging the power of Bito's AI capabilities directly within Vim and Neovim.

    Installation

    To get started with "vim-bitoai," follow these steps:

    Step 1: Install Bito CLI

    Make sure you have Bito CLI installed on your system. If you haven't installed it, you can find detailed instructions in the Bito CLI repository at https://github.com/gitbito/CLI.

    Step 2: Install the Plugin

    Open your terminal and navigate to your Vim or Neovim plugin directory. Then, clone the "vim-bitoai" repository using the following command:

    Step 3: Configure the Plugin

    AI Code Review Agent (with AI Architect vs without AI Architect)

    From single-repo reviews to system-wide insights

    The becomes significantly more powerful when paired with .

    Below is a clear explanation of how the agent behaves in each setup and why AI Architect unlocks much deeper, system-level insights.

    AI Code Review Agent without AI Architect

    The standard AI Code Review Agent analyzes code at the repository level.

    It creates a within-repo knowledge graph by building:

    Vector databases fill this gap by excelling at managing and querying data in the form of vectors. This is particularly useful for tasks that involve similarity search, like finding the most similar images, text, or audio clips, in a process known as "nearest neighbor search".

    Why are Vector Databases Important?

    Imagine trying to search for a song that sounds like another song or finding images that are visually similar to a given image. These tasks are non-trivial because they involve understanding the content at a deeper, more abstract level. Vector databases allow us to convert these abstract, complex features into a mathematical space where 'similarity' can be computed and searched efficiently.

    For instance, in the world of machine learning, models like neural networks can convert images or text into vectors during their processing stages. These vectors, known as embeddings, capture the essence of the data. When you query a vector database with another vector, it retrieves the most similar items based on the vector's position and distance in that high-dimensional space.
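    The query flow above can be sketched with a brute-force nearest-neighbor search over a tiny in-memory store (the document IDs, vectors, and function names are hypothetical; production vector databases use approximate indexes such as HNSW to make this fast at scale):

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # A tiny "vector database": id -> embedding (toy 3-dimensional vectors).
    store = {
        "doc_cats":  [0.9, 0.1, 0.0],
        "doc_dogs":  [0.8, 0.3, 0.1],
        "doc_stock": [0.0, 0.2, 0.9],
    }

    def nearest(query, k=2):
        """Return the k stored ids most similar to the query vector."""
        ranked = sorted(store, key=lambda doc_id: cosine(query, store[doc_id]),
                        reverse=True)
        return ranked[:k]

    # A query embedding near the "pet" region retrieves the pet documents,
    # not the finance one:
    print(nearest([0.85, 0.2, 0.05]))
    ```

    Replacing the exhaustive `sorted` scan with an approximate index is exactly the engineering problem the databases below specialize in.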

    Key Features of Vector Databases

    Efficient Similarity Search: They use specialized indexing and search algorithms to perform fast and efficient nearest neighbor searches.

    Scalability: They are designed to handle large volumes of data and high-dimensional vectors without sacrificing performance.

    Machine Learning Integration: They are often integrated with machine learning models and pipelines to enable real-time embedding and querying.

    Language Agnosticism: Vector databases can handle any data that can be vectorized, whether it's images, text, audio, or any other form of media.

    Real-World Applications

    Recommendation Systems: Vector databases can power recommendation engines that suggest products, movies, or songs by finding items that are similar to a user’s past behavior.

    Image Retrieval: They are used in image search engines to find images that are visually similar to a query image.

    Natural Language Processing: In the field of NLP, vector databases enable searching through large corpora of text for documents or entries that are contextually similar to a given piece of text.

    Fraud Detection: They can be used to detect anomalies or patterns in transaction data that signify fraudulent activity by comparing against typical transaction vectors.

    Best Free, Paid, and Open-Source Vector Databases

    Let's look at some top players:

    Pinecone: A cloud-native, managed vector database that doesn't require infrastructure management. Pinecone offers fast data processing and quality relevance features like metadata filters and supports both sparse and dense vectors. Key offerings include duplicate detection, rank tracking, and deduplication.

    Milvus: An open-source vector database tailored for AI applications and similarity search, it provides fast search capabilities across trillions of vector datasets and boasts high scalability and reliability. Its use cases span across image and chatbot applications to chemical structure search.

    Chroma: Aimed at building LLM applications, Chroma is an open-source, AI-native embedding database offering features like filtering and intelligent grouping. It positions itself as a database that combines document retrieval capabilities with AI to enhance data querying processes.

    Weaviate: This is a cloud-native, open-source vector database that stands out with its AI modules and ability to handle text, images, and other data conversions into searchable vectors. It offers quick neighbor search and is designed with scalability and security in mind.

    Deep Lake: Designed for deep learning and LLM-based applications, Deep Lake supports a wide array of data types and integrates with various tools to facilitate model training and versioning. It emphasizes ease in deploying enterprise-grade products.

    Qdrant: A versatile open-source vector search engine and database that supports payload-based storage and extensive filtering. It is well-suited for semantic matching and faceted search, with a focus on efficiency and configuration simplicity.

    Elasticsearch: A highly scalable open-source analytics engine capable of handling diverse data types, Elasticsearch is part of the Elastic Stack, offering fast search, fine-tuned relevance, and sophisticated analytics.

    Vespa: Vespa is an open-source data serving engine that enables machine-learned decisions on massive datasets at serving time. It's built for high-performance and high-availability use cases, facilitating a variety of complex query operations.

    Vald: Focused on dense vector search, Vald is a distributed, cloud-native search engine that uses the ANN Algorithm NGT for neighbor searches. It features automatic indexing, index backup, and horizontal scaling.

    ScaNN: A Google-developed method that improves search accuracy and performance for vector similarity, ScaNN is known for its effective compression techniques and support for different distance functions.

    Pgvector: As a PostgreSQL extension, pgvector brings vector similarity search to the robust, feature-rich environment of PostgreSQL, enabling embeddings to be stored and searched alongside other application data.

    Faiss: Developed by Facebook AI Research, Faiss is a library for efficient similarity search and clustering of dense vectors. It's versatile, supporting various distances and batch processing, and it can operate on datasets larger than available RAM.

    How to Choose the Right Vector Database for Your Project

    When you're picking out the perfect vector database, think about these things:

    • Do you need someone else to handle the techy database stuff, or do you have wizards in-house?

    • Got your vectors ready, or do you need the database to make them for you?

    • How fast do you need the data – right now, or can it wait?

    • How much experience does your team have with this kind of tech?

    • Is the database easy to learn, or is it going to be lots of late nights?

    • Can you trust the database to be up and running when you need it?

    • What's the price tag for setting it up and keeping it going?

    • How secure is it, and does it check all the legal boxes?

    Challenges and Considerations

    While vector databases are powerful, they come with challenges. The management and querying of high-dimensional data can be resource-intensive. The efficiency of a vector database often depends on the underlying infrastructure and the effectiveness of its indexing and compression algorithms.

    Furthermore, security and privacy are crucial, especially when handling sensitive data. Vector databases must ensure that they incorporate robust security measures to protect against unauthorized access and data breaches.

    The Future of Vector Databases

    As data continues to grow in volume and complexity, the importance of vector databases will only increase. Their integration with AI and machine learning makes them a natural fit for a future where almost every digital interaction may involve some form of similarity search or content-based retrieval.

    Conclusion

    Vector Databases are a cutting-edge solution designed to handle the complexity of modern data needs, particularly in the realm of similarity search and AI applications. Understanding and leveraging vector databases can unlock a plethora of opportunities across industries, making them an exciting area of development in the database technology landscape.

    As companies and developers keep using AI more and more, the use of vector databases is expected to increase a lot. This signals the start of a new period in how we handle data, where the way we sort and keep information is as complex and varied as the data itself.


    Add Member by E-mail: Yes / Yes / No

    Access and Share Join workspace link: Yes / Yes / Yes

    Deactivate Member: Yes / Yes / No

    Edit WS Settings - Name, Discovery: Yes / Yes / No

    Approve Member [When joining from the "Invite Workspace" web link]: Yes / Yes / No

    Force Reauthentication: Yes / Yes / No

  • Change the current directory in Bash/PowerShell to the “cra-scripts” folder.

  • Example command: cd [Path to cra-scripts folder]

  • Adjust the path based on your extraction location.

  • mode = cli

  • pr_url

  • bito_cli.bito.access_key

  • git.provider

  • git.access_token

  • dependency_check

  • dependency_check.snyk_auth_token

  • review_scope

  • exclude_branches

  • exclude_files

  • exclude_draft_pr

  • Agent Configuration: bito-cra.properties File

    Open your Vim or Neovim configuration file and add the following lines:

    Save the configuration file and restart your editor or run :source ~/.vimrc (for Vim) or :source ~/.config/nvim/init.vim (for Neovim) to load the changes.

    Step 4: Verify the Installation

    Open Vim or Neovim, and you should now have the "vim-bitoai" plugin installed and ready to use.

    Usage

    You can use its powerful features once you have installed the "vim-bitoai" plugin. Here are some of the available commands:

    • BitoAiGenerate: Generates code based on a given prompt.

    • BitoAiGenerateUnit: Generates unit test code for the selected code block.

    • BitoAiGenerateComment: Generates comments for methods, explaining parameters and output.

    • BitoAiCheck: Performs a check for potential issues in the code and suggests improvements.

    • BitoAiCheckSecurity: Checks the code for security issues and provides recommendations.

    • BitoAiCheckStyle: Checks the code for style issues and suggests style improvements.

    • BitoAiCheckPerformance: Analyzes the code for performance issues and suggests optimizations.

    • BitoAiReadable: Organizes the code to enhance readability and maintainability.

    • BitoAiExplain: Generates an explanation for the selected code.

    To execute a command, follow these steps:

    1. Open a file in Vim or Neovim that you want to work on.

    2. Select the code block you want to act on. You can use visual mode or manually specify the range using line numbers.

    3. Execute the desired command by running the corresponding command in command mode. For example, to generate code based on a prompt, use the :BitoAiGenerate command. Note: Some commands may prompt you for additional information or options.

    4. The plugin will communicate with the Bito CLI and execute the command, providing the output directly within your editor.

    By leveraging the "vim-bitoai" plugin, you can directly harness the power of Bito's AI capabilities within your favorite Vim or Neovim editor. This integration lets you streamline your software development process, saving time and effort in repetitive tasks and promoting efficient coding practices.

    Customization

    The "vim-bitoai" plugin also offers customization options tailored to your specific needs. Here are a few variables you can configure in your Vim or Neovim configuration file:

    • g:bito_buffer_name_prefix: Sets the prefix for the buffer name in the Bito history. By default, it is set to 'bito_history_'.

    • g:vim_bito_path: Specifies the path to the Bito CLI executable. If the Bito CLI is not in your system's command path, you can provide the full path to the executable.

    • g:vim_bito_prompt_{command}: Allows you to customize the prompt for a specific command. Replace {command} with the desired command.

    To define a custom prompt, add the following line to your Vim or Neovim configuration file and replace your prompt with the desired prompt text:

    Remember to restart your editor or run the appropriate command to load the changes.

    We encourage you to explore the "vim-bitoai" plugin and experience the benefits of seamless integration between Bito and your Vim or Neovim editor. Feel free to contribute to the repository or provide feedback to help us further improve this plugin and enhance your coding experience.

    https://github.com/gitbito/CLI
    def add(x, y):
        return x + y
    def subtract(x, y):
        return x - y
    def add_and_print(x, y):
        result = x + y
        print(result)
        return result
    git clone https://github.com/zhenyangze/vim-bitoai.git
    
    " Vim Plug
    Plug 'zhenyangze/vim-bitoai'
    
    " NeoBundle
    NeoBundle 'zhenyangze/vim-bitoai'
    
    " Vundle
    Plugin 'zhenyangze/vim-bitoai'
    if !exists("g:vim_bito_prompt_{command}")
        let g:vim_bito_prompt_{command}="your prompt"
    endif

    From here you can start the indexing process by clicking on the “Start Indexing” button. Here, you will also see the total indexable size of the repository. Read more about What is Indexable Size?


    • Regenerate the answer: CTRL + L

    • Modify the prompt for the selected Q&A (Bito copies the prompt into the chatbox so you can modify it as needed): CTRL + U

    • Open Bito Panel (toggles the Bito Panel on and off in JetBrains IDEs; in Visual Studio Code, opens the Bito panel if not already open): SHIFT + CTRL + O

    • Put the cursor in the chatbox when the Bito panel is in focus: SPACEBAR (or start typing your question directly)

    • Execute the chat command: ENTER

    • Add a new line in the chatbox: CTRL + ENTER or SHIFT + ENTER

    • Modify the most recently executed prompt (copies the last prompt into the chatbox for any edits): CTRL + M

    • Expand and collapse the "Shortcut" panel: WINDOWS: CTRL + ⬆️ / ⬇️, MAC: CTRL + SHIFT + ⬆️ / ⬇️

    • Navigate between the Question/Answer blocks (note: you must first select the Q/A container with TAB/SHIFT+TAB): ⬆️ / ⬇️

    • Copy the answer to the clipboard: CTRL + C

    • Insert the answer in the code editor: CTRL + I

    • Toggle the diff view (when Diff View is applicable): CTRL + D

    • Expand/collapse the code block in the question: WINDOWS: CTRL + ⬆️ / ⬇️, MAC: CTRL + SHIFT + ⬆️ / ⬇️

    https://www.jetbrains.com/help/idea/configuring-keyboard-and-mouse-shortcuts.html

    From here you can start the indexing process by clicking on the “Start Indexing” button given in front of your current project. Here, you will also see the total indexable size of the repository. Read more about What is Indexable Size?

  • Spec-driven development — Automatically generate highly detailed, implementation-ready technical requirement documents (TRDs) and low-level designs (LLDs) with a deep, context-aware understanding of your codebase, services, and design patterns, ensuring architectural integrity and consistency at a granular level.

    • Watch demo video

  • Triaging production issues — Easily and quickly find root causes to production issues based on errors/logs/etc.

    • Watch demo video

  • Faster onboarding — New engineers or AI agents can quickly understand how a system or component is structured.

  • Enhanced documentation and diagramming — Through its internal understanding of interconnections between modules and APIs.

  • Smarter code reviews — Reviews with system-wide awareness of dependencies and impacts.

  • Bito-hosted version – Use the hosted version managed by Bito.

    • Contact [email protected] for a trial

  • Example: Bito’s AI Code Review Agent – One example of AI Architect in action is Bito’s AI Code Review Agent, which uses AI Architect to deliver smarter, context-aware code reviews directly in your pull requests and IDEs.

  • Watch demo video
    Integrate via MCP server
    Claude Code
    Cursor
    Windsurf
    GitHub Copilot (VS Code)
    See the installation instructions
    Join the beta
    Get a demo
    [email protected]
  • localchanges: Review only the changes you’ve made in your local workspace that haven’t been staged yet. This is useful for quickly checking your current edits before moving them forward.
  • stagedchanges: Review the changes you’ve staged in Git but haven’t committed yet. This helps ensure only clean, well-reviewed updates get committed.

  • uncommittedchanges: Review all modifications that exist locally but aren’t yet committed—both staged and unstaged. Ideal for a full review of your current working directory.

  • path: Review a specific file or multiple files by providing their paths. This allows you to target critical files without running a review across your entire project.

  • commitId: Review one commit or a range of commits by referencing their commit IDs. Perfect for validating code history, checking incremental updates, or reviewing PR-related commits.

  • uncommittedchanges

  • path

  • Unsupported options will be hidden automatically.

    Submit to get the code review feedback.
    VS Code
    JetBrains
    Cursor
    Windsurf
    Read documentation
    Get a 14-day FREE trial of Bito's AI Code Review Agent.

    Seat assignment mode:

    • Auto (Assign & Buy): In this mode, available seats will be automatically assigned to developers (marked as Eligible) when they join the workspace or when they submit their first pull request reviewed by Bito. If all seats are assigned, a new seat is purchased and assigned automatically.

      • Note: This mode is useful for dynamic teams where new contributors are added frequently.

      • Note: This is the default mode for all new workspaces.

    • Auto (Assign only): In this mode, available seats will be automatically assigned to developers (marked as Eligible) when they join the workspace or when they submit their first pull request reviewed by Bito. If no seats are available, Bito will not purchase additional seats, and the developer will not gain access to Bito features.

    • Manual: In this mode, workspace admins need to manually purchase and assign seats as needed. Bito will review pull requests only for submitters who have an assigned seat.

      • Note: This mode is ideal for teams that want tighter control over who gets access and when billing occurs.

  • + Billing contact: (Checkbox) Receives billing-related communications.

  • Billing only: (Button) Limits the member to billing management tasks.

  • Assign the appropriate role and permissions as needed.

    Learn more
    invite new members
    Follow the AI Architect installation instructions. Upon successful setup, you will receive a Bito MCP URL and Bito MCP Access Token that you need to enter in your coding agent.
  • Download the BitoAIArchitectGuidelines.md file. You will need to copy/paste the content from this file later when configuring AI Architect.

    • Note: This file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server. The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

  • Set up AI Architect

    Follow the setup instructions for your operating system:

    • Windows

    • macOS/Linux

    Windows

    1

    Create Cursor config directory

    1. Press Win + R

    2. Type: %USERPROFILE%\.cursor

    3. Press Enter

    If the folder doesn't exist, create it:

    1. Open File Explorer

    2. Navigate to %USERPROFILE%

    3. Create new folder: .cursor

    2

    Create or edit mcp.json

    1. Open %USERPROFILE%\.cursor\mcp.json in a text editor.
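The JSON body for mcp.json is not shown here; below is a minimal sketch, assuming Cursor's standard `mcpServers` schema and the placeholder names used elsewhere in this guide (the server name `bito-ai-architect` is illustrative):

```json
{
  "mcpServers": {
    "bito-ai-architect": {
      "url": "<Your-Bito-MCP-URL>",
      "headers": {
        "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
      }
    }
  }
}
```

Replace `<Your-Bito-MCP-URL>` and `<Your-Bito-MCP-Access-Token>` with the values you received after completing the AI Architect setup.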

    3

    Add guidelines (optional but highly recommended)

    The BitoAIArchitectGuidelines.md file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server.

    The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    4

    Restart Cursor

    1. Close Cursor completely

    2. Reopen Cursor

    macOS/Linux

    1

    Create Cursor config directory
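The original command for this step is not shown; on macOS/Linux the directory can typically be created with a single command (a sketch, assuming the default home-directory location):

```shell
# Create the Cursor configuration directory if it doesn't already exist
mkdir -p ~/.cursor
```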

    2

    Create or edit mcp.json

    Add this content:
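The content block is missing here; a minimal sketch of the mcp.json body, assuming Cursor's standard `mcpServers` schema (the server name `bito-ai-architect` is illustrative):

```json
{
  "mcpServers": {
    "bito-ai-architect": {
      "url": "<Your-Bito-MCP-URL>",
      "headers": {
        "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
      }
    }
  }
}
```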

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token

    Save and exit (Ctrl+O, Enter, Ctrl+X)

    3

    Add guidelines (optional but highly recommended)

    The BitoAIArchitectGuidelines.md file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server.

    The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    4

    Restart Cursor

    1. Close Cursor completely

    2. Reopen Cursor

    Troubleshooting Cursor

    Server not showing:

    Connection errors:

    • Verify Bito MCP URL and Bito MCP Access Token are correct.

    • Test endpoint with MCP protocol:
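The test command itself is not shown above; one way to exercise the endpoint is a JSON-RPC `tools/list` request over HTTP. This is a sketch, assuming the server accepts bearer-token authentication:

```shell
curl -sS -X POST "<Your-Bito-MCP-URL>" \
  -H "Authorization: Bearer <Your-Bito-MCP-Access-Token>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```

A successful response should be a JSON-RPC result listing the tools the server exposes; an authentication error usually means the access token is wrong or expired.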

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    • Check Settings → Tools & MCP for error messages

    AI Architect

  • Navigate to the Code Review > Repositories dashboard.

  • Click the Settings button next to the Agent instance you wish to modify.

  • Under Review tab, enable the Generate interaction diagrams option.

  • Once enabled, Bito will automatically post interaction diagrams during code reviews.

    Understanding sequence diagrams

    A sequence diagram is a type of visual diagram that shows how different parts of your system interact with each other over time.

    It illustrates the flow of operations by displaying the order in which methods are called and how data flows between different components.

    This makes it easy to trace the execution path of your code and understand dependencies between modules.

    Diagram components

    Boxes

    The main components in the diagram are displayed as boxes. The level of detail shown depends on the size of your code changes:

    • Small changes: Boxes may represent individual classes or functions with detailed interactions

    • Large changes: Boxes may represent higher-level abstractions for better readability

    Bito's AI automatically determines the appropriate level of detail based on your pull request.

    Labels and indicators

    Within boxes, you'll see labels that provide quick insights:

    Change type:

    Indicates what kind of modification was made to each module in your codebase.

    • 🟩 Added - New code introduced to the codebase. These are components, functions, or classes that didn't exist before this pull request.

    • 🔄 Updated - Existing code that has been modified. This indicates changes to the logic, behavior, or implementation of existing components.

    • Deleted - Code that has been removed from the codebase. These components are no longer present after this pull request is merged.

    Impact level:

    Shows the scope and significance of changes to help you prioritize your code review efforts.

    • Low - Minimal impact (● ○ ○)

      • Changes are localized and unlikely to affect other parts of the system. Safe to review with standard attention.

    • Medium - Moderate impact (● ● ○)

      • Changes affect multiple components or have moderate complexity. Requires careful review of interactions and side effects.

    • High - Significant impact (● ● ●)

      • Changes are extensive or critical, affecting core functionality or multiple system areas. Demands thorough review and testing.

    These visual indicators help you identify critical changes at a glance.

    Arrows and flow

    Solid arrows (→): Represent forward calls flowing left to right

    • Example: If main() calls UserService, a solid arrow points from main() to UserService

    Dotted arrows (⇢): Represent return flows going right to left

    • Example: When UserService returns data to main(), a dotted arrow points back from UserService to main()

    Circular arrows (↻): Indicate internal calls within the same module

    • Example: One component of UserService calling another component within UserService

    Control flow blocks

    Alt block (if-else logic)

    • Displayed as a dotted box around multiple lines

    • Contains two sections separated by a dotted line representing "if" and "else" branches

    • Shows conditional execution paths in your code

    Opt block (optional parameters)

    • Used for functions with parameter overloading

    • Contains a single section for optional execution flow

    • Represents code that may or may not execute depending on optional parameters

    Code outside these blocks represents the normal execution flow.
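The notation above can be illustrated with a small Mermaid sequence diagram. This is a hypothetical sketch built from the main()/UserService example on this page; Bito's generated diagrams are richer:

```mermaid
sequenceDiagram
    participant main
    participant UserService
    main->>UserService: getUser(id)
    alt user found
        UserService-->>main: user data
    else not found
        UserService-->>main: error
    end
```

The solid arrow is the forward call, the dotted arrows are return flows, and the alt block shows the two conditional branches.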

    Platform-specific behavior

    GitHub

    • Diagrams are posted in Mermaid format

    • Interactive controls available:

      • Pan (move top, bottom, left, right)

      • Expand/collapse

      • Zoom in/out

    GitLab

    • Diagrams are posted in Mermaid format

    • Note: For very large diagrams, GitLab may not render automatically. You'll see a notice box with a "Display" button - click it to manually render the diagram

    Bitbucket

    • Diagrams are posted as image format

    Note: If you see a "syntax error" or "unable to render" message, try refreshing the page.

    Incremental reviews

    When you run incremental reviews (for example, by using the /review command in pull request comments), the existing interaction diagram will be updated rather than creating a new comment with a separate diagram.

    Interaction diagram vs impact analysis diagram

    Bito can generate two types of diagrams, but only one is displayed at a time:

    • Interaction diagram: Generated by the standard Code Review Agent, focusing on code changes in the current pull request

    • Impact analysis diagram: Generated using Bito AI Architect with complete cross-repository codebase understanding.

      • Note: This feature is not publicly available yet. Please contact Bito at [email protected] to have it enabled for your account.

    Note: If both Impact Analysis and Interaction Diagram are enabled, only the Impact Analysis diagram will be shown.

    Best practices

    • Review the diagram before diving into code details to get a high-level understanding

    • Use impact indicators to prioritize which changes need closer examination

    • Follow the arrow flows to understand the execution path

    • Pay special attention to "High" impact modules

    Troubleshooting

    • Diagram not appearing: Verify that "Generate interaction diagrams" is enabled in Bito Cloud settings

    • Rendering issues:

      • In GitLab, you may need to click the Display button to manually render the diagram.

      • Refresh the page - this often resolves transient rendering errors.

    • Syntax errors: In some cases, the Mermaid diagram may contain syntax errors that prevent it from rendering. Try updating the pull request so the diagram is regenerated.

    AI Code Review Agent
    Interaction diagram by Bito
    Interaction Diagram by Bito

    When you create a pull request, Bito automatically:

    1. Detects Jira ticket references in your pull request description, title, or branch name

    2. Crawls the linked Jira tickets to extract requirements from issue descriptions and related Stories/Epics

    3. Analyzes your code changes against these requirements

    4. Provides structured validation results directly in your pull request comments

    Jira integration options in Bito

    Bito supports two ways to connect with Jira, depending on where your Jira instance is hosted:

    1. Jira Cloud: for Jira sites hosted by Atlassian (e.g., https://mycompany.atlassian.net).

    2. Jira Data Center: for Jira instances hosted on your own company domain or servers (e.g., https://jira.mycompany.com).

    Connect Bito with Jira Cloud (hosted by Atlassian)

    1

    Connect Bito to Jira

    1. Navigate to the Manage integrations page in your Bito dashboard

    2. In the Available integrations section, you will see Jira. Click Connect to proceed.

    3. Select the option Jira Cloud. You will be redirected to the official Jira website, where you need to grant Bito access to your Atlassian account.

    4. Click Accept to continue. If the integration is successful, you will be redirected back to Bito.

    2

    Agent-specific settings

    After completing the initial setup, you can control Jira integration on a per-agent basis:

    1. Go to the Code Review > Repositories page in your Bito dashboard.

    Note: The Functional validation feature must be enabled in your Bito agent settings for the integration to work.

    Connect Bito with Jira Data Center (hosted on your own server)

    1

    Connect Bito to Jira

    1. Navigate to the Manage integrations page in your Bito dashboard

    2. In the Available integrations section, you will see Jira. Click Connect to proceed.

    3. Select the option Jira Data Center (self-managed).

    4. Provide connection details:

      • Domain URL: Enter the base URL for your Jira instance (e.g. https://jira.mycompany.com).

      • Personal Access Token: Enter a valid Personal Access Token with admin permissions. Refer to the official Jira documentation to learn how to create a Personal Access Token.

    5. Click Connect to Jira. You will be redirected to your self-hosted Jira website, where you need to grant Bito access to your Jira account.

    6. Click Allow to continue. If the integration is successful, you will be redirected back to Bito.

    2

    Agent-specific settings

    After completing the initial setup, you can control Jira integration on a per-agent basis:

    1. Go to the Code Review > Repositories page in your Bito dashboard.

    Linking Jira tickets to pull requests

    Bito offers multiple ways to link your Jira tickets with pull requests. You can use any of these methods:

    Method 1: Branch name

    Name your source branch using the Jira issue key:

    Method 2: Pull request description

    Include the Jira ticket reference in your PR description:

    OR

    Method 3: Pull request title

    Include the Jira issue key in your PR title:
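The inline examples for these methods are not shown above. As an illustration, assuming a hypothetical issue key QP-123, the three methods might look like:

```
Branch name:      QP-123-add-login-endpoint
PR description:   This PR implements the login endpoint. Resolves QP-123.
PR title:         QP-123: Add login endpoint
```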

    Understanding the validation results

    When Bito completes its analysis, it adds a "Functional Validation by Bito" table to your pull request comments. This table contains four columns:

    Source

    Displays the Jira issue key (e.g., "QP-11", "QP-123") that references the specific Jira ticket being validated.

    Requirement / Code Area

    Shows a brief description of the requirement or task that needs to be completed, summarizing what needs to be done according to the Jira ticket.

    Status

    Indicates the completion status of each requirement:

    • Met: The requirement has been fully implemented in the pull request

    • Missed: The requirement has not been addressed in the pull request

    • Partial: The requirement has been partially implemented but still needs additional work

    Notes

    Provides detailed information about the validation status:

    • For "Met" items: Explains what has been successfully implemented

    • For "Missed" items: Describes what is missing and needs to be addressed

    • For "Partial" items: Details what has been completed and what still remains to be done

    Example validation output

    Here's what a typical validation table looks like:

    Benefits

    • Automated quality assurance: Ensure code changes meet specified requirements

    • Improved collaboration: Bridge the gap between project management and development

    • Reduced manual reviews: Bito AI automatically catches missing implementations during code review

    • Better traceability: Maintain clear links between requirements and code changes

    By leveraging Bito's Jira integration, your development team can maintain higher code quality while ensuring that all requirements are properly addressed in every pull request.

    Best practices

    For developers

    • Always reference Jira tickets in your pull requests using one of the supported methods

    • Review the validation table and address any "Missed" or "Partial" items before merging

    For teams

    • Ensure Jira tickets contain clear, detailed requirements

    • Use consistent naming conventions for branches and pull request titles

    • Enable functional validation for all relevant agents

    Troubleshooting

    Validation table not appearing:

    • Check that your Jira integration is properly configured in the Manage integrations page

    • Verify that Functional validation is enabled in your agent settings

    • Ensure your pull request contains valid Jira issue key references

    Incorrect validation results:

    • Review your Jira ticket descriptions for clarity and completeness

    • Verify that linked Stories/Epics contain relevant requirements

    • Check that your code changes are in the expected areas

    Enterprise Plan
    Why is the index creation taking a long time?

    Bito takes time to thoroughly read and understand your entire repository. This is completely normal; a large repository can take several hours to index.

    Bito usually takes around 12 minutes per 10 MB of code to understand your repo, so, for example, a 50 MB repository would take roughly an hour.

    Why is the answer not complete?

    There is a limit on the amount of memory/context that can be used at a time to answer the question, so the answers sometimes may not cover all the code. To solve for this, restrict the questions by providing additional criteria like:

    • In my code explain message_tokens in ai/request.js

    Where can I see the status of my Index?

    Open your project in VS Code or JetBrains IDEs. From the Bito plugin pane, click the laptop icon located in the top-right corner.

    On this tab, you will see the status of your current project as well as the status of any other project that you indexed previously.

    List of Indexing Statuses:

    • Not Indexed: A new project that you have not started indexing yet.

    • Indexing in progress: A project that is currently being indexed.

    • Indexing is paused: A project that was previously being indexed but is now paused for some reason. Generally, if you close the IDE while the project is being indexed, its status will change from "Indexing in progress" to "Indexing is paused".

    • Indexed: A project that has already been indexed, and Bito is ready to answer any questions about it.

    What happens if my IDE got closed while indexing is in progress?

    If you close Visual Studio Code or a JetBrains IDE (e.g., PyCharm) while indexing is in progress, don’t worry. Indexing will be paused and will automatically continue from where it left off when you reopen the IDE. Currently, indexing resumes 5-10 minutes after the IDE is reopened.

    How to delete project index from IDE?

    1. To delete an index, navigate to the "Manage repos" tab.

    2. Next, click on the three dots button located in front of your project’s name, and then select the "Delete" option.

    3. A warning popup box will appear at the bottom of Bito's plugin pane. You can choose to click the "Delete" button to remove the project's index from your system, or click the "Cancel" button to go back.

    How to fix indexing issues in Visual Studio Code and JetBrains IDEs (e.g., IntelliJ IDEA, PyCharm, etc.)?

    Before getting started, please ensure that you have allowed your project sufficient time to be indexed. Bito typically requires approximately 12 minutes for every 10MB of code to understand your repository.

    If for some reason you are struggling to index your project’s folder while using Visual Studio Code or JetBrains IDEs, then follow the below steps to delete the folder that contains all the indexes and try to re-index your project.

    1. Close all JetBrains IDEs and VS Code instances where Bito is installed.

    2. Go to your users directory. For example, on Windows it will be something like C:\Users\<your username>

    3. Now, find .bito folder and delete it. (Note: All configuration settings and project indexes created by Bito will be deleted. You will also be logged out from Bito IDE plugin)

    If Windows is installed on a drive other than “C”, you will need to locate the .bito folder on that drive instead.

    1. Once you have deleted the .bito folder, open your project in the IDE again.

    2. After restarting the IDE, you will need to enter your email ID and a 6-digit code to log in. Once you're logged in, select the workspace that has an active paid subscription.

    3. After that, when Bito asks if you wish to index the folder, you can select "Maybe later".

    4. Then, navigate to the "Manage repos" tab in the Bito plugin pane, where you should see the folder name listed under the "Current project" along with its size, indicating that it is not indexed. Since you have deleted the .bito folder, the "Other projects" section will no longer display any entries.

    5. Finally, click on "Start Indexing" and it should begin indexing the folder.

    For testing purposes, we suggest using a small folder and avoiding changes to it in the IDE until indexing is complete and the folder icon turns green.

    By the way, you can continue using Bito while indexing is in progress in the background.

  • 1 token ≈ ¾ of a word
  • 100 tokens ≈ 75 words or about 1–2 sentences

  • Tokenization Methods

    Imagine you have a sentence: "The quick brown fox jumps over the lazy dog." An LLM would use tokenization to chop this sentence into manageable pieces. Depending on the chosen method (we’ve discussed it in the next section below), this could result in a variety of tokens, such as:

    • Word-level: ["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]

    • Subword-level: ["The", "quick", "brown", "fox", "jumps", "over", "the", "la", "zy", "dog"]

    • Character-level: ["T", "h", "e", " ", "q", "u", "i", "c", "k", " ", ...]

    Each method has its own advantages and disadvantages.
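The word- and character-level splits are easy to mimic. Here is a toy sketch in Python; real LLM tokenizers such as BPE are learned from data and behave differently:

```python
sentence = "The quick brown fox jumps over the lazy dog."

# Word-level: split on whitespace (dropping the trailing period for clarity)
word_tokens = sentence.rstrip(".").split()

# Character-level: every character, including spaces, becomes a token
char_tokens = list(sentence)

print(word_tokens)      # ['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
print(char_tokens[:5])  # ['T', 'h', 'e', ' ', 'q']
```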

    Word-level tokenization is straightforward and aligns with the way humans naturally read and write text. It is effective for languages with clear word boundaries and for tasks where the meaning is heavily dependent on the use of specific words. However, this method can lead to very large vocabularies, especially in languages with rich morphology or in cases where the text contains a lot of different proper nouns or technical terms. This large vocabulary can become a problem when trying to manage memory and computational efficiency.

    Subword-level tokenization, often implemented through methods like Byte Pair Encoding (BPE) or SentencePiece, addresses some of the issues of word-level tokenization. By breaking down words into more frequently occurring subunits, this method allows the model to handle rare or out-of-vocabulary (OOV) words more gracefully. It balances the vocabulary size and the ability to represent the full range of text seen during training. It can also be more effective for agglutinative languages (like Turkish or Finnish), where you can combine many suffixes with a base word, leading to an explosion of possible word forms if using word-level tokenization.

    Character-level tokenization has the advantage of the smallest possible vocabulary. Since it deals with characters, it is very robust to misspellings and OOV words. However, because it operates at such a fine-grained level, it may require more complex models to understand higher-level abstractions in the text. Models may need to be larger or more complex to learn the same concepts that could be learned with fewer parameters at higher levels of tokenization.

    Beyond these, there are other tokenization methods such as:

    • Byte-level: Similar to character-level, but treats the text as a sequence of bytes, which can be useful for handling multilingual text uniformly.

    • Morpheme-level: Breaks words down into morphemes, which are the smallest meaningful units of language. This can be useful for capturing linguistic nuances but requires sophisticated algorithms to implement effectively.

    • Hybrid approaches: Some models use a combination of the above methods, often starting with a larger unit and then falling back to smaller units when the first approach does not work.

    The choice of tokenization can affect not just the performance of an LLM but also its understanding of the text. For example, using a subword tokenizer that never breaks down "dog" into smaller pieces ensures that the model always considers "dog" as a semantic unit. In contrast, if "dog" could be broken down into "d" and "og", the model might lose the understanding that "dog" represents an animal.

    Tokens and Model Costs

    The complexity and number of tokens directly impact the computational horsepower needed to run AI models. More tokens generally mean more memory and processing power, which translates to higher costs.

    When you use services like OpenAI's GPT models, you're charged based on the number of tokens processed. With different rates for different models (like Davinci or Ada), budgeting for AI usage can get tricky. This makes the choice of tokenization method not just a technical decision but also a financial one.
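Since billing is per token, a rough budget estimate is just multiplication. The sketch below uses a made-up rate; check your provider's current pricing for the model you use:

```python
PRICE_PER_1K_TOKENS = 0.01  # hypothetical rate in USD; real prices vary by model


def estimate_cost(num_tokens: int, price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Estimate the cost of processing `num_tokens` tokens."""
    return num_tokens / 1000 * price_per_1k


# A 150,000-token workload at $0.01 per 1K tokens costs $1.50
print(estimate_cost(150_000))  # 1.5
```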

    Overcoming the Token Limit Challenge

    A crucial point about LLMs is that they can only handle a limited number of tokens at once—this is their token limit. The more tokens they can process, the more complex the tasks they can handle.

    Imagine asking an AI to write a novel in one go. If the token limit is low, it might only manage a chapter. If it's high, you could get a full book, but it might take ages to write. It's all about finding the balance between performance and practicality.

    Here’s the token limits chart of popular LLMs.

    | Model Name | Context Window | Max Output Tokens |
    |---|---|---|
    | GPT-3.5 Turbo | 16,385 tokens | 4,096 tokens |
    | GPT-3.5 Turbo Instruct | 4,096 tokens | 4,096 tokens |
    | GPT-4 | 8,192 tokens | 8,192 tokens |
    | GPT-4o | | |

    But what happens when you have more to say than the token limit allows?

    5 Strategies to Beat Token Limits

    1. Truncation: The most straightforward approach is to cut the text down until it fits the token budget. However, this is like trimming a picture; you lose some of the scenes.

    2. Chunk Processing: Break your text into smaller pieces, process each chunk separately, and stitch the results together. It's like watching a series of short clips instead of a full movie.

    3. Summarization: Distill your text to its essence. For example, "It's sunny today. What will the weather be like tomorrow?" can be shortened to "Tell me tomorrow's weather."

    4. Remove Redundant Terms: Cut out the fluff—words that don't add significant meaning (like "the" or "and"). This streamlines the text but beware, over-pruning can alter the message.

    5. Fine-Tuning Language Models: Custom-train your model on specific data to get better results with fewer tokens. It’s like prepping a chef to make a dish they can cook blindfolded.
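Strategy 2 (chunk processing) can be sketched in a few lines. Using the rule of thumb from above (1 token ≈ ¾ of a word), this toy helper splits text into word chunks that should fit a given token budget; it is a rough sketch, not a real tokenizer:

```python
def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Split text into chunks of roughly max_tokens tokens each."""
    words = text.split()
    max_words = max(1, int(max_tokens * 0.75))  # 1 token ≈ 3/4 of a word
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


chunks = chunk_text("one two three four five six seven eight", max_tokens=4)
print(chunks)  # ['one two three', 'four five six', 'seven eight']
```

Each chunk can then be sent to the model separately and the results stitched together.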

    Conclusion

    Tokens are much more than jargon—they're central to how language models process and understand our queries and commands.

    Understanding tokens and their role in AI language processing is fundamental for anyone looking to leverage the power of LLMs in their work or business. By grasping the basics of tokenization and its impact on computational requirements and costs, users can make informed decisions to balance performance with budget.

  • Abstract Syntax Trees (ASTs)

  • Symbol indexes

  • Local dependency relationships

    This allows it to perform strong, context-aware code reviews within a single repository, including:

    • Identifying issues in the diff

    • Understanding dependencies inside the repo

    • Checking for consistency and correctness within that project

    • Suggesting improvements based on local patterns

    However, the agent’s visibility stops at the repository boundary. It cannot detect effects on other services or codebases.

    AI Code Review Agent powered by AI Architect

    When AI Architect is enabled, the AI Code Review Agent gains a complete view of your entire engineering ecosystem.

    AI Architect builds a cross-repository knowledge graph that maps:

    • All services

    • Shared libraries

    • Modules and components

    • Inter-service dependencies

    • Upstream and downstream call chains

    With this system-level understanding, the agent can perform much deeper analysis.

    Key capabilities unlocked by AI Architect

    1. Cross-repository awareness

    The agent understands how code in one repo interacts with code in others — crucial for microservices and distributed systems.

    2. Cross-repo impact analysis

    During a pull request review, the agent can identify:

    • What breaks downstream if you change an interface

    • Which services call the function you updated

    • Which teams or repos depend on your changes

    • Whether the update introduces architecture-wide risks

    3. Architecture-level checks

    The agent evaluates your changes not just for correctness, but for their alignment with the overall system design.

    4. Early problem detection across the entire codebase

    Ripple effects, breaking changes, or dependency violations that traditionally appear only in staging or after deployment can now be flagged directly during review.


    Side-by-side comparison

    | Capability | Without AI Architect | With AI Architect |
    |---|---|---|
    | Scope | Single repository | Entire system (multi-repo) |
    | Knowledge graph | Repo-only | Cross-repository, system-wide |
    | AST + symbol analysis | ✅ | ✅ (plus cross-repo linking) |
    | Dependency visibility | Local to repo | |

    AI Code Review Agent
    AI Architect

    Code review analytics

    Get in-depth insights into your code review process.

    The user-friendly Code Review Analytics dashboards help you track key metrics such as pull requests reviewed, issues found, lines of code reviewed, and understand individual contributions.

    This helps you identify trends and optimize your development workflow.

    Code Review Analytics dashboard

    Bito provides four distinct analytical views to help you understand your code review performance from multiple perspectives:

    1. Overview: High-level workspace metrics and trends

    2. Submitter Analytics: Individual contributor performance and patterns

    3. Repository Analytics: Repository and language-specific insights

    4. PR Analytics: Detailed pull request and issue tracking

    "Overview" dashboard

    The Overview dashboard provides a comprehensive high-level view of your workspace's code review performance, showing pull requests reviewed, issues found, and their categorization.

    Key metrics:

    • Code Requests Reviewed - This Month: Total number of code reviews completed by Bito, including both pull requests from git workflows and IDE-based reviews

    • Lines Reviewed - This Month: Total lines of code analyzed across all pull request diffs

    • Repositories Reviewed - This Month: Number of unique repositories that received code review coverage

    • Submitters - This Month

    Use the Filters button (top-right) to customize your view. You can also export the data to PowerPoint or PDF using the Share menu button (top-right).

    "Submitter Analytics" dashboard

    The Submitter Analytics dashboard helps you gain insights into individual contributor patterns and performance with user-level statistics and visualizations.

    Key metrics:

    • Pull Requests Reviewed - This Month: Number of pull requests reviewed for each developer. It helps you identify most active team members.

      • Shows top 30 contributors by pull request count

      • Remaining contributors aggregated under 'Other'

    • Lines of Code Reviewed - This Month: Lines of code reviewed by Bito per developer. It is useful for understanding workload distribution.

      • Displays contributors with minimum 100 lines reviewed

      • Top 30 contributors shown individually

      • Remaining contributors grouped under 'Other'

    • Issues Reported Per 1K Lines - This Month: Issue density normalized by code volume for developers with at least 1,000 lines of code, enabling fair comparison across different contribution levels. It helps identify patterns in code quality by developer.

    • Issue Distribution by Category - This Month: Breakdown of issues by type for each developer, showing both total count and percentage. Categories with fewer than 5 issues are excluded, with bar height representing total issues and width showing percentage distribution. It helps identify individual strengths and areas for improvement.

    Use the Filters button (top-right) to customize your view. You can also export the data to PowerPoint or PDF using the Share menu button (top-right).
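    As a toy illustration of the per-developer rollup behind these charts (a sketch only; the record format and names are hypothetical, not Bito's actual data model):

```python
from collections import Counter

# Hypothetical review records: (developer, lines reviewed by Bito in one PR)
reviews = [("alice", 120), ("bob", 300), ("alice", 80), ("carol", 50)]

# Sum reviewed lines per developer
lines_by_dev = Counter()
for dev, lines in reviews:
    lines_by_dev[dev] += lines

# Rank contributors highest-volume first, as the dashboard does
for dev, total in lines_by_dev.most_common():
    print(f"{dev}: {total}")
```

    The dashboard applies the same idea at scale, showing the top 30 contributors individually and folding the rest into 'Other'.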

    "Repository Analytics" dashboard

    The Repository Analytics dashboard helps you understand repository-level performance and language-specific trends across your codebase.

    Key metrics:

    • Pull Requests Reviewed - This Month: Review activity across repositories (top 30 shown, remainder grouped as 'Other'). It identifies which codebases receive most attention.

    • Lines of Code Reviewed (Repo) - This Month: Lines of code reviewed by Bito in each repository (top 30 displayed individually). It helps you understand where development effort is concentrated.

    • Lines of Code Reviewed (Language) - This Month: Breakdown of reviewed code by programming language. It is useful for resource allocation and expertise planning.

    • Issues Reported Per 1K Lines (Repo) - This Month: Issue density for repositories with at least 1,000 lines of changes. It identifies repositories that may need additional attention.

    • Issues Reported Per 1K Lines (Language) - This Month: Issue rates across different programming languages (minimum 100 lines required). It helps you identify language-specific training needs.

    • Issue Distribution by Category × Language - This Month: Issues categorized by both type and programming language, with visualization showing total count (bar height) and percentage distribution (bar width). Categories with fewer than 5 issues excluded. It reveals language-specific issue patterns.

    • Issue Distribution by Category × Repo - This Month: Issues analyzed across category and repository dimensions, excluding categories with fewer than 5 issues. The visualization shows total issues (bar height) and percentage distribution (bar width). It identifies repository-specific issue trends.

    Use the Filters button (top-right) to customize your view. You can also export the data to PowerPoint or PDF using the Share menu button (top-right).

    "PR Analytics" dashboard

    The PR Analytics dashboard helps you dive deep into individual pull request performance with detailed pull request and issue-level analytics.

    The dashboard organizes pull requests into three tabs:

    1. "Reviewed (Feedback)" tab

    • Shows pull requests where Bito provided actionable feedback

    • These pull requests contain issues that require your attention

    • Click any pull request to access comprehensive details including every feedback item with its category (Security, Performance, Linter, Functionality, etc.), affected programming language, and direct links to the specific code location within the pull request for quick reference.

    • Useful for tracking reviews that generated value

    2. "Reviewed (No Feedback)" tab

    • Shows pull requests that Bito reviewed but found no actionable issues

    • Indicates clean code submissions

    3. "Skipped" tab

    • Shows pull requests that Bito didn't review due to configuration settings or other constraints

    • Includes skip reasons for transparency

    Use the Filter button (top-left) to customize views by:

    • Specific submitters

    • Date ranges

    • Pull request status

    Benefits for technical leadership

    The detailed code review analytics reports enable tech leads and reviewers to:

    • Trace patterns: Identify recurring issues across pull requests

    • Spot trends: Recognize systematic problems in code quality

    • Connect insights: Link high-level analytics to specific code examples

    • Targeted mentoring: Provide specific guidance based on actual code issues

    • Process improvement: Adjust development practices based on concrete data

    Best practices for using analytics

    1. Regular review cadence

    • Check Overview metrics for trend monitoring

    • Review Submitter Analytics for team performance discussions

    • Analyze Repository Analytics for strategic planning

    • Use PR Analytics for issue tracking and mentoring

    2. Filtering for insights

    • Use date filters to compare time periods

    • Filter by specific teams or repositories during retrospectives

    • Focus on high-activity contributors or repositories for targeted improvements

    3. Export and sharing

    • Export monthly reports for stakeholder updates

    • Share repository-specific insights with relevant teams

    • Use PowerPoint exports for executive presentations

    • Archive PDF reports for compliance or historical analysis

    4. Action-oriented analysis

    • Identify submitters who might benefit from additional code review training

    • Focus attention on repositories with high issue density

    • Address language-specific patterns through targeted workshops

    • Use acceptance rate trends to validate review effectiveness

    Managing Index Size

    Exclude unnecessary files and folders from your repo to index faster!

    What is Indexable Size?

    Indexable size is the total size of all code files in the folder, excluding the following:

    • Directory/File based filtering

      • logs, node_modules, dist, target, bin, package-lock.json, data.json, build, .gradle, .idea, gradle, extension.js, vendor.js, ngsw.json, polyfills.js, ngsw-worker.js, runtime.js, runtime-main.js, service-worker.js, bundle.js, bundle.css

    • Extension based filtering

      • bin, exe, dll, log, aac, avif, bmp, cda, gif, mp3, mp4, mpeg, weba, webm, webp, oga, ogv, png, jpeg, jpg, wpa, tif, tiff, svg, ico, wav, mov, avi, doc, docx, ppt, pptx, xls, xlsx, ods, odp, odt, pdf, epub, rar, tar, zip, vsix, 7z, bz, bz2, gzip, jar, war, gz, tgz, woff, woff2, eot, ttf, map, apk, app, ipa, lock, tmp, logs, gmo, pt

    • Hidden files (i.e., files starting with ".") are filtered.

    • All empty files are filtered.

    • All binary files are also filtered.

    Is there any limit on repository size?

    For workspaces that have upgraded to Bito's Team Plan, we have set the indexable size limit to 120MB per repo. However, once we launch the "AI that Understands Your Code" feature for our Free Plan users, they will be restricted to repositories with an indexable size limit of 10MB.

    Learn more about indexable size above and see which files and folders are excluded by default.

    You can reduce your repo's indexable size by excluding certain files and folders from indexing using a .bitoignore file and remain within the limit.

    What if a repo hits the 120MB limit?

    If a repo hits the 120MB limit, the error message below is displayed in the "Manage repos" tab and the repo's index status is changed to "Not Indexed".

    Sorry, we don’t currently support repos of this size. Please use .bitoignore to reduce the size of the repo you want Bito to index.

    To fix this issue, follow our instructions regarding .bitoignore to reduce your repo's size and bring it under the max limit of 120MB.

    After that, you must and then restart the indexing by clicking on the "Start Indexing" button shown for the repo folder. You can also follow our step-by-step guides to and IDEs.

    What is .bitoignore and how to use it?

    A .bitoignore file is a plain text file where each line contains a pattern or rule that tells Bito which files or directories to ignore and not index. In other words, it's a way to reduce your repo's indexable size.

    There are two ways to use a .bitoignore file:

    1. Create a .bitoignore file inside the folder where indexes created by Bito are stored, e.g. <user-home-directory>/.bito/localcodesearch/.bitoignore

      • On Windows, this path will be something like: C:\Users\<your username>\.bito\localcodesearch\.bitoignore

      • Note: The custom ignore rules you set in this .bitoignore file will be applied to all the repositories where you have enabled indexing.

    2. Create a .bitoignore file inside your repository's root folder.

    If a .gitignore file is available in your repo, Bito will also use it to ignore files and folders during the indexing process. Both .bitoignore and .gitignore files can work together without any issues.

    At present, Bito considers only those .gitignore files that are placed in the project root directory, and .bitoignore files that are placed either in <user-home-directory>/.bito/localcodesearch or in the project root directory.

    Changes to the .bitoignore file are taken into account at the beginning of the indexing process, not during or after the indexing itself.

    Therefore, to implement changes made to the .bitoignore file, you'll need to and then restart the indexing by clicking on the "Start Indexing" button shown for the repo folder. You can also follow our step-by-step guides to and IDEs.

    Please note that any changes to the .bitoignore or .gitignore file will take a minimum of 3 to 5 minutes to trigger new indexing.

    Common .bitoignore Patterns

    Understanding these patterns/rules is crucial for effectively managing the files and directories that Bito indexes and excludes in your projects.

    Sample Rule | Description
    Engine/ or Engine/** | Ignores all files in the Engine directory and their subdirectories (contents).
    subdirectory1/example.html | Ignores the file named example.html, specifically located in the directory named subdirectory1.
    !contacts.txt | (Negation rule) Explicitly keeps contacts.txt, even if all .txt files are ignored.
    !Engine/Batch/Builds | (Negation rule) Keeps the Builds directory inside Engine/Batch, overriding a broader exclusion.
    !Engine/Batch/Builds/** | (Negation rule) Keeps the Builds directory and all of its subdirectories inside Engine/Batch, overriding a broader exclusion.
    !.java | (Negation rule) Ensures that all .java files are included, overriding any previous ignore rules that might apply to them.

    Negation ! (exclamation mark)

    When a pattern starts with ! it negates the pattern, meaning it explicitly includes files or directories that would otherwise be ignored. For example, have a look at this sample .bitoignore file:
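    For instance, a sample .bitoignore file using this rule (reconstructed from the two patterns discussed here) reads:

```
Engine/**
!Engine/Build/BatchFiles/**
```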

    Here !Engine/Build/BatchFiles/** pattern includes all files in the Engine/Build/BatchFiles directory and its subdirectories, even though Engine/** pattern would ignore them.

    Avoid Ambiguous Patterns: Negation patterns can become confusing when they potentially match multiple files. Target specific files or folders rather than using wildcards in negation patterns.

    For example, it is better to use patterns like !Engine/Build/BatchFiles/script.bat instead of !Engine/Build/BatchFiles/**

    .bitoignore Examples

    Exclude Files/Folders

    Exclude Everything Except Specific Files

    To exempt a file, ensure that the negation pattern ! appears afterward, thereby overriding any broader exclusions.
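    A minimal sketch of this ordering, using gitignore-style patterns (the file name keep-this-file.txt is just a placeholder):

```
*
!keep-this-file.txt
```

    Because rules are evaluated in order, the broad * pattern excludes everything, and the negation that follows re-includes just the named file.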

    Guide for Bitbucket (Self-Managed)

    Integrate the AI Code Review Agent into your self-hosted Bitbucket workflow.

    Speed up code reviews by configuring the AI Code Review Agent with your Bitbucket (Self-Managed) server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

    Get a 14-day FREE trial of Bito's AI Code Review Agent.

    Video tutorial

    Prerequisites

    Before proceeding, ensure you've completed all necessary prerequisites.

    1. Create a Bitbucket Personal Access Token:

    For Bitbucket pull request code reviews, a token with Project Admin permission is required. Make sure that the token is created by a Bitbucket user who has Admin privileges.

    Important: Bito posts comments using the Bitbucket user account linked to the Personal Access Token used during setup. To display "Bito" instead of your name, create a separate user account (e.g., Bito Agent) and use its token for integration.

    You can use the Create Token button that appears once you provide the Hosted Bitbucket URL and your Bitbucket username.

    Or directly visit the URL of your self-hosted Bitbucket.

    To create a token for your user account:

    1. Go to Profile picture > Manage account > HTTP access tokens.

    2. Select Create token.

    3. Set the token name, permissions, and expiry.

    2. Authorizing a Bitbucket Personal Access Token for use with SAML single sign-on:

    If your Bitbucket organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

    For more information, please refer to .

    Installation and configuration steps

    Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

    Step 1: Log in to Bito

    Log in to Bito and select a workspace to get started.

    Step 2: Open the Code Review Agents setup

    Click Code Review Agents under the CODE REVIEW section in the sidebar.

    Step 3: Select your Git provider

    Bito supports integration with the following Git providers:

    • GitHub

    • GitHub (Self-Managed)

    • GitLab

    • GitLab (Self-Managed)

    • Bitbucket (Self-Managed)

    Since we are setting up the Agent for Bitbucket (Self-Managed) server, select Bitbucket (Self-Managed) to proceed.

    Step 4: Connect Bito to Bitbucket

    To enable pull request reviews, you’ll need to connect your Bito workspace to your Bitbucket (Self-Managed) server.

    If your network blocks external services from interacting with the Bitbucket server, whitelist all of Bito's gateway IP addresses in your firewall to ensure Bito can access your self-hosted repositories. The Agent response can come from any of these IPs.

    • List of IP addresses to whitelist:

      • 18.188.201.104

    Enter the details for the following input fields:

    • Hosted Bitbucket URL: This is the domain portion of the URL where your Bitbucket Enterprise server is hosted (e.g., https://bitbucket.mycompany.com). Please check with your Bitbucket administrator for the correct URL.

    • Bitbucket username: This is your Bitbucket username used for login. Please check it from your user profile page or ask your Admin.

    • Personal Access Token: Generate a Bitbucket Personal Access Token with Project Admin permission in your Bitbucket (Self-Managed) account. Ensure you have Bitbucket Admin privileges.

    Important: Bito posts comments using the Bitbucket user account linked to the Personal Access Token used during setup. To display "Bito" instead of your name, create a separate user account (e.g., Bito Agent) and use its token for integration.

    Click Validate to ensure the token is functioning properly.

    If the token is successfully validated, click Connect Bito to Bitbucket to proceed.

    Step 5: Enable AI Code Review Agent on repositories

    After connecting Bito to your Bitbucket self-managed server, you'll see a list of repositories that Bito has access to.

    Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

    To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

    Step 6: Automated and manual pull request reviews

    Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

    1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

    2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

    The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

    Note: To enhance efficiency, automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI request usage.

    To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

    The Include Source/Target Branches filter applies only to automatically triggered reviews. You can still trigger reviews manually via the /review command.

    The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

    Step 7: Specialized commands for code reviews

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to .

    Step 8: Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Step 9: Configure Agent settings

    Agent settings let you control how reviews are performed, ensuring feedback is tailored to your team’s needs. By adjusting the options, you can:

    • Make reviews more focused and actionable.

    • Apply your own coding standards.

    • Reduce noise by excluding irrelevant files or branches.

    • Add extra checks to improve code quality and security.

    Guide for Windsurf

    Integrate Windsurf with AI Architect for more accurate, codebase-aware AI assistance.

    Use Bito's AI Architect with Windsurf to enhance your AI-powered coding experience.

    Once connected via MCP (Model Context Protocol), Windsurf can leverage AI Architect’s deep contextual understanding of your project, enabling more accurate code suggestions, explanations, and code insights.

    Prerequisites

    1. Follow the AI Architect setup instructions. Upon successful setup, you will receive a Bito MCP URL and Bito MCP Access Token that you need to enter in your coding agent.

    2. Download BitoAIArchitectGuidelines.md file. You will need to copy/paste the content from this file later when configuring AI Architect.

      • Note: This file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server. The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    Set up AI Architect

    Follow the setup instructions for your operating system:

    Windows

    1. Create Windsurf config directory

       1. Press Win + R

    macOS/Linux

    1. Create Windsurf config directory

    Troubleshooting Windsurf

    Server not showing:

    Connection errors:

    • Verify Bito MCP URL and Bito MCP Access Token are correct.

    • Test endpoint with MCP protocol:

      curl -s -X POST \
        -H "Authorization: Bearer <Your-Bito-MCP-Access-Token>" \
        -H "Content-Type: application/json" \
        -d '{"jsonrpc":"2.0","method":"initialize","params":{},"id":1}' \
        <Your-Bito-MCP-URL>

      # Should return HTTP 200 with JSON response for valid credentials
      # HTTP 401: Invalid Bito MCP Access Token
      # HTTP 404: Invalid Bito MCP URL

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    • Check Settings → Cascade → MCP Servers for error messages.

    Guide for GitHub

    Integrate the AI Code Review Agent into your GitHub workflow.

    Speed up code reviews by configuring the AI Code Review Agent with your GitHub repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

    Get a 14-day FREE trial of Bito's AI Code Review Agent.

    Video tutorial

    Installation and configuration steps

    Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

    Step 1: Log in to Bito

    Log in to Bito and select a workspace to get started.

    Step 2: Open the Code Review Agents setup

    Click Code Review Agents under the CODE REVIEW section in the sidebar.

    Step 3: Select your Git provider

    Bito supports integration with the following Git providers:

    • GitHub

    • GitHub (Self-Managed)

    • GitLab

    • GitLab (Self-Managed)

    Since we are setting up the Agent for GitHub, select GitHub to proceed.

    This will redirect you to GitHub.

    Step 4: Install the Bito app for GitHub

    To enable pull request reviews, you need to install and authorize Bito’s AI Code Review Agent app.

    On GitHub, select where you want to install the app.

    Grant Bito access to your repositories:

    • Choose All repositories to enable Bito for every repository in your account.

    • Or, select Only select repositories and pick specific repositories using the dropdown menu.

    Bito app uses these permissions:

    • Read access to metadata

    • Read and write access to code, issues, and pull requests

    Click Install & Authorize to proceed. Once completed, you will be redirected to Bito.

    Step 5: Enable AI Code Review Agent on repositories

    After connecting Bito to your GitHub account, you'll see a list of repositories that Bito has access to.

    Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

    To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

    Step 6: Automated and manual pull request reviews

    Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

    1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

    2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

    The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

    Note: To enhance efficiency, automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI request usage.

    To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

    The Include Source/Target Branches filter applies only to automatically triggered reviews. You can still trigger reviews manually via the /review command.

    The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

    Step 7: Specialized commands for code reviews

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to .

    Step 8: Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Step 9: Configure Agent settings

    Agent settings let you control how reviews are performed, ensuring feedback is tailored to your team’s needs. By adjusting the options, you can:

    • Make reviews more focused and actionable.

    • Apply your own coding standards.

    • Reduce noise by excluding irrelevant files or branches.

    • Add extra checks to improve code quality and security.

    Screenshots

    Screenshot # 1

    AI-generated pull request (PR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a pull request.

    Screenshot # 3

    AI code review feedback posted as comments on the pull request.

    Repo level settings

    Configure repository-specific Code Review Agent settings using the .bito.yaml file.

    Repo-level Agent settings let you control how the AI Code Review Agent behaves for each repository.

    By placing a .bito.yaml file in the root of your repository, you can define custom review preferences that apply only to that repository.

    Bito automatically detects the presence of a .bito.yaml file in a repository and applies its configuration to override the global Agent settings defined by admins in the Bito Cloud UI.

    This gives developers fine-grained control while admins maintain global oversight and billing management.
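    The exact schema is defined by Bito, so the snippet below is a purely hypothetical sketch of what a repo-level override file could look like; every key name here is an illustrative placeholder, not the documented .bito.yaml format.

```yaml
# Hypothetical sketch only - these key names are placeholders, not Bito's
# documented .bito.yaml schema. Consult Bito's Agent settings documentation
# for the real keys before using this.
review:
  enabled: true
  exclude_branches:
    - "release/*"
  exclude_files:
    - "*.min.js"
    - "dist/**"
```

    Whatever the real keys are, the mechanism is the one described above: settings in the repo's .bito.yaml override the global Agent settings configured in the Bito Cloud UI.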

    Guide for GitHub (Self-Managed)

    Integrate the AI Code Review Agent into your self-hosted GitHub Enterprise workflow.

    Speed up code reviews by configuring the AI Code Review Agent with your self-managed GitHub Enterprise server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using available commands.

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

    Guide for Claude Code

    Integrate Claude Code with AI Architect for more accurate, codebase-aware AI assistance.

    Use Bito's AI Architect with Claude Code to enhance your AI-powered coding experience.

    Once connected via MCP (Model Context Protocol), Claude Code can leverage AI Architect’s deep contextual understanding of your project, enabling more accurate code suggestions, explanations, and code insights.

    Prerequisites



    : Count of unique developers (based on Git handles) whose pull requests were reviewed by Bito
  • Issues Found - This Month: Total number of issues identified across all reviewed code

  • Issues Categories - This Month: Visual breakdown of issues by primary categories (Security, Performance, Functionality, etc.)

    • Note: When issues span multiple categories, Bito assigns the most relevant primary category

  • Merged PRs - This Month: Number of Bito-reviewed pull requests that were subsequently merged or closed

  • Issues Evaluated for Acceptance Rate - This Month: Issues in merged pull requests evaluated for potential fixes

  • Acceptance Rate (Merged PRs) - This Month: Percentage of agent-identified issues that were potentially addressed

    • Calculated based on code changes detected in related hunks when pull requests were merged

    • Available for reviews conducted on or after August 8th, 2024

    • Note: This is an approximation based on code change detection

  • Pull Requests Skipped - This Month: Pull requests excluded from review due to:

    • Matching exclusion filters in agent configuration

    • Empty diffs

    • Invalid Bito plan status

  • Skip Reason - This Month: Breakdown of why specific pull requests were skipped

    • Displays contributors with minimum 100 lines reviewed

    • Top 30 contributors shown individually

    • Remaining contributors grouped under 'Other'

  • Issues Reported Per 1K Lines - This Month: Issue density normalized by code volume for developers with at least 1,000 lines of code, enabling fair comparison across different contribution levels. It helps identify patterns in code quality by developer

  • Issue Distribution by Category - This Month: Breakdown of issues by type for each developer, showing both total count and percentage. Categories with fewer than 5 issues are excluded, with bar height representing total issues and width showing percentage distribution. It helps identify individual strengths and areas for improvement.

  • Issues Reported Per 1K Lines (Repo) - This Month: Issue density for repositories with at least 1,000 lines of changes. It identifies repositories that may need additional attention
  • Issues Reported Per 1K Lines (Language) - This Month: Issue rates across different programming languages (minimum 100 lines required). It helps you identify language-specific training needs.

  • Issue Distribution by Category × Language - This Month: Issues categorized by both type and programming language, with visualization showing total count (bar height) and percentage distribution (bar width). Categories with fewer than 5 issues excluded. It reveals language-specific issue patterns.

  • Issue Distribution by Category × Repo - This Month: Issues analyzed across category and repository dimensions, excluding categories with fewer than 5 issues. The visualization shows total issues (bar height) and percentage distribution (bar width). It identifies repository-specific issue trends.

  • Pull request status

    Process improvement: Adjust development practices based on concrete data

    Note: The custom ignore rules you set in this .bitoignore file will be applied to all the repositories where you have enabled indexing.
  • Create a .bitoignore file inside your repository's root folder.

  • Engine/ or Engine/**

    Ignores all files in the Engine directory and its subdirectories.

    subdirectory1/example.html

    Ignores the file named example.html, located in the subdirectory1 directory.

    !contacts.txt

    (Negation Rule) Explicitly tracks contacts.txt, even if all .txt files are ignored.

    !Engine/Batch/Builds

    (Negation Rule) Tracks the Builds directory inside Engine/Batch, overriding a broader exclusion.

    !Engine/Batch/Builds/**

    (Negation Rule) Tracks the Builds directory and all of its subdirectories inside Engine/Batch, overriding a broader exclusion.

    !*.java

    (Negation Rule) Ensures that all .java files are included, overriding any previous ignore rules that might apply to them.

    !subdirectory1/*.txt

    (Negation Rule) Tracks files with the .txt extension located in the subdirectory1 directory, even if other rules would otherwise ignore .txt files.

    BitoUtil?.java

    The ? (question mark) matches any single character in a filename or directory name.

    # this is a comment.

    Any line that starts with a # symbol is treated as a comment and is not processed.

    *

    (Wildcard character) Ignores all files

    **

    (Wildcard character) Matches any number of directories.

    todo.txt

    Ignores a specific file named todo.txt

    *.txt

    Ignores all files ending with .txt

    *.*

    Ignores all files with any extension.


    If the file doesn't exist, create it with this content:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    1. If the file exists with other servers, add BitoAIArchitect to the mcpServers object:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    In your project root, create a .cursorrules file.
  • Open this file with a text editor.

  • Copy the contents of your BitoAIArchitectGuidelines.md file into the .cursorrules file.

  • Save.

  • Reopen Cursor
  • Open Settings → Tools & MCP

  • Verify BitoAIArchitect appears in the MCP servers list

    Engine/**
    !Engine/Build/BatchFiles/**
    # Ignore specific file named "config.ini"
    config.ini
    
    # Ignore all files with a '.bak' extension
    *.bak
    
    # Ignore all files with a '.kunal' extension
    *.kunal
    
    # Exclude directories
    backup
    temp/
    dist/vendor
    # Ignore all files except C++, header and python files
    *
    !*.cpp
    !*.h
    !*.py
    nano ~/.cursor/mcp.json
    {
      "mcpServers": {
        "BitoAIArchitect": {
          "url": "<Your-Bito-MCP-URL>",
          "headers": {
            "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
          }
        }
      }
    }
    {
      "mcpServers": {
        "BitoAIArchitect": {
          "url": "<Your-Bito-MCP-URL>",
          "headers": {
            "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
          }
        }
      }
    }
    {
      "mcpServers": {
        "existing-server": {
          ...
        },
        "BitoAIArchitect": {
          "url": "<Your-Bito-MCP-URL>",
          "headers": {
            "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
          }
        }
      }
    }
    Admin privileges. Enter the token into the Personal Access Token input field. You can use the Create Token button that appears once you provide the Hosted Bitbucket URL and your Bitbucket username.

    For guidance, refer to the instructions in the Prerequisites section.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    Type: %USERPROFILE%\.codeium\windsurf
  • Press Enter

  • If the folders don't exist, create them:

    1. Open File Explorer

    2. Navigate to %USERPROFILE%

    3. Create folders: .codeium\windsurf

    Step 2: Create or edit mcp_config.json

    1. Open %USERPROFILE%\.codeium\windsurf\mcp_config.json in a text editor.

    2. If the file doesn't exist, create it with this content:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    1. If the file exists with other servers, add BitoAIArchitect to the mcpServers object:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    Save the file.
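
    The mcp_config.json contents are not shown at this point in the export. Mirroring the Cursor mcp.json configuration shown elsewhere in this guide, a minimal file likely looks like the following; note that some Windsurf versions use serverUrl rather than url for remote servers, so verify against the Windsurf MCP documentation:

    ```json
    {
      "mcpServers": {
        "BitoAIArchitect": {
          "serverUrl": "<Your-Bito-MCP-URL>",
          "headers": {
            "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
          }
        }
      }
    }
    ```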

    Step 3: Add guidelines (optional but highly recommended)

    The BitoAIArchitectGuidelines.md file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server.

    The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    Option A: Global guidelines (applies to all projects):

    Create directory:

    Copy the contents of your BitoAIArchitectGuidelines.md file into the global_rules.md file:

    Option B: Project-level guidelines (applies to specific project):

    In your project directory, create .windsurf\rules directory:

    Copy the contents of your BitoAIArchitectGuidelines.md file into the bitoai-architect.md file:

    Note: Windsurf Wave 8+ uses .windsurf\rules\*.md format for project-level rules. Global guidelines in ~/.codeium/windsurf/memories/global_rules.md are supported in all versions.
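
    The Windows commands for the two options above are not included in this export; a sketch using Command Prompt, assuming BitoAIArchitectGuidelines.md is in your current directory:

    ```
    :: Option A: global guidelines for all projects
    mkdir "%USERPROFILE%\.codeium\windsurf\memories"
    copy BitoAIArchitectGuidelines.md "%USERPROFILE%\.codeium\windsurf\memories\global_rules.md"

    :: Option B: project-level guidelines (run from the project root)
    mkdir .windsurf\rules
    copy BitoAIArchitectGuidelines.md .windsurf\rules\bitoai-architect.md
    ```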

    Step 4: Restart Windsurf

    1. Close Windsurf completely

    2. Reopen Windsurf

    3. Open Settings → Cascade → MCP Servers

    4. Click "Refresh"

    5. Verify BitoAIArchitect appears with green status

    Create or edit mcp_config.json

    Add this content:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    Save and exit (Ctrl+O, Enter, Ctrl+X)

    Step 3: Add guidelines (optional but highly recommended)

    The BitoAIArchitectGuidelines.md file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server.

    The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    Option A: Global guidelines (applies to all projects):

    Create directory:

    Copy the contents of your BitoAIArchitectGuidelines.md file into the global_rules.md file:

    Option B: Project-level guidelines (applies to specific project):

    In your project directory, create .windsurf/rules directory:

    Copy the contents of your BitoAIArchitectGuidelines.md file into the bitoai-architect.md file:

    Note: Windsurf Wave 8+ uses .windsurf/rules/*.md format for project-level rules. Global guidelines in ~/.codeium/windsurf/memories/global_rules.md are supported in all versions.
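
    The shell commands for the two options above are not included in this export; a sketch for macOS/Linux, assuming BitoAIArchitectGuidelines.md is in your current directory:

    ```shell
    # Option A: global guidelines for all projects
    mkdir -p ~/.codeium/windsurf/memories
    cp BitoAIArchitectGuidelines.md ~/.codeium/windsurf/memories/global_rules.md

    # Option B: project-level guidelines (run from the project root)
    mkdir -p .windsurf/rules
    cp BitoAIArchitectGuidelines.md .windsurf/rules/bitoai-architect.md
    ```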

    Step 4: Restart Windsurf

    1. Close Windsurf completely

    2. Reopen Windsurf

    3. Open Settings → Cascade → MCP Servers

    4. Click "Refresh"

    5. Verify BitoAIArchitect appears with green status


    Why use repo-level settings

    Large organizations often have different review needs across projects.

    Centralized (agent-level) settings don’t scale well — especially when each repo has its own coding standards, branch structure, or tooling.

    Repo-level configuration helps by:

    • Enabling custom review behavior per repository.

    • Allowing custom guidelines flexibility at the repo level.

    • Keeping settings version-controlled and transparent.

    How it works

    1. Add a .bito.yaml file to the root of your repository. To get started, download a sample .bito.yaml file.

    2. Add the supported configuration fields (key-value pairs) to specify how the Code Review Agent should behave for that repository.

    3. When the Code Review Agent runs, Bito automatically detects the file and applies those settings for that repository.

    Note: Repo-level overrides are applied only if your workspace admin has enabled “Allow config file settings” in Agent Settings. This option is required for repo-level overrides to take effect and is turned on by default.

    Enabling repo-level overrides

    Admins can manage this from the Agent Settings panel.

    • Setting name: Allow config file settings

    • Description: Enabling this allows repositories to override Agent Settings by placing a .bito.yaml file in the repo root.

    Note: Only workspace admins can toggle this setting from the Bito dashboard (cannot be changed via .bito.yaml file).

    Supported settings in .bito.yaml file

    You can override the following Code Review Agent settings:

    suggestion_mode

    Controls how detailed the review comments are. Choose between Essential and Comprehensive review modes:

    • In Essential mode, only critical issues are posted as inline comments, and other issues appear in the main review summary under "Additional issues".

    • In Comprehensive mode, Bito also includes minor suggestions and potential nitpicks as inline comments.

    Valid values: essential or comprehensive

    post_description

    Automatically creates a summary of changes and appends it to your existing pull request summary. Valid values: true or false

    post_changelist

    Adds a walkthrough section to pull request comments. Valid values: true or false

    include_source_branches

    Source branches defined using comma-separated GLOB or regex patterns for which Bito automatically reviews pull requests. Example: "feature/*, release/*, main"

    include_target_branches

    Target branches defined using comma-separated GLOB or regex patterns for which Bito automatically reviews pull requests. Example: "feature/*, release/*, main"

    exclude_files

    Comma-separated file path GLOB patterns to exclude from code reviews. Example: "*.md, *.yaml, config/*"

    exclude_draft_pr

    Excludes draft pull requests from automatic reviews. Valid values: true or false
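
    Putting the supported fields together, a complete .bito.yaml might look like this (the values are illustrative; use the official sample file described below as the authoritative template):

    ```yaml
    suggestion_mode: "essential"
    post_description: true
    post_changelist: true
    include_source_branches: "feature/*, release/*"
    include_target_branches: "main"
    exclude_files: "*.md, *.yaml, config/*"
    exclude_draft_pr: true
    ```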

    Sample .bito.yaml file

    Download .bito.yaml file

    From GitHub:

    You can download a sample .bito.yaml configuration file directly from Bito’s official GitHub repository.

    This file includes all supported configuration fields with example values to help you get started quickly.

    1. Go to the Bito GitHub repository.

    2. Open the .bito.yaml file.

    3. Click the Download raw file button to download it.

    From Bito Cloud UI:

    You can also download the sample .bito.yaml configuration file from the Bito Cloud UI.

    • Go to Repositories dashboard.

    • Click the Download settings file button given in the Agent panel.

    Note: Web browsers such as Google Chrome do not allow downloading files whose names begin with a dot (.). As a result, when you download the sample settings file, it will be saved with a different name (for example, agent.yaml or bito.yaml). To use it correctly, rename the file to .bito.yaml before adding it to your repository.

    Note: By default, files that start with a dot . are hidden in most file explorers.

    To view hidden files:

    • Windows: In File Explorer, go to the top menu bar, click View, then enable Hidden items.

    • macOS: Press Command + Shift + . in Finder.

    • Linux: Run ls -a in your terminal.

    Note: On macOS, the Finder app may not allow naming a file starting with a dot (e.g., .bito.yaml). In that case, open Terminal and use the following command to rename the file (replace filename.yaml with your actual file name):

    mv filename.yaml .bito.yaml

    Rules and limits

    • The .bito.yaml file is read from the source branch of the pull request.

    • If a repo defines custom guidelines, agent-level guidelines are ignored for that repository.

    • If any property in the .bito.yaml file contains an invalid value, the entire configuration file will be rejected and default Agent Settings will be used instead.

    • If a property is missing in the .bito.yaml file, the corresponding value from the global Agent Settings will be used instead.

    AI Code Review Agent

    Video tutorial

    coming soon...

    Prerequisites

    Before proceeding, ensure you've completed all necessary prerequisites.

    1. Create a GitHub Personal Access Token (classic):

    For GitHub pull request code reviews, ensure you have a CLASSIC personal access token with repo scope. We do not support fine-grained tokens currently.

    View Guide

    GitHub Personal Access Token (classic)

    2. Authorizing a GitHub Personal Access Token for use with SAML single sign-on:

    If your GitHub organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token (classic) through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

    For detailed instructions, please refer to the GitHub documentation.

    Installation and configuration steps

    Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

    Step 1: Log in to Bito

    Log in to Bito Cloud and select a workspace to get started.

    Step 2: Open the Code Review Agents setup

    Click Repositories under the CODE REVIEW section in the sidebar.

    Step 3: Select your Git provider

    Bito supports integration with the following Git providers:

    • GitHub

    • GitHub (Self-Managed)

    • GitLab

    • GitLab (Self-Managed)

    • Bitbucket

    • Bitbucket (Self-Managed)

    Since we are setting up the Agent for self-managed GitHub Enterprise server, select GitHub (Self-Managed) to proceed.

    Supported versions:

    • GitHub Enterprise Server: 3.0 and above

    Step 4: Register & install the Bito App for GitHub

    To enable pull request reviews, you need to register and install Bito's AI Code Review Agent app on your self-managed GitHub Enterprise server.

    If your network blocks external services from interacting with the GitHub server, whitelist all of Bito's gateway IP addresses in your firewall to ensure Bito can access your self-hosted repositories. The Agent response can come from any of these IPs.

    • List of IP addresses to whitelist:

      • 18.188.201.104

      • 3.23.173.30

      • 18.216.64.170

    Enter the details for the following input fields:

    • Hosted GitHub URL: This is the domain portion of the URL where your GitHub Enterprise Server is hosted (e.g., https://yourcompany.github.com). Please check with your GitHub administrator for the correct URL.

    • Personal Access Token: Generate a Personal Access Token (classic) with “repo” scope in your GitHub (Self-Managed) account and enter it into the Personal Access Token input field. We do not support fine-grained tokens currently. For guidance, refer to the instructions in the Prerequisites section.

    Click Validate to ensure the login credentials are working correctly. If the credentials are successfully validated, click the Install Bito App for GitHub button. This will redirect you to your GitHub (Self-Managed) server.

    Before proceeding, you’ll be asked to enter your GitHub App name — this is the name that will appear in your GitHub Apps list and during installations. Choose a clear, recognizable name (for example, “Bito Code Reviewer”).

    Now select where you want to install the app:

    • Choose All repositories to enable Bito for every repository in your account.

    • Or, select Only select repositories and pick specific repositories using the dropdown menu.

    Bito app uses these permissions:

    • Read access to metadata

    • Read and write access to code, issues, and pull requests

    • Read access to organization members

    Click Install & Authorize to proceed. Once completed, you will be redirected to Bito.

    Step 5: Enable AI Code Review Agent on repositories

    After connecting Bito to your self-managed GitHub Enterprise server, you'll see a list of repositories that Bito has access to.

    Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

    To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

    Learn more

    Step 6: Automated and manual pull request reviews

    Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

    1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

    2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

    The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

    Note: To enhance efficiency, the automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

    To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

    The Include Source/Target Branches filter applies only to automatically triggered reviews. Users can still trigger reviews manually via the /review command.

    The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

    Step 7: Specialized commands for code reviews

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.

    Step 8: Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Step 9: Configure Agent settings

    Agent settings let you control how reviews are performed, ensuring feedback is tailored to your team’s needs. By adjusting the options, you can:

    • Make reviews more focused and actionable.

    • Apply your own coding standards.

    • Reduce noise by excluding irrelevant files or branches.

    • Add extra checks to improve code quality and security.

    Learn more

    Screenshots

    Screenshot # 1

    AI-generated pull request (PR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a pull request.

    Changelist in AI Code Review Agent's feedback.

    Screenshot # 3

    AI code review feedback posted as comments on the pull request.

    AI Code Review Agent
    available commands
    Pricing
    Get a 14-day FREE trial of Bito's AI Code Review Agent.
    Follow the AI Architect installation instructions. Upon successful setup, you will receive a Bito MCP URL and Bito MCP Access Token that you need to enter in your coding agent.
  • Download BitoAIArchitectGuidelines.md file. You will need to copy/paste the content from this file later when configuring AI Architect.

    • Note: This file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server. The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

  • Set up AI Architect

    Claude Code has the same setup process across all platforms (Windows, macOS, Linux, WSL) using the command line.

    Claude Code uses CLI-based configuration, NOT manual JSON editing.

    Step 1: Install Claude Code

    If you haven't already:

    Verify installation:
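
    The commands for the two prompts above are not included in this export; Claude Code is typically installed via npm (an assumption; check Anthropic's installation docs for your platform):

    ```shell
    npm install -g @anthropic-ai/claude-code

    # Verify the installation
    claude --version
    ```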

    Step 2: Add Bito AI Architect MCP server

    Use the claude mcp add command with the correct parameter order:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token

    Important: The server name and URL must come BEFORE the --header option.

    Scope options:

    • --scope user: Available in all your projects (recommended)

    • --scope project: Only in current project (stored in .mcp.json)
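
    The claude mcp add invocation itself is not shown in this export. For a remote HTTP server with an auth header, it likely takes the following shape; the flag names are from the current Claude Code CLI and may change, so verify with claude mcp add --help:

    ```shell
    claude mcp add --transport http --scope user BitoAIArchitect "<Your-Bito-MCP-URL>" \
      --header "Authorization: Bearer <Your-Bito-MCP-Access-Token>"
    ```

    Note that, per the warning above, the server name and URL come before the --header option.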

    Step 3: Verify installation

    List your MCP servers:

    You should see "BitoAIArchitect" in the list.

    Test the server:
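
    The verification commands referenced above are not shown in this export; with the standard Claude Code CLI they are as follows (claude mcp get is assumed to be available in your version):

    ```shell
    # List configured MCP servers (should include BitoAIArchitect)
    claude mcp list

    # Show connection details for the Bito server
    claude mcp get BitoAIArchitect
    ```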

    Step 4: Add guidelines (optional but highly recommended)

    The BitoAIArchitectGuidelines.md file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server.

    The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    You can either create:

    Step 5: Start using Claude Code

    In your project directory, run:

    Now, in the chat you can ask questions about your indexed repositories. The AI Architect will help Claude Code provide accurate answers based on your codebase.

    Try asking something like:

    Windows-specific notes

    Windows (Native - Command Prompt/PowerShell):

    • MCP servers using npx require the cmd /c wrapper:

    Windows (WSL):

    • Configuration is stored in Linux file system

    • No need for cmd /c wrapper

    • Use standard Linux paths (~/.claude/)
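
    As an illustration of the cmd /c wrapper for npx-based (stdio) servers on native Windows (this does not apply to the Bito AI Architect server, which connects over HTTP; the server and package names below are placeholders):

    ```shell
    claude mcp add my-stdio-server -- cmd /c npx -y @example/mcp-server
    ```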

    Configuration file locations

    Platform
    Main config
    Settings
    Global guidelines

    Windows

    %USERPROFILE%\.claude\claude.json

    %USERPROFILE%\.claude\settings.json

    %USERPROFILE%\.claude\CLAUDE.md

    macOS

    ~/.claude/claude.json

    ~/.claude/settings.json

    ~/.claude/CLAUDE.md

    Linux

    ~/.claude/claude.json

    ~/.claude/settings.json

    ~/.claude/CLAUDE.md

    IMPORTANT:

    • ✅ These files are managed automatically by claude mcp commands

    • ❌ Do NOT manually create ~/.claude/mcp.json (this file doesn't exist)

    • ❌ Do NOT manually edit ~/.claude/claude.json (use CLI commands instead)

    Common Claude Code MCP commands

    Troubleshooting Claude Code

    Server not appearing:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    Connection issues:

    Note:

    • Replace <Your-Bito-MCP-URL> with the Bito MCP URL you received after completing the AI Architect setup.

    • Replace <Your-Bito-MCP-Access-Token> with the Bito MCP Access Token you received after completing the AI Architect setup.

    Permission issues (macOS/Linux):

    AI Architect

    Key features

    Explore the powerful capabilities of the AI Code Review Agent.

    Get a 14-day FREE trial of Bito's AI Code Review Agent.

    Features overview

    A quick look at powerful features of Bito's AI Code Review Agent—click to jump to details.

    1. AI that understands your code


    AI that understands your code

    The AI Code Review Agent understands code changes in pull requests. It analyzes relevant context from your entire repository, resulting in more accurate and helpful code reviews.

    To comprehend your code and its dependencies, it uses Symbol Indexing, Abstract Syntax Trees (AST), and Embeddings.

    One-click setup for GitHub, GitLab, and Bitbucket

    Bito Cloud offers a one-click solution for using the AI Code Review Agent, eliminating the need for any downloads on your machine.

    Bito supports integration with the following Git providers:

    Automated and manually-triggered AI code reviews

    By default, the AI Code Review Agent automatically reviews all new pull requests and provides detailed feedback.

    To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.

    Pull request summary

    Get a concise overview of your pull request (PR) directly in the description section, making it easier to understand the code changes at a glance. This includes a summary of the PR, the type of code changes, whether unit tests were added, and the estimated effort required for review.

    The agent evaluates the complexity and quality of the changes to estimate the effort required to review them, helping reviewers plan their schedules better.

    Changelist

    A tabular view that displays key changes in a pull request, making it easy to spot important updates at a glance without reviewing every detail. Changelist categorizes modifications and highlights impacted files, giving you a quick, comprehensive summary of what has changed.

    One-click to accept suggestions

    The AI-generated code review feedback is posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

    You can accept the suggestions with a single click, and the changes will be added as a new commit to the pull request.

    Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    Real-time collaboration with the AI Code Review Agent accelerates your development cycle. By delivering immediate, actionable insights, it eliminates the delays typically experienced with human reviews. Developers can engage directly with the Agent to clarify recommendations on the spot, ensuring that any issues are addressed swiftly and accurately.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Incremental code reviews

    AI Code Review Agent automatically reviews only the recent changes each time you push new commits to a pull request. This saves time and reduces costs by avoiding unnecessary re-reviews of all files.

    You can enable or disable incremental reviews from the Agent settings.

    Code review analytics

    Get in-depth insights into your org’s code reviews with a user-friendly dashboard. Track key metrics such as pull requests reviewed, issues found, lines of code reviewed, and understand individual contributions.

    Custom code review rules and guidelines

    The AI Code Review Agent offers a flexible solution for teams looking to enforce custom code review rules, standards, and guidelines tailored to their unique development practices. Whether your team follows specific coding conventions or industry best practices, you can customize the Agent to suit your needs.

    We support two ways to customize AI Code Review Agent’s suggestions:

    1. , and the AI Code Review Agent automatically adapts by creating code review rules to prevent similar suggestions in the future.

    2. . Define rules through the dashboard in Bito Cloud and apply them to agent instances in your workspace.

    Multiple specialized engineers for targeted code analysis

    The AI Code Review Agent acts as a team of specialized engineers, each analyzing different aspects of your pull request. You'll get specific advice for improving your code, right down to the exact line in each file.

    The areas of analysis include:

    • Security

    • Performance

    • Scalability

    • Optimization

    This multifaceted analysis results in more detailed and accurate code reviews, saving you time and improving code quality.

    Integrated feedback from dev tools you use

    Elevate your code reviews by harnessing the power of the development tools you already trust. Bito's AI Code Review Agent seamlessly integrates feedback from essential tools including:

    • Static code analysis

    • Open source security vulnerabilities check

    • Linter integrations

    • Secrets scanning (e.g., passwords, API keys, sensitive information)

    Static code analysis

    Using tools like Facebook’s open-source fbinfer (available out of the box), the Agent dives deep into your code—tailored to each language—and suggests actionable fixes. You can also configure additional tools you use for a more customized analysis experience.

    Open source security vulnerabilities check

    The AI Code Review Agent checks in real time for the latest high-severity security vulnerabilities in your code, using (available out of the box). Additional tools such as , or can also be configured.

    Linter integrations

    Our integrated linter support reviews your code for consistency and adherence to best practices. By catching common errors early, it ensures your code stays clean, maintainable, and aligned with modern development standards.

    Secrets scanning

    Safeguard your sensitive data effortlessly. With built-in scanning capabilities, the Agent checks your code for exposed passwords, API keys, and other confidential information—helping to secure your codebase throughout the development lifecycle.

    Jira integration

    Seamlessly connect Bito with Jira to automatically validate pull request code changes against linked Jira tickets. This ensures your implementations meet specified requirements through real-time, structured validation feedback directly in your pull requests.

    Support for Jira Cloud and Jira Data Center setups enables flexible integrations, while multiple ticket-linking methods ensure accurate requirement tracking.

    Boost your team's code quality, collaboration, and traceability with automated Jira ticket validation.

    Supports all major programming languages

    No matter if you're coding in Python, JavaScript, Java, C++, or beyond, our AI Code Review Agent has you covered. It understands the unique syntax and best practices of every popular language, delivering tailored insights that help you write cleaner, more efficient code—every time.

    Enterprise-grade security

    Bito and third-party LLM providers never store or use your code, prompts, or any other data for model training or any other purpose.

    Bito is SOC 2 Type II compliant. This certification reinforces our commitment to safeguarding user data by adhering to strict security, availability, and confidentiality standards. SOC 2 Type II compliance is an independent, rigorous audit that evaluates how well an organization implements and follows these security practices over time.

    Create or customize an Agent instance

    Customize the AI Code Review Agent to match your workflow needs.

    Connecting your Bito workspace to GitHub, GitLab, or Bitbucket provides immediate access to the AI Code Review Agent. To get you started quickly, Bito offers a Default Agent instance—pre-configured and ready to deliver AI-powered code reviews for pull requests and code changes within supported IDEs such as VS Code and JetBrains.

    While the Default Agent is ready for use right away, Bito also gives you the option to create new Agent instances or customize existing ones to suit your specific requirements. This flexibility ensures that the Agent can adapt to a range of workflows and project needs.

    For example, you might configure one Agent to disable automatic code reviews for certain repositories, another to exclude specific Git branches from review, and yet another to filter out particular files or folders.

    This guide will walk you through how to create or customize an Agent instance, unlocking its full potential to streamline your code reviews.

    Creating or customizing AI Code Review Agents

    Once Bito is connected to your GitHub/GitLab/Bitbucket account, you can easily create a new Agent or customize an existing one to suit your workflow.

    1. To create a new Agent, navigate to the dashboard and click the New Agent button to open the Agent configuration form.

    2. If you’d like to customize an existing agent, simply go to the same dashboard and click the Settings button next to the Agent instance you wish to modify.

    Once you have selected an Agent to customize, you can modify its settings in the following areas:

    1. General settings

    Agent name

    Assign a unique alphanumeric name to your Agent. This name acts as an identifier and allows you to invoke the Agent in supported clients using the @<agent_name> command.
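    For example, if an Agent were named backend-reviewer (a hypothetical name used only for illustration), it could be invoked in a supported client by typing:

```
@backend-reviewer
```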

    2. Customization options

    Bito provides six tabs for in-depth Agent customization.

    These include:

    1. Review

    2. Custom Guidelines

    3. Filters

    4. Tools

    5. Chat

    6. Functional Validation

    Let's have a look at each tab in detail.

    a. Review

    In this tab, you can configure how and when the Agent performs reviews:

    • Review language: Select the output language for code review feedback. Bito supports over 20 languages, including English, Hindi, Chinese, and Spanish. The AI code review feedback will be posted on the pull requests in the selected language.

    • Review feedback mode: Choose between Essential and Comprehensive review modes and tailor review request settings to fit your team's unique workflow requirements.

      • In Essential mode, only critical issues are posted as inline comments, and other issues appear in the main review summary under "Additional issues".

    b. Custom Guidelines

    Create, apply, and manage custom code review guidelines to align the AI agent’s reviews with your team’s specific coding standards.

    The agent will follow your guidelines when reviewing pull requests.

    c. Filters

    Use filters to customize which files, folders, and Git branches are reviewed when the Agent triggers automatically on pull requests:

    • Exclude Files and Folders: A list of files/folders that the AI Code Review Agent will not review if they are present in the diff. You can specify the files/folders to exclude from the review by name or glob/regex pattern. The Agent will automatically skip any files or folders that match the exclusion list. This filter applies to both manual reviews initiated through the /review command and automated reviews.

    • Include Source/Target Branches: This filter defines which pull requests trigger automated reviews based on their source or target branch, allowing you to focus on critical code and avoid unnecessary reviews or AI usage. By default, pull requests merging into the repository’s default branch are subject to review. To review additional branches, you can use the . Bito will review pull requests when the source or target branch matches the list. This filter applies only to automatically triggered reviews; users can still trigger reviews manually via the /review command.

    For more information and examples, see .
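    As an illustrative sketch, filters of this kind can be expressed with glob patterns, as in Bito's repository-level configuration (the **/*.min.js pattern below is an assumption added for illustration, not a documented default):

```yaml
include_source_branches: feature/**,bugfix/**   # review PRs whose source branch matches
include_target_branches: main,develop           # or whose target branch matches
exclude_files: docs/**,README.md,**/*.min.js    # skip docs and minified bundles
```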

    d. Tools

    Enhance the Agent’s reviews by enabling additional tools for static analysis, security checks, and secret detection:

    • Secret Scanner: Enable this tool to detect and report secrets left in code changes.

    e. Chat

    You can chat with the to ask follow-up questions, request alternative solutions, or get clarification on review comments. From this tab, you can manage how the agent responds to these interactions.

    • Auto reply: Enable Bito to automatically reply to user questions posted as comments on its code review suggestions—no need to tag @bitoagent or @askbito.

    f. Functional Validation

    Automatically validate pull requests against Jira tickets. Ticket references are detected in the PR description, title, or branch name.

    If you are editing an existing agent, click Save to apply the changes.

    3. Select repositories for code review

    1. If you are creating a new agent instance, click Select repositories after configuration to choose the Git repositories the agent will review.

    2. To enable code review for a specific repository, simply select its corresponding checkbox. You can also enable repositories later, after the Agent has been created. Once done, click Save and continue to save the new Agent configuration.

    3. When you save the configuration, your new Agent instance will be added and available on the page.

    Install/run via GitHub Actions

    Seamlessly integrate automated code reviews into your GitHub Actions workflows.

    Prerequisites

    • Bito Access Key: Obtain your Bito Access Key. View Guide

    • GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.


    Installation and Configuration Steps:

    1. Enable GitHub Actions:

      • Login to your account.

      • Open your repository and click on the "Settings" tab.

    Check the above section to learn more about creating the access tokens needed to configure the Agent.

    • Configure the following under the "Variables" tab:

      For each variable, click the New repository variable button, then enter the exact name and value of the variable in the form. Finally, click Add variable to save it.

      • Name: STATIC_ANALYSIS_TOOL

    1. Create the Workflow Directory:

      • In your repository, create a new directory path: .github/workflows.
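    In a local clone of the repository, the directory can be created from the command line; a minimal sketch:

```shell
# Create the GitHub Actions workflow directory in a local clone
mkdir -p .github/workflows

# Confirm it exists before adding the workflow file
ls -d .github/workflows
```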

    Customizations for self-hosted GitHub

    1. Create a self-hosted Runner using Linux image and x64 architecture as described in the .

    2. Create a copy of the main branch of Bito's repository in your self-hosted GitHub organization (e.g., "myorg") under the required name (e.g., "gitbito-bitocodereview"). In this example, the repository will be accessible as "myorg/gitbito-bitocodereview".

    3. Update test_cra.yml as below:

    Using the AI Code Review Agent

    After configuring the GitHub Actions, you can invoke the AI Code Review Agent in the following ways:

    Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please .

    1. Automated Code Review: The agent will automatically review new pull requests as soon as they are created and post the review feedback as a comment within your PR.

    2. Manually Trigger Code Review: To start the process, simply type /review in the comment box on the pull request and submit it. This command prompts the agent to review the pull request and post its feedback directly in the PR as a comment.

      Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

    Screenshots

    Screenshot # 1

    AI-generated pull request (PR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a pull request.

    Screenshot # 3

    AI code review feedback posted as comments on the pull request.

    How to install Bito extension on VS Code

    Guide for GitLab

    Integrate the AI Code Review Agent into your GitLab workflow.

    Speed up code reviews by configuring the with your GitLab repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using .

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our page.

    Install AI Architect (self-hosted)

    Deploy AI Architect in your own infrastructure for complete data control and enhanced security

    This guide walks you through installing as a self-hosted service in your own infrastructure. Self-hosting gives you complete control over where your code knowledge graph resides and how AI Architect accesses your repositories.

    Why choose self-hosted deployment? Organizations with strict data governance requirements, air-gapped environments, or specific compliance needs benefit from running AI Architect within their own infrastructure. Your codebase analysis and knowledge graph stay entirely within your control, while still providing the same powerful context-aware capabilities to your AI coding tools.

    What you'll accomplish: By the end of this guide, you'll have AI Architect running on your infrastructure, connected to your Git repositories, and ready to integrate with AI coding tools like Claude Code, Cursor, Windsurf, and GitHub Copilot through the Model Context Protocol (MCP).

    mkdir -p ~/.codeium/windsurf
    # Verify file location
    ls -la ~/.codeium/windsurf/mcp_config.json
    
    # Check permissions
    chmod 755 ~/.codeium/windsurf
    chmod 644 ~/.codeium/windsurf/mcp_config.json
    
    # Verify JSON syntax
    cat ~/.codeium/windsurf/mcp_config.json | python -m json.tool
    curl -s -X POST \
      -H "Authorization: Bearer <Your-Bito-MCP-Access-Token>" \
      -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","method":"initialize","params":{},"id":1}' \
      <Your-Bito-MCP-URL>
    
    # Should return HTTP 200 with JSON response for valid credentials
    # HTTP 401: Invalid Bito MCP Access Token
    # HTTP 404: Invalid Bito MCP URL
    nano ~/.codeium/windsurf/mcp_config.json
    {
      "mcpServers": {
        "BitoAIArchitect": {
          "serverUrl": "<Your-Bito-MCP-URL>",
          "headers": {
            "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
          }
        }
      }
    }
    suggestion_mode: comprehensive       # 'essential' = only major issues, 'comprehensive' = everything
    post_description: true                # Include summary description in PR comment
    post_changelist: true                 # Include walkthrough of changes
    
    include_source_branches: feature/**,bugfix/**
    include_target_branches: main,develop
    exclude_files: docs/**,README.md
    
    exclude_draft_pr: true            # Don't review draft PRs
    secret_scanner_feedback: true      # Enable secret scanning feedback
    linters_feedback: true             # Enable linting / static analysis
    
    custom_guidelines:
      general:
        - name: "Global Checks"
          path: "./guidelines/global_checks.txt"
        - name: "Security Rules"
          path: "./guidelines/security.txt"
        - name: "Legacy Style Guide"
          path: "./guidelines/legacy.txt"
        - name: "Performance Checks"
          path: "./guidelines/perf.txt"
        - name: "Code Style"
          path: "./guidelines/style.txt"
      per_language:
        python:
          name: "Python Best Practices"
          path: "./guidelines/py.txt"
        javascript:
          name: "JS Style Guide"
          path: "./guidelines/js.txt"
        typescript:
          name: "TS Checks"
          path: "./guidelines/ts.txt"
        java:
          name: "Java Coding Standards"
          path: "./guidelines/java.txt"
    
    npm install -g @anthropic-ai/claude-code
    claude --version
    # For stdio servers on Windows
    claude mcp add --transport stdio my-server -- cmd /c npx -y @some/package
    # Add HTTP server with Bearer token (correct parameter order)
    claude mcp add --transport http --scope user \
      <name> <url> \
      --header "Authorization: Bearer <token>"
    
    # Add server with environment variables
    claude mcp add <name> -e API_KEY="value" -- npx @server/package
    
    # Add server with JSON config (for complex setups)
    claude mcp add-json <name> '{"type":"http","url":"...","headers":{...}}'
    
    # List all MCP servers
    claude mcp list
    
    # Get server details
    claude mcp get <name>
    
    # Remove MCP server
    claude mcp remove <name>
    
    # View server status (inside Claude Code session)
    /mcp
    
    # Reset project-scoped server approval choices
    claude mcp reset-project-choices
    
    # Verify it was added
    claude mcp list
    
    # Check for errors
    claude --verbose
    
    # Try removing and re-adding
    claude mcp remove BitoAIArchitect
    claude mcp add --transport http --scope user \
      BitoAIArchitect <Your-Bito-MCP-URL> \
      --header "Authorization: Bearer <Your-Bito-MCP-Access-Token>"
    # Test the endpoint with proper MCP protocol
    curl -s -X POST \
      -H "Authorization: Bearer <Your-Bito-MCP-Access-Token>" \
      -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","method":"initialize","params":{},"id":1}' \
      <Your-Bito-MCP-URL>
    
    # Should return HTTP 200 with JSON response for valid credentials
    # HTTP 401: Invalid Bito MCP Access Token
    # HTTP 404: Invalid Bito MCP URL
    chmod 755 ~/.claude
    chmod 644 ~/.claude/claude.json
    chmod 644 ~/.claude/settings.json
  • Code structure and formatting (e.g., tabs, spaces)

  • Basic coding standards including variable names (e.g., ijk)


    Summary of Pull Request in the description section.
    Changelist in AI Code Review Agent's feedback.
    One-click to accept AI code review suggestions.
    Code Review Analytics dashboard.
    Static Code Analysis feedback highlighting suggestions and fixes.
    Showing high-severity security vulnerabilities report.


  • In Comprehensive mode, Bito also includes minor suggestions and potential nitpicks as inline comments.

  • Automatic review: Toggle to enable or disable automatic reviews when a pull request is created and ready for review.

  • Automatic incremental review: Toggle to enable or disable reviews for new commits added to a pull request. Only changes since the last review are assessed.

    • Batch time: Specifies how long the AI Code Review Agent waits before running an incremental review after new commits are pushed. The value can range from 0m (review immediately) to 24h (review after 24 hours). Lower values result in more frequent incremental reviews.

      Examples:

      • 10s → waits 10 seconds before running the review

      • 12m → waits 12 minutes before running the review

      • 1h10m → waits 1 hour and 10 minutes before running the review

  • Request changes comments: Enable this option to get Bito feedback as "Request changes" review comments. Depending on your organization's Git settings, you may need to resolve all comments before merging.

  • Draft pull requests: By default, the Agent excludes draft pull requests from automated reviews. Disable this toggle to include drafts.

  • Automatic summary: Toggle to enable automatic generation of AI summaries for changes, which are appended to the pull request description.

  • Change Walkthrough: Enable this option to generate a table of changes and associated files, posted as a comment on the pull request.

  • Allow config file settings: Enabling this setting will allow Agent Settings to be overridden at a repository level by placing a .bito.yaml file in the root folder of that repository. Learn more

  • Auto-apply agent rules: Automatically detect and apply best-practice guidelines from agent configuration files like CLAUDE.md, AGENTS.md, .cursor/rules, .windsurf/rules, or GEMINI.md. When enabled, Bito uses these files to guide its code review. Learn more

  • Generate interaction diagrams: When enabled, Bito will generate interaction diagrams during code reviews to visualize the architecture and impacted components in the submitted changes. Currently, it is supported for GitHub and GitLab.

  • Exclude Labels: Specify pull request (PR) labels to exclude from review by name or glob/regex pattern. The agent will skip any PRs tagged with these labels in GitHub or GitLab.


    secret_scanner_feedback

    Enables or disables secret scanning feedback. Bito detects and reports secrets left in code changes. Valid values: true or false

    linters_feedback

    Run Linting tools during code reviews. Valid values: true or false

    custom_guidelines

    Adds repository-defined coding guidelines, supporting both general and language-specific configurations. Provide the name and path of the review guidelines that you want Bito to follow. These files must exist in your source branch at review time. Up to 3 general guidelines and 1 language-specific guideline per language are accepted. Example:

    dependency_check.enabled

    Run Dependency Check analysis during code reviews.

    Valid values: true or false

    repo_level_guidelines_enabled

    When enabled, Bito will automatically detect and use best-practice guidelines from agent configuration files such as CLAUDE.md, AGENTS.md, GEMINI.md, .cursor/rules, or .windsurf/rules during code reviews. Valid values: true or false

    sequence_diagram_enabled

    When enabled, Bito will generate interaction diagrams during code reviews to visualize the architecture and impacted components in the submitted changes. Currently, it is supported for GitHub and GitLab. Valid values: true or false

    static_analysis.fb_infer.enabled

    Run Static Analysis tools during code reviews for providing better feedback. Valid values: true or false

    labels_excluded

    Comma-separated list of labels that, if present on a pull request or merge request, cause the automatic review to be skipped. Matching is case-sensitive by default: if the repo-level .bito.yaml lists "Bug" but the tagged label is "bug", it won't match. Use regex for case-insensitive matching, e.g., (?i)^bug$ or (?i)bug. Example: "wip, do-not-review, chore, size/*"

    post_as_request_changes

    Enable this option to get Bito feedback as "Request changes" review comments. Depending on your Git provider settings, you may need to resolve all comments before merging. For GitHub, this will automatically enable auto-approve for resolved PRs. Valid values: true or false

    functional_validation_enabled

    Enable this option to automatically validate pull requests against Jira ticket referenced in PR description, title, or branch name. Jira Integration must be completed from Bito dashboard for this to work. Valid values: true or false
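    Putting several of the settings above together, a minimal .bito.yaml might look like the following. This is an illustrative sketch; in particular, the nesting of the dotted keys dependency_check.enabled and static_analysis.fb_infer.enabled is assumed:

```yaml
# Illustrative repository-level .bito.yaml (placed in the repo root)
suggestion_mode: essential
exclude_draft_pr: true
secret_scanner_feedback: true
linters_feedback: true
labels_excluded: "wip, do-not-review"
post_as_request_changes: false
functional_validation_enabled: false
dependency_check:
  enabled: true        # assumed nesting of dependency_check.enabled
static_analysis:
  fb_infer:
    enabled: false     # assumed nesting of static_analysis.fb_infer.enabled
```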

  • --scope local: Only in current directory (default)

  • Global guidelines - Apply across all your projects. Best for teams or developers who want consistent standards everywhere.

  • Project-specific guidelines - Apply to a single project only.

  • Choose one of the following based on your preference:

    Option A: Global guidelines

    Create .claude directory if it doesn't exist:

    Create or edit CLAUDE.md:

    Copy the contents of your BitoAIArchitectGuidelines.md file into this file, then save.

    Option B: Project-specific guidelines

    Run this command in your project directory:

    Or run these commands:

    Copy the contents of your BitoAIArchitectGuidelines.md file into this file, then save.

    ~/.claude/CLAUDE.md

    WSL

    ~/.claude/claude.json

    ~/.claude/settings.json

    ~/.claude/CLAUDE.md

    {
      "mcpServers": {
        "BitoAIArchitect": {
          "serverUrl": "<Your-Bito-MCP-URL>",
          "headers": {
            "Authorization": "Bearer <Your-Bito-MCP-Access-Token>"
          }
        }
      }
    }
    {
      "mcpServers": {
        "existing-server": {
          ...
        },
        "BitoAIArchitect": {
          "serverUrl": "https://mcp.bito.ai/<Your-Bito-Workspace-ID>/mcp",
          "headers": {
            "Authorization": "Bearer <your-access-token>"
          }
        }
      }
    }
    mkdir %USERPROFILE%\.codeium\windsurf\memories
    copy BitoAIArchitectGuidelines.md %USERPROFILE%\.codeium\windsurf\memories\global_rules.md
    mkdir .windsurf\rules
    copy BitoAIArchitectGuidelines.md .windsurf\rules\bitoai-architect.md
    mkdir -p ~/.codeium/windsurf/memories
    cp BitoAIArchitectGuidelines.md ~/.codeium/windsurf/memories/global_rules.md
    mkdir -p .windsurf/rules
    cp BitoAIArchitectGuidelines.md .windsurf/rules/bitoai-architect.md
    custom_guidelines:
      general:
        - name: "Global Checks"
          path: "./guidelines/global_checks.txt"
        - name: "Security Rules"
          path: "./guidelines/security.txt"
        - name: "Legacy Style Guide"
          path: "./guidelines/legacy.txt"
        - name: "Performance Checks"
          path: "./guidelines/perf.txt"
        - name: "Code Style"
          path: "./guidelines/style.txt"
      per_language:
        python:
          name: "Python Best Practices"
          path: "./guidelines/py.txt"
        javascript:
          name: "JS Style Guide"
          path: "./guidelines/js.txt"
        typescript:
          name: "TS Checks"
          path: "./guidelines/ts.txt"
        java:
          name: "Java Coding Standards"
          path: "./guidelines/java.txt"
    claude mcp add \
      --transport http \
      --scope user \
      BitoAIArchitect \
      <Your-Bito-MCP-URL> \
      --header "Authorization: Bearer <Your-Bito-MCP-Access-Token>"
    claude mcp list
    claude mcp get BitoAIArchitect
    claude
    What repositories are available in my organization?
    mkdir -p ~/.claude
    nano ~/.claude/CLAUDE.md
    nano CLAUDE.md
    mkdir -p .claude
    nano .claude/CLAUDE.md

    Select "Actions" from the left sidebar, then click on "General".

  • Under "Actions permissions", choose "Allow all actions and reusable workflows" and click "Save".

  • Set Up Environment Variables:

    • Still in the "Settings" tab, navigate to "Secrets and variables" > "Actions" from the left sidebar.

    • Configure the following under the "Secrets" tab:

      For each secret, click the New repository secret button, then enter the exact name and value of the secret in the form. Finally, click Add secret to save it.

      • Name: BITO_ACCESS_KEY

        • Secret: Enter your Bito Access Key here. Refer to the .

      • Name: GIT_ACCESS_TOKEN

  • Value:
    Enter the following text string as value:
    fb_infer,astral_ruff,mypy
  • Name: GIT_DOMAIN

    • Value: Enter the domain name of your Enterprise or self-hosted GitHub deployment, or skip this if you are not using one.

    • Example of domain name: https://your.company.git.com

  • Name: EXCLUDE_BRANCHES

    • Value: Specify branches to exclude from the review by name or valid glob/regex patterns. The agent will skip the pull request review if the source or target branch matches the exclusion list.

    • Note: For more information, see Source or Target branch filter.

  • Name: EXCLUDE_FILES

    • Value: Specify files/folders to exclude from the review by name or glob/regex pattern. The agent will skip files/folders that match the exclusion list.

    • Note: For more information, see Files and folders filter.

  • Name: EXCLUDE_DRAFT_PR

    • Value: Enter True to disable automated review for draft pull requests, or False to enable it.

    • Note: For more information, see Draft pull requests filter.

  • Add the Workflow File:
    • Download this test_cra.yml file from AI Code Review Agent's GitHub repo.

    • In your repository, upload this test_cra.yml file inside the .github/workflows directory either in your source branch of each PR or in a branch (e.g. main) from which all the source branches for PRs will be created.

    • Commit your changes.

    Change line from:
    • runs-on: ubuntu-latest

  • to:

    • runs-on: <label of the self-hosted GitHub runner>, e.g., self-hosted, linux, etc.

  • Update test_cra.yml as below:

    • Replace all lines having below text:

      • uses: gitbito/codereviewagent@main

    • with:

      • uses: myorg/gitbito-bitocodereview@main

  • Commit and push your changes in test_cra.yml.
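    Taken together, the two edits amount to changes like the following in test_cra.yml. This is a sketch only: the job name and surrounding structure are assumptions; only the runs-on value and the uses reference come from the steps above.

```yaml
jobs:
  code-review:
    runs-on: [self-hosted, linux]                 # was: ubuntu-latest
    steps:
      - uses: myorg/gitbito-bitocodereview@main   # was: gitbito/codereviewagent@main
```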

  • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

  • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

  • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

  • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

  • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

  • By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.
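    For reference, each of these is typed as a comment directly on the pull request, for example:

```
/review
/review security
/review codeoptimize
/review #inline_comment=False
```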


    Video tutorial

    Prerequisites

    Before proceeding, ensure you've completed all necessary prerequisites.

    1. Create a GitLab Personal Access Token:

    For GitLab merge request code reviews, a token with api scope is required. Make sure that the token is created by a GitLab user who has the Maintainer access role.

    View Guide

    Important: Bito posts comments using the GitLab user account linked to the Personal Access Token used during setup. To display "Bito" instead of your name, create a separate user account (e.g., Bito Agent) and use its token for integration.

    We recommend setting the token expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.

    Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.

    GitLab Personal Access Token

    2. Authorizing a GitLab Personal Access Token for use with SAML single sign-on:

    If your GitLab organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

    For more information, please refer to the following GitLab documentation:

    • https://docs.gitlab.com/ee/user/group/saml_sso/

    • https://docs.gitlab.com/ee/integration/saml.html

    • https://docs.gitlab.com/ee/integration/saml.html#password-generation-for-users-created-through-saml

    Installation and configuration steps

    Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

    Step 1: Log in to Bito

    Log in to Bito Cloud and select a workspace to get started.

    Step 2: Open the Code Review Agents setup

    Click Repositories under the CODE REVIEW section in the sidebar.

    Step 3: Select your Git provider

    Bito supports integration with the following Git providers:

    • GitHub

    • GitHub (Self-Managed)

    • GitLab

    • GitLab (Self-Managed)

    • Bitbucket

    • Bitbucket (Self-Managed)

    Since we are setting up the Agent for GitLab, select GitLab to proceed.

    Step 4: Connect Bito to GitLab

    To enable merge request reviews, you’ll need to connect your Bito workspace to your GitLab account.

    You can either connect using OAuth (recommended) for a seamless, one-click setup or manually enter your Personal Access Token.

    To connect via OAuth, simply click the Connect with OAuth (Recommended) button. This will redirect you to the GitLab website, where you'll need to log in. Once authenticated, you'll be redirected back to Bito, confirming a successful connection.

    If you prefer not to use OAuth, you can connect manually using a Personal Access Token.

    Start by generating a Personal Access Token with api scope in your GitLab account. For guidance, refer to the instructions in the Prerequisites section.

    Once generated, click the Alternatively, use Personal or Group Access Token button.

    Now, enter the token into the Personal Access Token input field in Bito.

    Click Validate to ensure the token is functioning properly.

    If you've successfully connected via OAuth or manually validated your token, you can select your GitLab Group from the dropdown menu.

    Click Connect Bito to GitLab to proceed.

    Step 5: Enable AI Code Review Agent on repositories

    After connecting Bito to your GitLab account, you'll see a list of repositories that Bito has access to.

    Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

    To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

    Learn more

    Step 6: Automated and manual merge request reviews

    Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

    1. Automated code review: By default, the Agent automatically reviews all new merge requests and provides detailed feedback.

    2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the merge request and submit it. This action will start the code review process.

    The AI-generated code review feedback will be posted as comments directly within your merge request, making it seamless to view and address suggestions right where they matter most.

    Note: To enhance efficiency, the automated code reviews are only triggered for merge requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

    To review additional branches, you can use the Include Source/Target Branches filter. Bito will review merge requests when the source or target branch matches the list.

    The Include Source/Target Branches filter applies only to automatically triggered reviews. Users can still trigger reviews manually via the /review command.

    The AI Code Review Agent automatically reviews code changes up to 5000 lines when a merge request is created. For larger changes, you can use the /review command.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the merge request.

    Step 7: Specialized commands for code reviews

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.

    Step 8: Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    To start the conversation, type your question in the comment box within the inline suggestions on your merge request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Step 9: Configure Agent settings

    Agent settings let you control how reviews are performed, ensuring feedback is tailored to your team’s needs. By adjusting the options, you can:

    • Make reviews more focused and actionable.

    • Apply your own coding standards.

    • Reduce noise by excluding irrelevant files or branches.

    • Add extra checks to improve code quality and security.

    Learn more

    Screenshots

    Screenshot # 1

    AI-generated merge request (MR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a merge request.

    Changelist in AI Code Review Agent's feedback.

    Screenshot # 3

    AI code review feedback posted as comments on the merge request.

    Deployment options

    AI Architect can be deployed in three different configurations depending on your team size, infrastructure, and security requirements:

    a. Personal use (with your LLM keys)

    Set up AI Architect on your local machine for individual development work. You'll provide your own LLM API keys for indexing, giving you complete control over the AI models used and associated costs.

    Best for: Individual developers who want codebase understanding on their personal machine.

    b. Team / shared access (with your LLM keys)

    Deploy AI Architect on a shared server within your infrastructure, allowing multiple team members to connect their AI coding tools to the same MCP server. Each team member can configure AI Architect with their preferred AI coding agent while sharing the same indexed codebase knowledge graph.

    Best for: Development teams that want to share codebase intelligence across the team while managing their own LLM costs.

    c. Enterprise deployment (requires Bito Enterprise Plan)

    Deploy AI Architect on your infrastructure (local machine or shared server) with indexing managed by Bito. Instead of providing your own LLM keys, Bito handles the repository indexing process, simplifying setup and cost management.

    Best for: Organizations that prefer managed indexing without handling individual LLM API keys and costs.

    Note: All deployment options are self-hosted on your infrastructure — your code and knowledge graph remain under your control.

    Prerequisites

    a. Required accounts and tokens

    1

    Bito API Key (aka Bito Access Key)

    You'll need a Bito account and a Bito Access Key to authenticate AI Architect. You can sign up for a Bito account at https://alpha.bito.ai and create an access key from Settings -> Advanced Settings.

    2

    Git provider

    We support the following Git providers:

    • GitHub

    3

    Git Access Token

    A personal access token from your chosen Git provider is required. You'll use this token to allow AI Architect to read and index your repositories.

    1. GitHub Personal Access Token (Classic): To use GitHub repositories with AI Architect, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.

    4

    LLM API keys

    Bito's AI Architect uses Large Language Models (LLMs) to build a knowledge graph of your codebase.

    We suggest you provide API keys for both Anthropic and Grok LLMs, as that combination provides the best coverage at the lowest indexing cost.

    Bito will use Claude Haiku and Grok Code Fast together to index your codebase. It will cost you approximately USD $0.20 - $0.40 per MB of indexable code (we do not index binaries, TARs, zips, images, etc). If you provide only an Anthropic key without Grok, your indexing costs will be significantly higher, approximately USD $1.00 - $1.50 per MB of indexable code.
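For a concrete sense of these ranges, here is the arithmetic for a hypothetical repository with 25 MB of indexable code (the repository size is made up; the per-MB rates are the ones quoted above):

```python
# Indexing cost estimate from the per-MB ranges quoted above.
def cost_range(mb, low_per_mb, high_per_mb):
    """Return (low, high) estimated cost in USD for `mb` MB of indexable code."""
    return (mb * low_per_mb, mb * high_per_mb)

repo_mb = 25  # hypothetical amount of indexable code

print(cost_range(repo_mb, 0.20, 0.40))  # Anthropic + Grok keys -> (5.0, 10.0)
print(cost_range(repo_mb, 1.00, 1.50))  # Anthropic key only    -> (25.0, 37.5)
```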

    b. System requirements

    The AI Architect supports the following operating systems:

    • macOS

    • Unix-based systems

    • Windows (via WSL2)

    1

    WSL2 is required for Windows users

    If you're running Windows, Windows Subsystem for Linux 2 (WSL2) must be installed before proceeding.

    To install WSL2:

    1. Open PowerShell or Command Prompt as Administrator.

    2. Run the following command:

       wsl --install

    3. Set up your Ubuntu username and password when prompted.

    2

    Docker Desktop (required)

    Docker Compose is required to run AI Architect.

    The easiest and recommended way to get Docker Compose is to install Docker Desktop.

    Docker Desktop includes Docker Compose along with Docker Engine and Docker CLI which are Compose prerequisites.

    Installation guide

    1

    Download AI Architect

    Download the latest version of AI Architect package from our GitHub repository.

    2

    Start Docker Desktop

    Before proceeding with the installation, ensure Docker Desktop is running on your system. If it's not already open, launch Docker Desktop and wait for it to fully start before continuing.

    3

    Extract the downloaded AI Architect package

    Open the terminal (on Windows with WSL2, launch the Ubuntu application from the Start menu).

    Navigate to the folder where the downloaded file is located. If the file is still in your Downloads folder, you can either navigate there or move the file to any other directory you prefer.

    Run the following command to extract the downloaded package.
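The extract command itself is missing from this export; for a .tar.gz package it is the conventional tar invocation (the file name below is a placeholder, as noted later in this guide):

```shell
# Extract the downloaded package (replace with the actual downloaded file name)
tar -xzf bito-cis-*.tar.gz

# Move into the extracted folder
cd bito-cis-*
```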

    4

    Run setup

    The setup script will guide you through configuring AI Architect with your Git provider and LLM credentials. The process is interactive and will prompt you for the necessary information step by step.

    To begin setup, run:

    5

    Add repositories

    Edit .bitoarch-config.yaml file to add your repositories for indexing:

    Then apply the configuration:
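The command blocks for this step are missing from this export; based on the Initial setup workflow in the Available commands reference, the edit-and-apply sequence is likely:

```shell
# Edit the repository list
vim .bitoarch-config.yaml

# Apply the configuration
bitoarch add-repos .bitoarch-config.yaml
```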

    6

    Start indexing

    Trigger workspace synchronization to index your repositories:
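Per the Available commands reference, the indexing trigger is:

```shell
# Trigger workspace repository indexing
bitoarch index-repos
```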

    Note: The indexing process takes approximately 3-10 minutes per repository; smaller repos take less time.

    7

    Check indexing status

    Run this command to check the status of your indexing:
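Per the Available commands reference:

```shell
# Check indexing status (progress and state)
bitoarch index-status
```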

    Status indicators:

    How to use AI Architect

    Configure MCP server in supported AI coding tools such as Claude Code, Cursor, Windsurf, and GitHub Copilot (VS Code).

    Select your AI coding tool from the options below and follow the step-by-step installation guide to seamlessly set up AI Architect.

    • Guide for Claude Code

    • Guide for Cursor

    • Guide for Windsurf

    • Guide for GitHub Copilot (VS Code)

    Update repository list

    Edit .bitoarch-config.yaml file to add/remove repositories.

    To apply the changes, run this command:

    Start the re-indexing process using this command:
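Per the Available commands reference, the apply and re-index commands are:

```shell
# Apply the changes from the updated YAML
bitoarch update-repos .bitoarch-config.yaml

# Start re-indexing
bitoarch index-repos
```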

    Available commands

    For complete reference of AI Architect CLI commands, refer to Available commands.

    Bito's AI Architect

    Available commands

    Quick reference to CLI commands for managing Bito's AI Architect.

    Note: After installation of AI Architect, the bitoarch command is available globally.

    Core operations

    Command
    Description
    Example

    Examples:


    Repository management

    Command
    Description
    Example

    Examples:


    Service operations

    Command
    Description
    Example

    Examples:


    Configuration

    Command
    Description
    Example

    Examples:


    MCP operations

    Command
    Description
    Example

    Examples:


    Output options

    Add these flags to any command:

    Flag
    Purpose
    Example

    Common workflows

    Initial setup

    Daily operations

    Adding new repositories

    Troubleshooting


    Getting help

    Command
    Shows

    Examples:


    Environment

    Configuration is loaded from .env-bitoarch file. Key variables:

    • BITO_API_KEY - API key for authentication

    • GIT_PROVIDER - Git provider (github, gitlab, bitbucket)

    • GIT_ACCESS_TOKEN - Git access token


    Version

    Check CLI version:
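As listed in the examples:

```shell
bitoarch --version
```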

    Excluding files, folders, or branches with filters

    Customize which files, folders, and Git branches are reviewed when the Agent triggers automatically on pull requests.

    The AI Code Review Agent offers powerful filters to exclude specific files and folders from code reviews and gives you precise control over which Git branches are included in automated reviews.

    These filters can be configured at the Agent instance level, overriding the default behavior.

    Exclude Files and Folders filter

    A list of files/folders that the AI Code Review Agent will not review if they are present in the diff. You can specify the files/folders to exclude from the review by name or glob/regex pattern. The Agent will automatically skip any files or folders that match the exclusion list.

    This filter applies to both manual reviews initiated through the /review command and automated reviews triggered via webhook.

    Guide for Bitbucket

    Integrate the AI Code Review Agent into your Bitbucket workflow.

    Speed up code reviews by configuring the AI Code Review Agent with your Bitbucket repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using the /review command.

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.


    Secret: Enter your GitHub Personal Access Token (Classic) with repo access. We do not support fine-grained tokens currently. For more information, see the Prerequisites section.


    Core operations:

    • bitoarch index-repos - Trigger workspace repository indexing. Simple index without parameters.

    • bitoarch index-status - Check indexing status. View progress and state.

    • bitoarch index-repo-list - List all repositories. Example: bitoarch index-repo-list --status active

    Repository management:

    • bitoarch add-repo <namespace> - Add a single repository. Example: bitoarch add-repo myorg/myrepo

    • bitoarch remove-repo <namespace> - Remove a repository. Example: bitoarch remove-repo myorg/myrepo

    • bitoarch add-repos <file> - Load configuration from YAML. Example: bitoarch add-repos .bitoarch-config.yaml

    • bitoarch update-repos <file> - Update configuration from YAML. Example: bitoarch update-repos .bitoarch-config.yaml

    • bitoarch repo-info <name> - Get detailed repository info. Example: bitoarch repo-info myrepo --dependencies

    Service operations:

    • bitoarch status - View status of all services (docker ps-like output).

    • bitoarch health - Check health of all services. Example: bitoarch health --verbose

    • bitoarch info - Get platform information (version, ports, resources).

    Configuration:

    • bitoarch show-config - Show current configuration. Example: bitoarch show-config --raw

    • bitoarch update-api-key - Update Bito API key (interactive or with the --api-key flag).

    • bitoarch update-git-creds - Update Git provider credentials (interactive or with flags).

    • bitoarch rotate-mcp-token - Rotate MCP access token. Example: bitoarch rotate-mcp-token <new-token>

    MCP operations:

    • bitoarch mcp-test - Test MCP connection (verify server connectivity).

    • bitoarch mcp-tools - List available MCP tools. Example: bitoarch mcp-tools --details

    • bitoarch mcp-capabilities - Show MCP server capabilities. Example: bitoarch mcp-capabilities --output caps.json

    • bitoarch mcp-resources - List MCP resources (view available data sources).

    • bitoarch mcp-info - Show MCP configuration (display URL and token info).

    Output options:

    • --format json - JSON output, for automation/scripts.

    • --raw - Show full API response, for debugging.

    • --output json - Filtered JSON output, for index-status.

    • --help - Show command help (get usage information).

    Getting help:

    • bitoarch --help - Main menu with all commands.

    • bitoarch <command> --help - Command-specific help.

    Additional environment variables:

    • BITO_MCP_ACCESS_TOKEN - MCP server access token

    • CIS_*_EXTERNAL_PORT - Service external ports

    # Trigger repository indexing
    bitoarch index-repos
    
    # Check indexing status (default summary)
    bitoarch index-status
    
    # Full API response for debugging
    bitoarch index-status --raw
    
    # Machine-readable filtered JSON
    bitoarch index-status --output json
    
    # List all repositories
    bitoarch index-repo-list
    # Add a single repository
    bitoarch add-repo myorg/myrepo
    
    # Remove a repository
    bitoarch remove-repo myorg/myrepo
    
    # Load multiple repositories from YAML
    bitoarch add-repos .bitoarch-config.yaml
    
    # Update configuration
    bitoarch update-repos .bitoarch-config.yaml
    
    # Get repository details
    bitoarch repo-info myrepo
    # Check service status (docker ps-like)
    bitoarch status
    
    # Health check
    bitoarch health
    
    # Detailed health information
    bitoarch health --verbose
    
    # Platform information
    bitoarch info
    # Update API key (interactive)
    bitoarch update-api-key
    
    # Update API key with flag
    bitoarch update-api-key --api-key <key> --restart
    
    # Update Git credentials (interactive)
    bitoarch update-git-creds
    
    # Update Git credentials with flags
    bitoarch update-git-creds --provider github --token <token> --restart
    
    # Rotate MCP token
    bitoarch rotate-mcp-token <new-token>
    # Test MCP connection
    bitoarch mcp-test
    
    # List MCP tools
    bitoarch mcp-tools
    
    # Show detailed tool information
    bitoarch mcp-tools --details
    
    # Get server capabilities
    bitoarch mcp-capabilities
    
    # Save capabilities to file
    bitoarch mcp-capabilities --output capabilities.json
    
    # List resources
    bitoarch mcp-resources
    
    # Show MCP configuration
    bitoarch mcp-info
    # 1. Check services are running
    bitoarch status
    
    # 2. Add repositories
    bitoarch add-repos .bitoarch-config.yaml
    
    # 3. Trigger indexing
    bitoarch index-repos
    
    # 4. Monitor progress
    bitoarch index-status
    # Check health
    bitoarch health
    
    # View repositories
    bitoarch index-repo-list
    
    # Check index status
    bitoarch index-status
    # Single repository
    bitoarch add-repo myorg/newrepo
    
    # Multiple repositories from file
    bitoarch add-repos new-repos.yaml
    
    # Trigger re-indexing
    bitoarch index-repos
    # Check all services
    bitoarch status
    bitoarch health --verbose
    
    # View full configuration
    bitoarch show-config --raw
    
    # Test MCP connection
    bitoarch mcp-test
    
    # Check indexing status with details
    bitoarch index-status --raw
    # Main help
    bitoarch --help
    
    # Command help
    bitoarch index-repos --help
    bitoarch add-repo --help
    bitoarch mcp-tools --help
    bitoarch --version

    • GitLab

  • Bitbucket

    You'll need an account on one of these Git providers to index your repositories with AI Architect.

  • View Guide

  • GitLab Personal Access Token: To use GitLab repositories with AI Architect, a token with API access is required.

    • View Guide

  • Bitbucket API Token: To use Bitbucket repositories with AI Architect, an API token is required.

    • View Guide

  • Configuration for Windows (WSL2):

    If you're using Windows with WSL2, you need to enable Docker integration with your WSL distribution:

    1. Open Docker Desktop

    2. Go to Settings > Resources > WSL Integration

    3. Enable integration for your WSL distribution (e.g., Ubuntu)

    4. Click Apply

    Note: Replace bito-cis-*.tar.gz with the actual name of the file you downloaded.

    Navigate to the extracted folder:

    Note: Replace bito-cis-* with your actual folder name.

    Note for Windows users (WSL2): To navigate to a Windows folder from the WSL terminal, use a path like /mnt/c/Users/<your-username>/Downloads.

    Installing dependencies:

    The AI Architect setup process will automatically check for required tools on your system. If any dependencies are missing (such as jq, which is needed for JSON processing), you'll be prompted to install them. Simply type y and press Enter to proceed with the installation.

    You'll need to provide the following details when prompted:

    Note: Refer to the Prerequisites section for details on how to obtain these.

    • Bito API Key (required) - Enter your Bito Access key and press Enter.

    • Select your Git provider (required):

      You'll be prompted to choose your Git provider:

      1. GitLab

      2. GitHub

      3. Bitbucket

      Enter the number corresponding to your Git provider and press Enter.

    • Is your Git provider self-hosted or cloud-based?

      • Type y for enterprise/self-hosted instances (like https://github.company.com) and enter your custom domain URL

      • Type n for standard cloud providers (github.com, gitlab.com, bitbucket.org)

      Press Enter to continue.

    • Git Access Token (required) - Enter personal access token for your Git provider and press Enter.

    • Configure LLM API keys (required) - Choose which AI model provider(s) to configure:

      1. Anthropic

      2. Grok

      3. OpenAI

      Enter the number corresponding to your AI model provider, then provide your API key when prompted.

    • Generate a secure MCP access token? - You'll be asked if you want Bito to create a secure token to prevent unauthorized access to your MCP server:

      • Type y to generate a secure access token (recommended)

      • Type n to skip token generation

      Press Enter to continue.

    Note: Once the setup is complete, your Bito MCP URL and Bito MCP Access Token will be displayed. Make sure to store them in a safe place; you'll need them later when configuring the MCP server in your AI coding agent (e.g., Claude Code, Cursor, Windsurf, or GitHub Copilot (VS Code)).

  • in_progress - Indexing is running

  • completed - All repositories indexed

  • failed - Check logs for errors


    By default, these files are excluded: *.xml, *.json, *.properties, .gitignore, *.yml, *.md

    Examples

    Note:

    • Patterns are case-sensitive.

    • Do not use double quotes, single quotes, or commas in patterns.

    • You can use either Unix file-system glob patterns or regex.

    Exclusion rules for files and folders:

    • Exclude all properties files in all folders and subfolders. Pattern: *.properties. Matched examples: resource/config.properties, resource/server/server.properties. Not matched: resource/config.yaml, resource/config.json

    • Exclude all files, folders, and subfolders in folders starting with resources. Pattern: resources/. Matched examples: resources/application.properties, resources/config/config.yaml. Not matched: app/resources/file.txt, config/resources/service.properties

    • Exclude all files, folders, and subfolders in the folder src/com/resources. Pattern: src/com/resources/
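As a sanity check of the first rule above, Python's fnmatch (whose * also crosses path separators, like the examples shown) agrees; this only illustrates the glob idea, and Bito's own matcher may differ in edge cases:

```python
from fnmatch import fnmatch

# *.properties matches .properties files at any depth
assert fnmatch("resource/config.properties", "*.properties")
assert fnmatch("resource/server/server.properties", "*.properties")

# ...but not files with other extensions
assert not fnmatch("resource/config.yaml", "*.properties")
```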

    Include Source/Target Branches filter

    This filter defines which pull requests trigger automated reviews based on their source or target branch, allowing you to focus on critical code and avoid unnecessary reviews or AI usage.

    By default, pull requests merging into the repository’s default branch are subject to review. To extend review coverage, additional branches may be specified using explicit branch names or valid glob/regex patterns. When the source or target branch of a pull request matches one of the patterns on your inclusion list, Bito’s AI Code Review Agent will trigger an automated review.

    This filter applies only to automatically triggered reviews. Users can still trigger reviews manually via the /review command.

    Watch video tutorial:

    Examples

    Note:

    • Patterns are case-sensitive.

    • Do not use double quotes, single quotes, or commas in patterns.

    • You can use either Unix file-system glob patterns or regex.

    Inclusion rules for branches:

    • Include any branch that starts with BITO-. Pattern: BITO-*. Matched examples: BITO-feature, BITO-123. Not matched: feature-BITO, development

    • Include any branch that does not start with BITO-. Pattern: ^(?!BITO-).*. Matched examples: feature-123, release-v1.0. Not matched: BITO-feature, BITO-123

    • Include any branch which is not exactly BITO. Pattern: ^(?!BITO$).*
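The regex patterns above use standard negative lookahead and can be checked with any common regex engine; for example, in Python:

```python
import re

def matches(pattern, branch):
    # Anchored at the start, like the ^-prefixed filter patterns above
    return re.match(pattern, branch) is not None

# Include any branch that does not start with BITO-
assert matches(r"^(?!BITO-).*", "feature-123")
assert not matches(r"^(?!BITO-).*", "BITO-feature")

# Include any branch which is not exactly BITO
assert matches(r"^(?!BITO$).*", "BITO-123")
assert not matches(r"^(?!BITO$).*", "BITO")
```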

    Draft pull requests filter

    A binary setting that enables/disables automated review of pull requests (PR) based on the draft status. Enter True to disable automated review for draft pull requests, or False to enable it.

    The default value is True, which skips automated reviews of draft PRs.

    How to configure the filters?

    Bito Cloud (Bito-hosted Agent)

    You can configure filters using the Agent configuration page. For detailed instructions, please refer to the Install/run Using Bito Cloud documentation page.

    CLI or webhooks service (self-hosted Agent)

    You can configure filters using the bito-cra.properties file. Check the options exclude_branches, exclude_files, and exclude_draft_pr for more details.

    GitHub Actions (self-hosted Agent)

    You can configure filters using the GitHub Actions repository variables: EXCLUDE_BRANCHES, EXCLUDE_FILES, and EXCLUDE_DRAFT_PR. For detailed instructions, please refer to the Install/Run via GitHub Actions documentation page.

    AI Code Review Agent

    Video tutorial

    Installation and configuration steps

    Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

    Step 1: Log in to Bito

    Log in to Bito Cloud and select a workspace to get started.

    Step 2: Open the Code Review Agents setup

    Click Repositories under the CODE REVIEW section in the sidebar.

    Step 3: Select your Git provider

    Bito supports integration with the following Git providers:

    • GitHub

    • GitHub (Self-Managed)

    • GitLab

    • GitLab (Self-Managed)

    • Bitbucket

    • Bitbucket (Self-Managed)

    Since we are setting up the Agent for Bitbucket, select Bitbucket to proceed.

    Step 4: Connect Bito to Bitbucket

    To enable pull request reviews, you’ll need to connect your Bito workspace to your Bitbucket account.

    If your Bitbucket access control settings block external services from interacting with the Bitbucket server, whitelist all of Bito's gateway IP addresses to ensure Bito can access your repositories. The Agent response can come from any of these IPs.

    • List of IP addresses to whitelist:

      • 18.188.201.104

      • 3.23.173.30

      • 18.216.64.170

    See the Bitbucket documentation for more information.

    Click Install Bito App for Bitbucket. This will redirect you to Bitbucket.

    Now, authorize the Bito App to access your Bitbucket repositories.

    Select your Bitbucket workspace from the Authorize for workspace dropdown menu and then click Grant access. Once completed, you will be redirected to Bito.

    Note: You'll only see Bitbucket workspaces where you have Admin access. If no workspaces appear in the dropdown, it means your account doesn’t have admin access to any workspace. To connect a workspace, make sure you have admin access for it.

    Step 5: Enable AI Code Review Agent on repositories

    After connecting Bito to your Bitbucket account, you'll see a list of repositories that Bito has access to.

    Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

    To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

    Learn more

    Step 6: Automated and manual pull request reviews

    Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

    1. Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.

    2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and click Add comment now to submit it. This action will start the code review process.

    Note: After typing /review, add a space inside the comment box to ensure that /review is not highlighted as a Bitbucket slash command so that the comment can be posted correctly.

    The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.

    Note: To enhance efficiency, the automated code reviews are only triggered for pull requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

    To review additional branches, you can use the Include Source/Target Branches filter. Bito will review pull requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews. You can still trigger reviews manually at any time via the /review command.

    The AI Code Review Agent automatically reviews code changes up to 5000 lines when a pull request is created. For larger changes, you can use the /review command.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.

    Step 7: Specialized commands for code reviews

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.

    Step 8: Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Step 9: Configure Agent settings

    Agent settings let you control how reviews are performed, ensuring feedback is tailored to your team’s needs. By adjusting the options, you can:

    • Make reviews more focused and actionable.

    • Apply your own coding standards.

    • Reduce noise by excluding irrelevant files or branches.

    • Add extra checks to improve code quality and security.

    Learn more

    Screenshots

    Screenshot # 1

    AI-generated pull request (PR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a pull request.

    Changelist in AI Code Review Agent's feedback.

    Screenshot # 3

    AI code review feedback posted as comments on the pull request.


    Guide for GitHub Copilot (VS Code)

    Integrate GitHub Copilot in VS Code with AI Architect for more accurate, codebase-aware AI assistance.

    Use Bito's AI Architect with GitHub Copilot in VS Code to enhance your AI-powered coding experience.

    Once connected via MCP (Model Context Protocol), GitHub Copilot can leverage AI Architect’s deep contextual understanding of your project, enabling more accurate code suggestions, explanations, and code insights.

    Prerequisites

1. Follow the AI Architect installation instructions. Upon successful setup, you will receive a Bito MCP URL and Bito MCP Access Token that you need to enter in your coding agent.

    2. Download BitoAIArchitectGuidelines.md file. You will need to copy/paste the content from this file later when configuring AI Architect.

      • Note: This file contains best practices, usage instructions, and prompting guidelines for the Bito AI Architect MCP server. The setup will work without this file, but including it helps AI tools interact more effectively with the Bito AI Architect MCP server.

    3. Requires Visual Studio Code version 1.99 or later (check with code --version)

    4. GitHub Copilot extension installed and enabled

    5. GitHub account with Copilot access

    Set up AI Architect

    Follow the setup instructions for your operating system:

    Windows

    1

    Ensure VS Code is up to date

    1. Open VS Code

    macOS

    1

    Ensure VS Code is up to date

    1. Open VS Code

    Linux

    1

    Ensure VS Code is up to date

    1. Open VS Code

    Guide for GitLab (Self-Managed)

    Integrate the AI Code Review Agent into your self-hosted GitLab workflow.

    Speed up code reviews by configuring the AI Code Review Agent with your GitLab (Self-Managed) server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a merge request, as well as how to manually initiate reviews using available commands.

    The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the Team Plan. For detailed pricing information, visit our Pricing page.

    Video tutorial

    coming soon...

    Prerequisites

    Before proceeding, ensure you've completed all necessary prerequisites.

    1. Create a GitLab Personal Access Token:

    For GitLab merge request code reviews, a token with api scope is required. Make sure that the token is created by a GitLab user who has the Maintainer access role.

    Important: Bito posts comments using the GitLab user account linked to the Personal Access Token used during setup. To display "Bito" instead of your name, create a separate user account (e.g., Bito Agent) and use its token for integration.

    We recommend setting the token expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.

    Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.

    2. Authorizing a GitLab Personal Access Token for use with SAML single sign-on:

    If your GitLab organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.

For more information, please refer to the following GitLab documentation:

• SAML SSO for GitLab.com groups

• SAML SSO for GitLab Self-Managed

• Password generation for users created through SAML

    Installation and configuration steps

    Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:

    Step 1: Log in to Bito

Log in to Bito Cloud and select a workspace to get started.

    Step 2: Open the Code Review Agents setup

Click Repositories under the CODE REVIEW section in the sidebar.

    Step 3: Select your Git provider

    Bito supports integration with the following Git providers:

    • GitHub

    • GitHub (Self-Managed)

    • GitLab

    • GitLab (Self-Managed)

    Since we are setting up the Agent for GitLab (Self-Managed) server, select GitLab (Self-Managed) to proceed.

    Supported versions:

    • GitLab (Self-Managed): 15.5 and above

    Step 4: Connect Bito to GitLab

    To enable merge request reviews, you’ll need to connect your Bito workspace to your GitLab (Self-Managed) server.

    If your network blocks external services from interacting with the GitLab server, whitelist all of Bito's gateway IP addresses in your firewall to ensure Bito can access your self-hosted repositories. The Agent response can come from any of these IPs.

    • List of IP addresses to whitelist:

      • 18.188.201.104

    You need to enter the details for the below mentioned input fields:

• Hosted GitLab URL: This is the domain portion of the URL where your GitLab (Self-Managed) server is hosted (e.g., https://yourcompany.gitlab.com). Please check with your GitLab administrator for the correct URL.

• Personal Access Token: Generate a GitLab Personal Access Token with api scope in your GitLab (Self-Managed) account and enter it into the Personal Access Token input field. For guidance, refer to the instructions in the Prerequisites section.

    Click Validate to ensure the token is functioning properly.

    If the token is successfully validated, you can select your GitLab Group from the dropdown menu.

    • Note: You can select multiple groups after the setup is complete.

    Click Connect Bito to GitLab to proceed.

    Step 5: Enable AI Code Review Agent on repositories

    After connecting Bito to your GitLab self-managed server, you'll see a list of repositories that Bito has access to.

    Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.

    To customize the Agent’s behavior, you can edit existing configurations or create new Agents as needed.

    Step 6: Automated and manual merge request reviews

    Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:

    1. Automated code review: By default, the Agent automatically reviews all new merge requests and provides detailed feedback.

    2. Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the merge request and submit it. This action will start the code review process.

    The AI-generated code review feedback will be posted as comments directly within your merge request, making it seamless to view and address suggestions right where they matter most.

    Note: To enhance efficiency, the automated code reviews are only triggered for merge requests merging into the repository’s default branch. This prevents unnecessary processing and Advanced AI requests usage.

To review additional branches, you can use the Include Source/Target Branches filter. Bito will review merge requests when the source or target branch matches the list.

The Include Source/Target Branches filter applies only to automatically triggered reviews. You can still trigger reviews manually at any time via the /review command.

    The AI Code Review Agent automatically reviews code changes up to 5000 lines when a merge request is created. For larger changes, you can use the /review command.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the merge request.

    Step 7: Specialized commands for code reviews

    Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.

    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

• /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

For more details, refer to Available Commands.

    Step 8: Chat with AI Code Review Agent

    Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.

    To start the conversation, type your question in the comment box within the inline suggestions on your merge request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.

    Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.

    Step 9: Configure Agent settings

Agent settings let you control how reviews are performed, ensuring feedback is tailored to your team’s needs. By adjusting the options, you can:

    • Make reviews more focused and actionable.

    • Apply your own coding standards.

    • Reduce noise by excluding irrelevant files or branches.

    • Add extra checks to improve code quality and security.

    Managing multiple GitLab groups in Bito Cloud

Bito Cloud allows you to connect and manage multiple GitLab groups for GitLab (Self-Managed) integrations. Use the instructions below to add or remove GitLab groups for AI code reviews.

    How to add multiple GitLab groups?

    You can connect more than one GitLab group to Bito for AI code reviews.

    Follow these steps to add additional groups:

1. Go to the Repositories page.

    2. At the top-center of the page, click the “+” (plus) icon next to the currently selected GitLab group name, then select Add group from the dropdown menu.

    3. A popup will appear. Use the dropdown menu to select a GitLab group you want to add.

    4. Click the Add group button.

    Once added, all repositories from that group will be listed and available for AI code reviews under the default agent.

    Note: This multiple GitLab groups feature is currently available only for GitLab (Self-Managed) integrations.

    How to remove a GitLab group?

    To disconnect a GitLab group from Bito Cloud:

1. Go to the Repositories page.

    2. At the top-center of the page, click the three dots icon next to the currently selected GitLab group name, then select Manage groups from the dropdown menu.

    3. A popup will appear showing a list of connected groups. Click the “✕” (cross) icon next to the group you want to remove.

    4. Confirm the removal in the prompt.

    Once removed, the repositories from that group will no longer appear in Bito or be included in AI code reviews.

    How to select one or more GitLab Groups?

    When you have multiple GitLab groups connected in Bito Cloud, the group name at the top-center of the page becomes a dropdown menu.

    From this dropdown, you can:

    • Select a single group

    • Select multiple groups as needed

    • Select All groups

    The list of repositories displayed below will update automatically based on your selection—showing only the repositories from the selected groups.

    Screenshots

    Screenshot # 1

    AI-generated merge request (MR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a merge request.

    Screenshot # 3

    AI code review feedback posted as comments on the merge request.

    Agent Configuration: bito-cra.properties File

    Setting up your agent: understanding the bito-cra.properties file

    Note: This file is only available for people who are using the self-hosted version of AI Code Review Agent.

    The bito-cra.properties file offers a comprehensive range of options for configuring the AI Code Review Agent, enhancing its flexibility and adaptability to various workflow requirements.
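As a reference, a minimal bito-cra.properties for a one-time CLI review might look like the sketch below. The URL and token values are placeholders, and a real file may set many more of the options listed in the table that follows:

```properties
# Run the agent once in CLI mode against a single merge/pull request
mode=cli
pr_url=https://gitlab.com/your-org/your-repo/-/merge_requests/42
git.provider=GITLAB
git.access_token=<your-git-personal-access-token>
bito_cli.bito.access_key=<your-bito-access-key>

# Optional tuning
static_analysis=True
review_comments=2
exclude_draft_pr=True
```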

    bito-cra.properties Available Options

    Property Name
    Supported Values
    Is Mandatory?
    Description


2. Go to Help > Check for Updates

    3. Install any available updates

    4. Verify version: Open terminal and run code --version

    2

    Enable Agent mode

    1. Press Ctrl + , to open Settings

    2. In the search bar, type: chat.agent.enabled

    3. Check the box to enable Chat: Agent Enabled

    3

    Choose configuration method

    You have two options:

    Option A: Workspace configuration (recommended for team projects)

    • Location: [ProjectRoot]\.vscode\mcp.json

    • Shared with team via version control

    • Project-specific

    Option B: User configuration (personal, all workspaces)

    • Location: %APPDATA%\Code\User\settings.json

    • Only available to you across all projects

    • Global configuration

    4

    Workspace configuration (Option A)

    1. In your project root, create folder: .vscode

    2. Inside .vscode, create file: mcp.json

    3. Open with VS Code or Notepad

    4. Paste:
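A typical workspace mcp.json for a remote MCP server follows the sketch below. The server name BitoAIArchitect, the sse transport type, and the Authorization header format are assumptions based on common VS Code MCP configurations, so prefer the exact snippet shown during your Bito setup if it differs:

```json
{
  "servers": {
    "BitoAIArchitect": {
      "type": "sse",
      "url": "Enter_MCP_Server_URL_Here",
      "headers": {
        "Authorization": "Bearer Enter_Auth_Token_Here"
      }
    }
  }
}
```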

    Note:

    • Replace the text Enter_MCP_Server_URL_Here with the actual MCP server URL.

    • Replace the text Enter_Auth_Token_Here with the actual auth token.

5. Save the file (Ctrl + S)

    5

    User configuration (Option B) (Alternative)

    1. Press Ctrl + Shift + P to open Command Palette

    2. Type: Preferences: Open User Settings (JSON)

    3. Press Enter

    4. Add this configuration to your settings.json:
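In settings.json the same server definition sits under an "mcp" key. The sketch below makes the same assumptions as the workspace config (server name, sse transport, header format), so use the exact snippet from your Bito setup if it differs:

```json
{
  "mcp": {
    "servers": {
      "BitoAIArchitect": {
        "type": "sse",
        "url": "Enter_MCP_Server_URL_Here",
        "headers": {
          "Authorization": "Bearer Enter_Auth_Token_Here"
        }
      }
    }
  }
}
```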

    Note: If settings.json already has content, make sure to add the "mcp" section properly within the existing JSON structure.

    Note:

    • Replace the text Enter_MCP_Server_URL_Here with the actual MCP server URL.

    • Replace the text Enter_Auth_Token_Here with the actual auth token.

5. Save (Ctrl + S)

    6

    Start the MCP server

    1. If using workspace config, open .vscode/mcp.json in VS Code

    2. A Start button will appear above the configuration in the editor

    3. Click Start to activate the server

    4. Wait for confirmation that the server has started

    7

    Verify setup

    1. Open GitHub Copilot Chat (Ctrl + Alt + I)

    2. Click the dropdown at the bottom and select Agent mode

    3. Click the Tools icon (wrench symbol) in the chat interface

    4. Verify "BitoAIArchitect" appears in the list of available tools

    5. You should see the available tools from the BitoAIArchitect MCP server

    8

    Optional - Add project guidelines

    1. In your project root, create folder: .github

    2. Inside .github, create file: copilot-instructions.md

    3. Copy and paste ALL contents of BitoAIArchitectGuidelines.md into this file

    4. Save the file

    File paths:

    • Workspace config: [ProjectRoot]\.vscode\mcp.json

    • User config: C:\Users\[YourUsername]\AppData\Roaming\Code\User\settings.json

    • Guidelines (optional): [ProjectRoot]\.github\copilot-instructions.md

2. Go to Code > Check for Updates (or Help > Check for Updates)

    3. Install any available updates

    4. Verify version in Terminal: code --version

    2

    Enable Agent mode

    1. Press Cmd + , to open Settings

    2. In the search bar, type: chat.agent.enabled

    3. Check the box to enable Chat: Agent Enabled

    3

    Choose configuration method

    Option A: Workspace configuration (recommended for team projects)

    • Location: [ProjectRoot]/.vscode/mcp.json

    • Shared with team, project-specific

    Option B: User configuration (personal, all workspaces)

    • Location: ~/Library/Application Support/Code/User/settings.json

    • Global across all projects

    4

    Workspace configuration (Option A)

    1. In Terminal, navigate to your project root

    2. Run:

3. Paste:

    Note:

    • Replace the text Enter_MCP_Server_URL_Here with the actual MCP server URL.

    • Replace the text Enter_Auth_Token_Here with the actual auth token.

4. Save: Ctrl + O, Enter, Ctrl + X

    5

    User configuration (Option B) (Alternative)

    1. Press Cmd + Shift + P to open Command Palette

    2. Type: Preferences: Open User Settings (JSON)

    3. Press Enter

    4. Add this configuration to your settings.json:

    Note: If settings.json already has content, make sure to add the "mcp" section properly within the existing JSON structure.

    Note:

    • Replace the text Enter_MCP_Server_URL_Here with the actual MCP server URL.

    • Replace the text Enter_Auth_Token_Here with the actual auth token.

5. Save: Cmd + S

    6

    Start the MCP server

    1. Open the mcp.json file in VS Code

    2. Click Start button that appears above the configuration

    3. Wait for server to activate

    7

    Verify setup

    1. Open Copilot Chat (Cmd + Alt + I or Cmd + Shift + I)

    2. Select Agent mode from the dropdown

    3. Click Tools icon to view available tools

    4. Verify "BitoAIArchitect" is listed

    8

    Optional - Add project guidelines

    Copy and paste ALL contents of BitoAIArchitectGuidelines.md into this file

    File paths:

    • Workspace: [ProjectRoot]/.vscode/mcp.json

    • User: ~/Library/Application Support/Code/User/settings.json

    • Guidelines: [ProjectRoot]/.github/copilot-instructions.md

2. Go to Help > Check for Updates

    3. Install updates if available

    4. Verify: code --version in terminal

    2

    Enable Agent mode

    1. Press Ctrl + , to open Settings

    2. Search: chat.agent.enabled

    3. Enable Chat: Agent Enabled

    3

    Choose configuration method

    Option A: Workspace - [ProjectRoot]/.vscode/mcp.json (recommended)

    Option B: User - ~/.config/Code/User/settings.json (global)

    4

    Workspace configuration (Option A)

    Paste:

    Note:

    • Replace the text Enter_MCP_Server_URL_Here with the actual MCP server URL.

    • Replace the text Enter_Auth_Token_Here with the actual auth token.

    Save: Ctrl + O, Enter, Ctrl + X

    5

    User configuration (Option B) (Alternative)

    1. Press Ctrl + Shift + P

    2. Type: Preferences: Open User Settings (JSON)

    3. Add to settings.json:

    Note: If settings.json already has content, make sure to add the "mcp" section properly within the existing JSON structure.

    Note:

    • Replace the text Enter_MCP_Server_URL_Here with the actual MCP server URL.

    • Replace the text Enter_Auth_Token_Here with the actual auth token.

4. Save: Ctrl + S

    6

    Start server & verify

    1. Open mcp.json in VS Code, click Start button

    2. Open Copilot Chat (Ctrl + Alt + I)

    3. Select Agent mode

    4. Click Tools icon to verify "BitoAIArchitect" is listed

    7

    Optional - Add guidelines

1. In Terminal, run:

       mkdir -p .github
       nano .github/copilot-instructions.md

    2. Copy and paste ALL contents of BitoAIArchitectGuidelines.md into this file

    3. Save: Ctrl + O, Enter, Ctrl + X

    File paths:

    • Workspace: [ProjectRoot]/.vscode/mcp.json

    • User: ~/.config/Code/User/settings.json

    • Guidelines: [ProjectRoot]/.github/copilot-instructions.md

code_feedback

    • True

    • False

    No

    Setting it to True activates general code review comments to identify functional issues. If set to False, general code review will not be conducted.

    bito_cli.bito.access_key

    A valid Bito Access Key generated through Bito's web UI.

    Yes

    Bito Access Key is an alternative to standard email and OTP authentication.

    git.provider

    • GITLAB

    • GITHUB

    • BITBUCKET

    Yes, if the mode is CLI.

    The name of git repository provider.

    git.access_token

    A valid Git access token provided by GITLAB or GITHUB or BITBUCKET

    Yes

    You can use a personal access token in place of a password when authenticating to GitHub/GitLab/BitBucket in the command line or with the API.

    git.domain

    A URL where Git is hosted.

    No

    It is used to enter the custom URL of self-hosted GitHub/GitLab Enterprise.

    static_analysis

    • True

    • False

    No

    Enable or disable static code analysis, which is used to uncover functional issues in the code.

    static_analysis_tool

    • fb_infer

    • astral_ruff

    • mypy

    No

    Comma-separated list of static analysis tools to run (e.g., fb_infer,astral_ruff,mypy).

    linters_feedback

    • True

    • False

    No

    Enables feedback from linters like ESLint, golangci-lint, and Astral Ruff.

    secret_scanner_feedback

    • True

    • False

    No

    Enables detection of secrets in code. For example, passwords, API keys, sensitive information, etc.

    dependency_check

    • True

    • False

    No

This feature is designed to identify security vulnerabilities in open-source dependency packages, specifically for JS/TS/Node.js and GoLang. If disabled, dependency packages are not checked for security vulnerabilities.

    dependency_check.snyk_auth_token

    A valid authentication token for accessing Snyk's cloud-based security database.

    No

    If not provided, access to Snyk's cloud-based security database for checking security vulnerabilities in open-source dependency packages will not be available.

    code_context

    • True

    • False

    No

    Enables enhanced code context awareness.

    server_port

    A valid and available TCP port number.

    No

    This is applicable when the mode is set to server. If not specified, the default value is 10051.

    review_comments

    • 1

    • 2

    No

    Set the value to 1 to display the code review in a single post, or 2 to show code review as inline comments, placing suggestions directly beneath the corresponding lines in each file for clearer guidance on improvements.

    The default value is 2.

    review_scope

    • security

    • performance

    • scalability

    • codeorg

    No

    Specialized commands to perform detailed analyses on specific aspects of your code. You can provide comma-separated values to perform multiple types of code analysis simultaneously.

    include_source_branches

    Glob/regex pattern.

    No

    Comma-separated list of branch patterns (glob/regex) to allow as pull request sources.

    include_target_branches

    Glob/regex pattern.

    No

    Comma-separated list of branch patterns (glob/regex) to allow as pull request targets.

    exclude_files

    Glob/regex pattern.

    No

    A list of files/folders that the AI Code Review Agent will not review if they are present in the diff.

    By default, these files are excluded: *.xml, *.json, *.properties, .gitignore, *.yml, *.md

    exclude_draft_pr

    • True

    • False

    No

    A binary setting that enables/disables automated review of pull requests (PR) based on the draft status. The default value is True which skips automated review of draft PR.

    cra_version

    • latest

    • Any specific version tag

    No

    Sets the agent version to run (latest or a specific version tag).

    post_as_request_changes

    • True

    • False

    No

    Posts feedback as 'Request changes' review comments. Depending on your organization's Git settings, you may need to resolve all comments before merging.

    support_email

    Email address

    No

    Contact email shown in error messages.

    suggestion_mode

    • essential

    • comprehensive

    No

    Controls AI suggestion verbosity. Available options are essential and comprehensive.

    In Essential mode, only critical issues are posted as inline comments, and other issues appear in the main review summary under "Additional issues".

In Comprehensive mode, Bito also includes minor suggestions and potential nitpicks as inline comments.

    mode

    • cli

    • server

    Yes

    Whether to run the Docker container in CLI mode for a one-time code review or as a webhooks service to continuously monitor for code review requests.

    pr_url

    Pull request URL in GitLab, GitHub and Bitbucket

    Yes, if the mode is CLI.

    The pull request provides files with changes and the actual code modifications. When the mode is set to server, the pr_url is received either through a webhook call or via a REST API call.

    This release only supports webhook calls; other REST API calls are not yet supported.


    resources/application.properties, resources/config/config.yaml

    app/resources/file.txt, config/resources/service.properties

Use case: Exclude all files, folders, and subfolders in a resource subfolder under the parent folder src
    Pattern: src/*/resource/*
    Excluded: src/com/resource/main.html, src/com/resource/script/file.css, src/com/resource/app/script.js
    Not excluded: src/resource/file.txt, src/com/config/file.txt, app/com/config/file.txt

    Use case: Exclude non-css files from folder src/com/resource/ and its subfolders
    Pattern: ^src\\/com\\/resource\\/(?!.*\\.css$).*$
    Excluded: src/com/resource/main.html, src/com/resource/app/script.js
    Not excluded: src/com/config/file.txt, src/com/resource/script/file.css

    Use case: Exclude the specific file controller/webhook_controller.go
    Pattern: controller/webhook_controller.go
    Excluded: controller/webhook_controller.go
    Not excluded: controller/controller.go, controller/webhook_service.go

    Use case: Exclude non-css files from folders starting with config and their subfolders
    Pattern: ^config\\/(?!.*\\.css$).*$
    Excluded: config/server.yml, config/util/conf.properties
    Not excluded: config/profile.css, config/styles/main.css

    Use case: Exclude all files and folders
    Pattern: *
    Excluded: resource/file.txt, config/file.properties, app/folder/
    Not excluded: -

    Use case: Exclude all files and folders starting with the name bito in the module folder
    Pattern: module/bito*
    Excluded: module/bito123, module/bitofile.js, module/bito/file.js
    Not excluded: module/filebito.js, module/file2.txt, module/util/file.txt

    Use case: Exclude single-character folder names
    Pattern: */?/*
    Excluded: src/a/file.txt, app/b/folder/file.yaml
    Not excluded: folder/file.txt, ab/folder/file.txt

    Use case: Exclude all folders, subfolders, and their files except those under the service folder
    Pattern: ^(?!service\\/).*$
    Excluded: config/file.txt, resources/file.yaml
    Not excluded: service/file.txt, service/config/file.yaml

    Use case: Exclude all files in all folders except .py, .go, and .java files
    Pattern: ^(?!.*\\.(py|go|java)$).*$
    Excluded: config/file.txt, app/main.js
    Not excluded: main.py, module/service.go, test/Example.java

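These exclude patterns can be sanity-checked locally before adding them to the Agent configuration. Below is a rough Python sketch using fnmatch for glob patterns and re for regex patterns; Bito's server-side matching may differ in edge cases:

```python
import fnmatch
import re

def glob_excluded(path: str, pattern: str) -> bool:
    """Check a glob-style exclude pattern; fnmatch's '*' also crosses '/'."""
    return fnmatch.fnmatchcase(path, pattern)

def regex_excluded(path: str, pattern: str) -> bool:
    """Check a regex-style exclude pattern anchored at the start of the path."""
    return re.match(pattern, path) is not None

# Glob: exclude everything under a resource subfolder inside src/<anything>/
print(glob_excluded("src/com/resource/main.html", "src/*/resource/*"))   # True
print(glob_excluded("src/resource/file.txt", "src/*/resource/*"))        # False

# Regex: exclude non-css files under folders starting with "config"
print(regex_excluded("config/server.yml", r"^config\/(?!.*\.css$).*$"))  # True
print(regex_excluded("config/profile.css", r"^config\/(?!.*\.css$).*$")) # False
```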

    feature-BITO, development

    BITO

| Use case | Pattern | Included branches | Branches not included |
| --- | --- | --- | --- |
| Include branches like release/v1.0 and release/v1.0.1 | `release/v\d+\.\d+(\.\d+)?` | release/v1.0, release/v1.0.1 | release/v1, release/v1.0.x |
| Include any branch ending with -test | `*-test` | feature-test, release-test | test-feature, release-testing |
| Include branches that contain the keyword main | `main` | main, main-feature, mainline | master, development |
| Include the branch named main | `^main$` | main | main-feature, mainline, master, development |
| Include any branch name that does not start with feature- or release- | `^(?!release-\|feature-).*$` | hotfix-123, development | feature-123, release-v1.0 |
| Include branches with names containing digits | `.*\d+.*` | feature-123, release-v1.0 | feature-abc, main |
| Include branches with names ending with test or testing | `.*(test\|testing)$` | feature-test, bugfix-testing | testing-feature, test-branch |
| Include branches with names containing the substring test | `*test*` | feature-test, test-branch, testing | feature, release |
| Include branches with names containing exactly three characters | `^.{3}$` | abc, 123 | abcd, ab |
| Include branch names starting with release, hotfix, or development, but not starting with Bito or feature | `^(?!Bito\|feature)(release\|hotfix\|development).*$` | release-v1.0, hotfix-123, development-xyz | Bito-release, feature-hotfix, main-release |
| Include all branches whose names do not contain a version number like 1.0 or 1.0.1 | `^(?!.*\b\d+\.\d+(\.\d+)?\b).*$` | feature-xyz, main | release-v1.0, hotfix-1.0.1 |
| Include all branches containing a special character (other than - and .) | `^.*[^a-zA-Z0-9.-].*$` | feature-!abc, release-@123 | feature-123, release-v1.0 |
| Include all branches containing a space | `.*\s.*` | feature 123, release v1.0 | feature-123, release-v1.0 |
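The difference between a keyword match and an anchored match (the `main` vs `^main$` examples above) can be sketched with Python's `re` module; the branch names here are illustrative:

```python
import re

keyword = re.compile(r"main")    # keyword match: matches anywhere in the name
exact = re.compile(r"^main$")    # anchored: matches only the branch named main

branches = ["main", "main-feature", "mainline", "master", "development"]

matched_keyword = [b for b in branches if keyword.search(b)]
matched_exact = [b for b in branches if exact.fullmatch(b)]

print(matched_keyword)  # ['main', 'main-feature', 'mainline']
print(matched_exact)    # ['main']
```

Anchoring with `^` and `$` is the usual way to target exactly one branch name rather than every name containing the keyword.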

    mkdir -p .github
    nano .github/copilot-instructions.md
    mkdir -p .vscode
    nano .vscode/mcp.json
    {
      "servers": {
        "Bito": {
          "type": "http",
          "url": "Enter_MCP_Server_URL_Here", // e.g. "https://mcp.bito.ai/123456/mcp"
          "headers": {
            "Authorization": "Enter_Auth_Token_Here" // e.g. "Bearer xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
          },
          "timeout": 60000,
          "disabled": false
        }
      }
    }


    {
      "mcp": {
        "servers": {
          "Bito": {
            "type": "http",
            "url": "Enter_MCP_Server_URL_Here", // e.g. "https://mcp.bito.ai/123456/mcp"
            "headers": {
              "Authorization": "Enter_Auth_Token_Here" // e.g. "Bearer xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
            },
            "timeout": 60000,
            "disabled": false
          }
        }
      }
    }
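Since the samples above use //-style comments, they are JSONC rather than strict JSON. VS Code tolerates comments in these files, but if you want to validate the structure yourself, one approach is to strip the comments and parse the rest. A rough sketch (the comment-stripping regex is naive: it assumes no whitespace-preceded // appears inside a string value, which holds for the sample):

```python
import json
import re

sample = '''
{
  "servers": {
    "Bito": {
      "type": "http",
      "url": "Enter_MCP_Server_URL_Here", // e.g. "https://mcp.bito.ai/123456/mcp"
      "headers": { "Authorization": "Enter_Auth_Token_Here" },
      "timeout": 60000,
      "disabled": false
    }
  }
}
'''

# Drop line comments that are preceded by whitespace, then parse as strict JSON
strict = re.sub(r"\s//[^\n]*", "", sample)
config = json.loads(strict)

bito = config["servers"]["Bito"]
assert bito["type"] == "http"
assert "Authorization" in bito["headers"]
print("mcp.json structure OK")
```

A check like this catches missing braces or misnamed keys before the editor ever tries to connect to the MCP server.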

    Install/run via webhooks service

    The webhooks service is best suited for continuous, automated reviews.

    Prerequisites

    Minimum System Requirements

    A machine with the following minimum specifications is recommended for Docker image deployment and for obtaining optimal performance of the AI Code Review Agent.

    Requirement
    Minimum Specification

    Supported Operating Systems

    • Windows

    • Linux

    • macOS


    OS Prerequisites

    Operating System
    Installation Steps

    Required Access Tokens

    • Bito Access Key: Obtain your Bito Access Key.

    • GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.

    • GitLab Personal Access Token: For GitLab PR code reviews, a token with API access is required.

    • Snyk API Token (Auth Token): For Snyk vulnerability reports, obtain a Snyk API Token.


    Installation and Configuration Steps

1. Prerequisites: Before proceeding, ensure you've completed all necessary prerequisites for the self-hosted AI Code Review Agent.

    2. Server Requirement: Ensure you have a server with a domain name or IP address.

    3. Start Docker: Initialize Docker on your server.

    4. Clone the repository:

    • Note the full path to the “cra-scripts” folder for later use.

5. Open Command Line:

  • Use Bash for Linux and macOS.

  • Use PowerShell for Windows.

6. Set Directory:

7. Configure Properties:

  • Open the bito-cra.properties file from the “cra-scripts” folder in a text editor. Detailed information for each property is provided on the Agent Configuration: bito-cra.properties File page.

  • Set mandatory properties:

Note: Valid values for git.provider are GITHUB or GITLAB.

Note: Detailed information for each property is provided on the Agent Configuration: bito-cra.properties File page.

Check the Required Access Tokens guide to learn more about creating the access tokens needed to configure the Agent.

8. Run the Agent:

  • On Linux/macOS in Bash:

    • Run ./bito-cra.sh service start bito-cra.properties

This step might take time initially as it pulls the Docker image and performs the code review.

9. Provide Missing Property Values: The script may prompt for values of mandatory/optional properties if they are not preconfigured.

10. Copy Webhook Secret: During the script execution, a webhook secret is generated and displayed in the shell. Copy the secret displayed under "Use below as Gitlab and Github Webhook secret:" for use in GitHub or GitLab when setting up the webhook.

    Webhook Setup Guide

GitHub:

    • Login to your account.

    • Navigate to the main page of the repository. Under your repository name, click Settings.

    • In the left sidebar, click Webhooks.

GitLab:

    • Login to your account.

    • Select the repository where the webhook needs to be configured.

    • On the left sidebar, select Settings > Webhooks.

    • Select Add new webhook

BitBucket:

    • Login to your account.

    • Navigate to the main page of the repository. Under your repository name, click Repository Settings.

    • In the left sidebar, click Webhooks.

    • Click Add webhook.


    Using the AI Code Review Agent

    After configuring the webhook, you can invoke the AI Code Review Agent in the following ways:

Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please contact support.

    1. Automated Code Review: If the webhook is configured to be triggered on the Pull requests event (for GitHub) or Merge request event (for GitLab), the agent will automatically review new pull requests as soon as they are created and post the review feedback as a comment within your PR.

    2. Manually Trigger Code Review: To start the process, simply type /review in the comment box on the pull request and submit it. If the webhook is configured to be triggered on the Issue comments event (for GitHub) or Comments event (for GitLab), this action will initiate the code review process. The /review command prompts the agent to review the pull request and post its feedback directly in the PR as a comment.

    It may take a few minutes to get the code review posted as a comment, depending on the size of the pull request.
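On-demand reviews can also be triggered from a script by posting the /review command through the Git provider's API, since the agent only reacts to the resulting comment event. A hypothetical sketch for GitHub using only the standard library; the owner, repo, PR number, and token values are placeholders:

```python
import json
from urllib import request

def build_review_comment_request(owner: str, repo: str, pr_number: int, token: str):
    # Pull requests share the issue-comments endpoint on GitHub
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/comments"
    body = json.dumps({"body": "/review"}).encode()
    req = request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.github+json")
    return req

# Placeholder values for illustration only
req = build_review_comment_request("acme", "webapp", 42, "ghp_xxx")
print(req.full_url)
# request.urlopen(req)  # uncomment to actually post the comment
```

The posted comment looks exactly like one typed in the PR comment box, so the webhook's Issue comments event fires as usual.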

    Screenshots

    Screenshot # 1

    AI-generated pull request (PR) summary

    Screenshot # 2

    Changelist showing key changes and impacted files in a pull request.

    Screenshot # 3

    AI code review feedback posted as comments on the pull request.


    How to update the self-hosted AI Code Review Agent

    Please follow these steps:

    1. Update the Agent's repository:

      • Pull the latest changes from the repository by running the following command in your terminal, ensuring you are inside the repository folder:

      • git pull origin main


    Stop Docker Container

To stop the Docker container running as a service, use the command below.

    • On Linux/macOS in Bash: Run ./bito-cra.sh service stop

    • On Windows in PowerShell: Run ./bito-cra.ps1 service stop


    Check Status

To check the status of the Docker container running as a service, use the command below.

    • On Linux/macOS in Bash: Run ./bito-cra.sh service status

    • On Windows in PowerShell: Run ./bito-cra.ps1 service status

    Use cases and examples

    Supercharging development with AI

Bito AI assistance can help you generate new content or work with your existing code.

    Click on each use case for an example instruction and response.

  • Learn technical concepts

  • Generate test cases

  • Explain code

  • Comment method

  • Improve code performance

  • Check security

  • Find style issues

  • Generate Code

    Instruction

    Implement a simple rest API in Go.

    Response

    Command Syntax

    Instruction

    How to rebase a branch in git?

    Response

1. Checkout the branch you want to rebase:

    $ git checkout <branch>

2. Rebase your branch against the upstream branch:

    $ git rebase upstream/<branch>

3. Resolve any conflicts that arise.

4. Once all conflicts are resolved, run git status to verify that all files have been updated correctly.

5. Finally, push the rebased branch to the remote repository:

    $ git push origin <branch> --force

    Learn Technical Concepts

    Instruction

    Can you explain the B+ tree with a code example?

    Response

A B+ tree is a self-balancing tree data structure used in databases to store and retrieve data efficiently. It is a variation of the B-tree and is designed to minimize the number of disk accesses required for insertion, deletion, and retrieval operations. B+ trees are characterized by their order, which bounds the number of keys stored per node and the number of children per node.

For example, a B+ tree of order 5 can have up to 5 children per node and up to 4 keys per node. When a node becomes full, it splits into two nodes, each containing roughly half of the keys and children. This keeps the tree balanced, so retrieval operations remain efficient.

    Here is an example in Python

    Test Case

    Instruction

    Generate test cases for the following method.

    Response

    Test Case 1: Input: newLabelArray = [ 'Apple', 'Banana', 'Mango', 'Apple' ] oldLabelArray = [ 'Banana', 'Mango', 'Orange' ] Expected Output: ['Apple:2', 'Orange:-1']

    Test Case 2: Input: newLabelArray = [ 'Apple', 'Banana', 'Apple', 'Orange' ] oldLabelArray = [ 'Banana', 'Mango', 'Orange' ] Expected Output: ['Apple:2', 'Mango:-1']

    Explain this Code

    Instruction

    What does this code do?

    Response

    This code takes two arrays, newLabelArray and oldLabelArray, and generates a new array that contains the difference between the two arrays. It does this by first creating three empty arrays, new_, old_, and merged_. It then iterates through the newLabelArray and adds each label to the new_ array, as well as the merged_ array. It then iterates through the oldLabelArray and does the same for the old_ array, but also subtracts it from the merged_ array. Finally, it iterates through the merged_ array and checks for any values that are not equal to 0 and adds them to the final array. It then returns this final array.

    Comment Method

    Instruction

    Generate a comment for this method, explaining the parameters and output

    Response

    /* This method takes two arrays, newLabelArray and oldLabelArray. It uses the elements of both arrays to generate a merged array, merged_, where it counts the occurrences of each element in both arrays. It then creates a third array, final, containing the elements of merged_ where the difference is not 0. The output is the final array. */

    Improve Performance

    Instruction

    How can I improve the performance of this code?

    Response

    1. Use ES6 syntax to simplify the code.

    2. Use a hashmap instead of looping through the arrays.

    3. Use a single loop to build the final array instead of two loops.

    4. Use array methods instead of for-each loops.

    5. Use optimized data structures like Set and Map.

    Instruction

Rewrite this code with performance improvements.

    Response

    Check for Security

    Instruction

    Find potential security issues in this code

    Response

    1. A lack of input validation: the user-supplied search term is not being validated to ensure it is safe to use in a database query. This could leave the system open to SQL injection attacks.

    2. A lack of output encoding: the results from the database are not being encoded before they are printed. This could leave the system open to cross-site scripting attacks.

    Instruction

Rewrite this code with fixes for potential security issues

    Response

    Find Style Issues

    Instruction

Identify potential issues that ESLint would find in this code

    Response

    //ESLint would find the following issues:

    • Missing semicolons (potential syntax errors)

    • Missing spacing after 'forEach' statements

    • Unused variables (e.g. 'old_')

    • Missing spacing between operators (e.g. '++new_[lbl]')

    Generate Code
    Command Syntax
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type user struct {
	Name string `json:"name"`
	Age  int    `json:"age"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Create user
		u := user{Name: "John Doe", Age: 25}

		// Encode user
		uj, err := json.Marshal(u)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		// Return user
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		fmt.Fprintf(w, "%s\n", uj)
	})
	http.ListenAndServe(":3000", nil)
}
    class Node:
        def __init__(self, order):
            self.order = order
            self.keys = []
            self.children = [None] * (order + 1)
    
    class BPlusTree:
        def __init__(self, order):
            self.root = Node(order)
     
        def insert(self, key):
            if not self.root.keys:
                self.root.keys.append(key)
            else:
                pass # traverse the tree and insert the key
     
        def delete(self, key):
            pass # traverse the tree and delete the key
     
        def search(self, key):
            pass # traverse the tree and search for the key
     
    # Create a B+ tree of order 5
    bplus_tree = BPlusTree(5)
     
    # Insert a key
    bplus_tree.insert(10)
     
    # Delete a key
    bplus_tree.delete(10)
    
    # Search for a key
    bplus_tree.search(10)
    genrateLabelDiffArray(newLabelArray,oldLabelArray=[])
      {
        let new_ =[],old_=[],merged_=[];
        newLabelArray.forEach((lbl)=>{
          if(new_[lbl] === undefined){ new_[lbl] =0;merged_[lbl]=0; }
          ++new_[lbl];
          ++merged_[lbl];
        });
        
        oldLabelArray.forEach((lbl)=>{
          if(old_[lbl] === undefined){ old_[lbl] =0; }
          if(merged_[lbl] === undefined){ merged_[lbl] =0; }
    
          ++old_[lbl];
          --merged_[lbl]
        });
        let final = [];
    
        merged_.forEach((value,key)=>{
          if(value!=0){ 
          //  final[key.toString()] = value;
          final.push(key+":"+value);
           }
        })
        
          return final;
      }
    // Rewritten Code
    function genrateLabelDiffArray(newLabelArray, oldLabelArray = []) {
      let new_ = {},
        old_ = {},
        merged_ = {},
        final = [];
    
      newLabelArray.forEach((lbl) => {
        new_[lbl] = new_[lbl] ? new_[lbl] + 1 : 1;
        merged_[lbl] = merged_[lbl] ? merged_[lbl] + 1 : 1;
      });
    
      oldLabelArray.forEach((lbl) => {
        old_[lbl] = old_[lbl] ? old_[lbl] + 1 : 1;
        merged_[lbl] = merged_[lbl] ? merged_[lbl] - 1 : -1;
      });
    
      Object.entries(merged_).forEach(([key, value]) => {
        if (value !== 0) {
          final.push(key + ":" + value);
        }
      });
      
      return final;
    }
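For comparison, the same counting logic is straightforward with a plain dictionary in Python; this is an illustrative translation (not Bito output), checked against the expected outputs from the test-case section above:

```python
def generate_label_diff(new_labels, old_labels=()):
    # Count +1 for each label in the new array and -1 for each in the old array
    merged = {}
    for lbl in new_labels:
        merged[lbl] = merged.get(lbl, 0) + 1
    for lbl in old_labels:
        merged[lbl] = merged.get(lbl, 0) - 1
    # Keep only labels whose counts differ between the two arrays
    return [f"{k}:{v}" for k, v in merged.items() if v != 0]

print(generate_label_diff(
    ["Apple", "Banana", "Mango", "Apple"],
    ["Banana", "Mango", "Orange"],
))  # ['Apple:2', 'Orange:-1']
```

Using a dictionary instead of an array avoids the original pitfall of storing string keys on a JavaScript array, where `forEach` skips them entirely.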
    import database
    search_term = input("Enter a search term: ")
    query = "SELECT * FROM users WHERE name = '" + search_term + "'"
    results = database.query(query)
    print(results)
    import database
    
    # Get the user-supplied search term
    search_term = input("Enter a search term: ")
    
    # Use the user-supplied search term to query the database
    query = "SELECT * FROM users WHERE name = %s"
    results = database.query(query, (search_term,))
    
    # Print the results
    print(results)
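The snippets above use a hypothetical database module. The same parameterized-query idea can be demonstrated end to end with Python's built-in sqlite3 (which uses ? placeholders instead of %s):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

# Malicious input is treated as data, not SQL, thanks to the placeholder
search_term = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (search_term,)
).fetchall()
print(rows)  # [] -- the injection attempt matches no user

rows = conn.execute("SELECT * FROM users WHERE name = ?", ("alice",)).fetchall()
print(rows)  # [('alice',)]
```

With string concatenation, the first query would instead have returned every row in the table.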
Clone the AI Code Review Agent GitHub repository to your server using the following command:
    • git clone https://github.com/gitbito/CodeReviewAgent.git

    • Note: It is recommended to clone the repository instead of downloading the .zip file. This approach allows you to easily update the Agent later using the git pull command.

  • Open the repository folder:

    • Navigate to the repository folder and then to the “cra-scripts” subfolder.

  • Change the current directory in Bash/PowerShell to the “cra-scripts” folder.

  • Example command: cd [Path to cra-scripts folder]

  • Note: Adjust the path based on where you cloned the repository on your system.

  • mode = server

  • bito_cli.bito.access_key

  • git.access_token

  • Optional properties (can be skipped or set as needed):

    • git.provider

    • git.domain

    • code_feedback

    • static_analysis

    • dependency_check

    • dependency_check.snyk_auth_token

    • server_port

    • review_scope

    • exclude_branches

    • exclude_files

    • exclude_draft_pr

  • Note: It will provide the Git Webhook secret in encrypted format.
  • On Windows in PowerShell:

    • Install OpenSSL

      • Reference-1: https://wiki.openssl.org/index.php/Binaries

      • Reference-2: https://slproweb.com/products/Win32OpenSSL.html

    • Run ./bito-cra.ps1 service start bito-cra.properties

    • Note: It will provide the Git Webhook secret in encrypted format.

  • Click Add webhook.
  • Under Payload URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.

    • Note: The GitHub Payload URL should follow this format: https://<domain name/ip-address>/api/v1/github_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/github_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.

    • For example, a typical webhook URL would be https://cra.example.com/api/v1/github_webhooks

  • Select the Content type “application/json” for JSON payloads.

  • In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.

  • Click on Let me select individual events to select the events that you want to trigger the webhook. For code review select these:

    • Issue comments - To enable Code Review on-demand by issuing a command in the PR comment.

    • Pull requests - To auto-trigger Code Review when a pull request is created.

    • Pull request review comments - So, you can share feedback on the review quality by answering the feedback question in the code review comment.

  • To make the webhook active immediately after adding the configuration, select Active.

  • Click Add webhook.
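For reference, the secret entered above is what lets the receiving service authenticate deliveries: GitHub sends an X-Hub-Signature-256 header containing an HMAC-SHA256 of the raw payload, computed with the webhook secret, and the receiver compares it against its own computation. A minimal sketch of that check; the payload and secret here are made up:

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    # GitHub sends: X-Hub-Signature-256: sha256=<hex digest of the raw body>
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = "my-webhook-secret"
payload = b'{"action": "opened"}'
header = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

print(verify_github_signature(secret, payload, header))        # True
print(verify_github_signature(secret, b"{tampered}", header))  # False
```

Using hmac.compare_digest rather than == avoids leaking information through timing differences during the comparison.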

  • In URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.

    • Note: The GitLab webhook URL should follow this format: https://<domain name/ip-address>/api/v1/gitlab_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/gitlab_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.

    • For example, a typical webhook URL would be https://cra.example.com/api/v1/gitlab_webhooks

  • In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.

  • In the Trigger section, select the events to trigger the webhook. For code review select these:

    • Comments - for on-demand code review.

    • Merge request events - for automatic code review when a merge request is created.

    • Emoji events - So, you can share feedback on the review quality using emoji reactions.

  • Select Add webhook.

  • Under URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.

    • Note: The BitBucket Payload URL should follow this format: https://<domain name/ip-address>/api/v1/bitbucket_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/bitbucket_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.

    • For example, a typical webhook URL would be https://cra.example.com/api/v1/bitbucket_webhooks

  • In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.

  • In the Triggers section, select the events to trigger the webhook. For code review select these:

    • Pull Request > Comment created - for on-demand code review.

    • Pull Request > Created - for automatic code review when a merge request is created.

  • Select Save.

  • Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
    • /review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.

    • /review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.

    • /review scalability: Assesses the code's ability to handle increased usage and scale effectively.

    • /review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.

    • /review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.

    By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False

    For more details, refer to Available Commands.

    Restart the Docker container:
  • To restart the Docker container running as a service, use the command below.

    • On Linux/macOS in Bash: Run ./bito-cra.sh service restart bito-cra.properties

    • On Windows in PowerShell: Run ./bito-cra.ps1 service restart bito-cra.properties

| Requirement | Minimum Specification |
| --- | --- |
| CPU Cores | 4 |
| RAM | 8 GB |
| Hard Disk Drive | 80 GB |

    Linux

    You will need:

1. Bash (minimum version 4.x)

  • For Debian and Ubuntu systems

    sudo apt-get install bash

  • For CentOS and other RPM-based systems

    sudo yum install bash

2. Docker (minimum version 20.x)

    macOS

    You will need:

1. Bash (minimum version 4.x)

  brew install bash

2. Docker (minimum version 20.x)

    Windows

    You will need:

1. PowerShell (minimum version 5.x)

  • View Guide

  • Note: In PowerShell version 7.x, run the Set-ExecutionPolicy Unrestricted command. It allows the execution of scripts without any constraints, which is essential for running scripts that are otherwise blocked by default security settings.

2. Docker (minimum version 20.x)


    Supported programming languages and tools

    Supports key languages & tools, including fbInfer, Dependency Check, and Snyk.

    Supported Programming Languages

    AI Code Review

    The AI Code Review Agent understands code changes in pull requests by analyzing relevant context from your entire repository, resulting in more accurate and helpful code reviews. The agent provides either Basic Code Understanding or Advanced Code Understanding based on the programming languages used in the code diff. Learn more about all the supported languages in the table below.

Basic Code Understanding provides the surrounding code for the diff to help the AI better understand the context of the diff.

Advanced Code Understanding provides detailed information holistically to the LLM about the changes the diff is making—from things such as global variables, libraries, and frameworks (e.g., Lombok in Java, React for JS/TS, or Angular for TS) being used, the specific functions/methods and classes the diff is part of, to the upstream and downstream impact of a change being made. Using advanced code traversal and understanding techniques, such as symbol indexes, embeddings, and abstract syntax trees, Bito deeply tries to understand what your changes are about and their impact and relevance to the greater codebase, like a senior engineer does when doing code review.

For requests to add support for specific programming languages, please reach out to us at [email protected].

| Languages | AI Code Review | Basic Code Understanding | Advanced Code Understanding |
| --- | --- | --- | --- |
| Assembly | YES | YES | YES |
| Bash/Shell | YES | YES | YES |
| C | YES | YES | YES |
| C++ | YES | YES | YES |
| C# | YES | YES | YES |
| Dart | YES | YES | YES |
| Delphi | YES | YES | YES |
| Go | YES | YES | YES |
| Groovy | YES | YES | YES |
| HTML/CSS | YES | YES | YES |
| Java | YES | YES | YES |
| JavaScript | YES | YES | YES |
| JavaScript Framework | YES | YES | YES |
| Kotlin | YES | YES | YES |
| Lua | YES | YES | YES |
| Objective-C | YES | YES | YES |
| PHP | YES | YES | YES |
| PowerShell | YES | YES | YES |
| Python | YES | YES | YES |
| R | YES | YES | YES |
| Ruby | YES | YES | YES |
| Rust | YES | YES | YES |
| Scala | YES | YES | YES |
| SCSS | YES | YES | YES |
| SQL | YES | YES | YES |
| Swift | YES | YES | YES |
| Terraform | YES | YES | YES |
| TypeScript | YES | YES | YES |
| TypeScript Framework | YES | YES | YES |
| Vue.js | YES | YES | YES |
| Visual Basic .NET | YES | YES | YES |
| Others | YES | YES | YES |

Static Code Analysis and Open Source Vulnerabilities Check

For custom SAST tools configuration to support specific languages in the AI Code Review Agent, please reach out to us at [email protected].

| Languages | Static Code Analysis / Linters | Open Source Vulnerabilities Check |
| --- | --- | --- |
| Assembly | NO | NO |
| Bash/Shell | NO | NO |
| C | YES (using Facebook Infer) | NO |
| C++ | YES (using Facebook Infer) | NO |
| C# | NO | NO |
| Dart | NO | NO |
| Delphi | NO | NO |
| Go | YES (using golangci-lint) | YES |
| Groovy | NO | NO |
| HTML/CSS | NO | NO |
| Java | YES (using Facebook Infer) | NO |
| JavaScript | YES (using ESLint) | YES |
| Kotlin | NO | NO |
| Lua | NO | NO |
| Objective-C | YES (using Facebook Infer) | NO |
| PHP | NO | NO |
| PowerShell | NO | NO |
| Python | YES (using Astral Ruff and Mypy) | NO |
| R | NO | NO |
| Ruby | NO | NO |
| Rust | NO | NO |
| Scala | NO | NO |
| SCSS | NO | NO |
| SQL | NO | NO |
| Swift | NO | NO |
| Terraform | NO | NO |
| TypeScript | YES (using ESLint) | YES |
| Vue.js | NO | NO |
| Visual Basic .NET | NO | NO |
| Others | NO | NO |

Supported Tools and Platforms

| Tool | Type | Supported/Integrated |
| --- | --- | --- |
| Astral Ruff | Linter for Python | YES |
| Azure DevOps | Code Repository | Coming soon |
| Bitbucket | Code Repository | YES |
| detect-secrets | Secrets scanner (e.g., passwords, API keys, sensitive information) | YES |
| ESLint | Linter for JavaScript and TypeScript | YES |
| Facebook Infer | Static Code Analysis for Java, C, C++, and Objective-C | YES |
| GitHub cloud | Code Repository | YES |
| GitHub (Self-Managed) | Code Repository | YES, supports version 3.0 and above. |
| GitLab cloud | Code Repository | YES |
| GitLab (Self-Managed) | Code Repository | YES, supports version 15.5 and above. |
| golangci-lint | Linter for Go | YES |
| Mypy | Static Type Checker for Python | YES |
| OWASP Dependency-Check | Security | YES |
| Snyk | Security | YES |
| Whispers | Secrets scanner (e.g., passwords, API keys, sensitive information) | YES |

Supported output languages for code review feedback

Bito supports posting code review feedback in over 20 languages. You can choose your preferred language in the agent settings. Supported languages include the following:

1. Arabic (عربي)
2. Bulgarian (български)
3. Chinese (Simplified) (简体中文)
4. Chinese (Traditional) (繁體中文)
5. Czech (čeština)
6. Dutch (Nederlands)
7. English (English)
8. French (français)
9. German (Deutsch)
10. Hebrew (עִברִית)
11. Hindi (हिंदी)
12. Hungarian (magyar)
13. Italian (italiano)
14. Japanese (日本語)
15. Korean (한국어)
16. Malay (Melayu)
17. Polish (polski)
18. Portuguese (português)
19. Russian (русский)
20. Spanish (español)
21. Turkish (Türkçe)
22. Vietnamese (Tiếng Việt)

    Implementing custom code review rules

    Customize Bito’s AI Code Review Agent to enforce your coding practices.

Bito’s AI Code Review Agent offers a flexible solution for teams looking to enforce custom code review rules, standards, and guidelines tailored to their unique development practices. Whether your team follows specific coding conventions or industry best practices, you can customize the Agent to suit your needs.

    We support three ways to customize AI Code Review Agent’s suggestions:

    1. Provide feedback on Bito-reported issues in pull requests, and the AI Code Review Agent automatically adapts by creating code review rules to prevent similar suggestions in the future.

    2. Create custom code review guidelines via the dashboard. Define rules through the Custom Guidelines dashboard in Bito Cloud and apply them to agent instances in your workspace.

3. Use project-specific guideline files. Add guideline files (like .cursor/rules/*.mdc, .windsurf/rules/*.md, CLAUDE.md, GEMINI.md, or AGENTS.md) to your repository, and the AI Code Review Agent automatically uses them during pull request reviews to provide feedback aligned with your project's standards.

    1- Provide feedback on Bito-reported issues

    AI Code Review Agent refines its suggestions based on your feedback. When you provide negative feedback on Bito-reported issues in pull requests, the Agent automatically adapts by creating custom code review rules to prevent similar suggestions in the future.

    Depending on your Git platform, you can provide negative feedback in the following ways:

• GitHub: Select the checkbox in the feedback question at the end of each Bito suggestion, or leave a negative comment explaining the issue with the suggestion.

    • GitLab: React with negative emojis (e.g., thumbs down) or leave a negative comment explaining the issue with the suggestion.

    • Bitbucket: Provide manual review feedback by leaving a negative comment explaining the issue with the suggestion.

The custom code review rules are displayed on the Learned Rules dashboard in Bito Cloud.

    These rules are applied at the repository level for the specific programming language.

    By default, newly generated custom code review rules are disabled. Once negative feedback for a specific rule reaches a threshold of 3, the rule is automatically enabled. You can also manually enable or disable these rules at any time using the toggle button in the Status column.

    Note: Providing a positive reaction emoji or comment has no effect and will not generate a new code review rule.

    After you provide negative feedback, Bito generates a new code review rule in your workspace. The next time the AI Code Review Agent reviews your pull requests, it will automatically filter out the unwanted suggestions.

    2- Create custom code review guidelines

We understand that different development teams have unique needs. To accommodate these needs, we offer the ability to implement custom code review guidelines in Bito’s AI Code Review Agent.

    Once you add guidelines, the agent will follow them when reviewing pull requests. You can manage guidelines (create, apply, and edit) entirely in the dashboard.

    By enabling custom code review guidelines, Bito helps your team maintain consistency and improve code quality.

Note: Custom review guidelines are available only on the Enterprise Plan. Enabling them also upgrades your workspace to the Enterprise Plan.

    How to add a guideline

    Step 1: Open the Custom Guidelines tab

• Sign in to Bito Cloud.

• Click Custom Guidelines in the sidebar.

    Step 2: Fill the form

    A. Manual setup

1. Click the Add guidelines button at the top right.

2. Fill out:

  • Guideline name

  • Language (select a specific programming language, or select General if the guideline applies to all languages)

  • Custom Guidelines and Rules (enter your guidelines here)

3. Click Create guideline.

    B. Use a Template

1. Click the Add guidelines button at the top right.

    2. Choose a template from the Use template dropdown menu.

    3. Review/edit fields as needed.

    4. Click Create guideline.

    Step 3: Apply to an Agent

    • After creating a guideline, you’ll see an Apply review guideline dropdown.

    • Select the Agent instance, then click Manage review guidelines to open its settings.

To apply the guideline later: go to Repositories, find the Agent instance, click Settings, and manage guidelines there.

    Step 4: Save configuration

    On the Agent settings page, hit Save (top-right) to apply guideline changes.

Note: Visit the Custom Guidelines tab to edit or delete any guideline.

    Managing review guidelines from agent settings

    Efficiently control which custom guidelines apply to your AI Code Review Agent through the Agent settings interface.

1. Go to the Repositories dashboard from the Bito Cloud sidebar.

    2. Click Settings next to the target agent instance.

3. Navigate to the Custom Guidelines section. Here you can either create a new guideline or select from existing guidelines.

4. Create a new guideline

  • If you click the Create a new guideline button, you will see the same form as described earlier, where you can enter the details to create a review guideline.

5. Or select an existing guideline

  • If you click the Select from existing guidelines button, a popup screen lets you select from a list of review guidelines you have already created. Use the checkboxes to enable or disable each guideline for the selected agent, then click Add selected.

6. Once you’ve applied or adjusted guidelines, click the Save button in the top-right corner to confirm your changes.

    FAQs

    What types of custom code review guidelines can be implemented?

    You can implement a wide range of custom code review guidelines, including:

    • Style and formatting guidelines

    • Security best practices

    • Performance optimization checks

    • Code complexity and maintainability standards

    Is "custom code review guidelines" feature available in Team Plan?

    No, this feature is available exclusively on the . Enabling the "custom code review guidelines" feature also upgrades your workspace to the Enterprise Plan.

    For more details on Enterprise Plan, visit our .

    3- Use project-specific guideline files

    The AI Code Review Agent can read guideline files directly from your repository and use them during code reviews. These are the same guideline files that AI coding assistants (like Cursor, Windsurf, and Claude Code) use to help developers write code.

    By adding these files to your repository, the agent automatically follows your project's specific coding standards, architecture patterns, and best practices when reviewing pull requests.

Supported guideline files

The AI Code Review Agent currently supports analyzing the following guideline files, which are commonly used by different AI coding agents:

| Guideline file | Used by |
| --- | --- |
| .cursor/rules/*.mdc | Cursor IDE |
| .windsurf/rules/*.md | Windsurf IDE |
| CLAUDE.md | Claude Code |
| GEMINI.md | Gemini CLI |
| AGENTS.md | OpenAI Codex, Cursor IDE |

How to organize your guideline files

Multiple files in one directory

You can split your guidelines across multiple files:

.cursor/rules/project-overview.mdc
.cursor/rules/architecture-principles.mdc
.cursor/rules/security-standards.mdc

For Windsurf, use the .md extension:

.windsurf/rules/coding-standards.md
.windsurf/rules/api-patterns.md

Module-specific guidelines:

Place guideline files in subdirectories to create rules for specific parts of your codebase:

.cursor/rules/global-standards.mdc
providers/.cursor/rules/provider-implementation.mdc
auth/.cursor/rules/authentication-rules.mdc

The agent finds all relevant guideline files based on which files changed in your pull request.

Note: Rule precedence (where subdirectory rules override parent-level rules) will be added in a future release. Currently, the agent considers all applicable guideline files equally.

How citations work

Every relevant Bito comment includes a Citations section that links to the specific guideline that triggered the comment. The link takes you directly to the relevant line in your guideline file, making it easy to verify the feedback and understand why it was given.

Example scenario

Let's say you're building an application that integrates multiple LLM providers. Your guideline file specifies:

• All providers must extend the BaseLLMProvider class

• All providers must implement standard methods like generateResponse() and streamResponse()

• New providers must be registered in the config/providers.json file

When someone submits a pull request to add a new provider, the agent can catch issues like:

• The new provider doesn't extend the base class

• Required methods are missing

• The provider wasn't added to the configuration file

Each comment links back to the specific guideline, so the developer knows exactly what needs to be fixed.

Sample guideline file

Here's an example AGENT.md file to help you get started:
    # LLM Proxy Architecture & Design Document
    
    ## Document Overview
    
    ### Purpose
    This document serves as a coding guideline and technical reference for AI agents working with this codebase. It provides comprehensive information about the current architecture, design patterns, implementation details, and the rationale behind design decisions. AI agents should use this document to understand the existing code structure, maintain consistency when making modifications, and follow established patterns when extending functionality.
    
    ### What This Document Covers
    - **System Architecture**: High-level overview of components and their interactions
    - **Design Patterns**: Detailed explanation of the Factory Pattern implementation
    - **Component Design**: In-depth analysis of each system component
    - **Data Flow**: Request/response lifecycle through the system
    - **Design Decisions**: Rationale behind current architectural choices
    - **Implementation Details**: Code structure, conventions, and patterns in use
    
    ---
    
    ## Table of Contents
    1. [System Architecture](#system-architecture)
    2. [Design Patterns](#design-patterns)
    3. [Component Design](#component-design)
    4. [Data Flow](#data-flow)
    5. [Design Decisions](#design-decisions)
    6. [Error Handling Strategy](#error-handling-strategy)
    7. [Security Considerations](#security-considerations)
    8. [Coding Conventions](#coding-conventions)
    
    ---
    
    ## System Architecture
    
    ### High-Level Overview
    
    The LLM Proxy application follows a layered architecture with clear separation between the presentation layer (FastAPI), business logic layer (Provider implementations), and integration layer (external LLM APIs).
    
    ```
    ┌─────────────────────────────────────────────┐
    │           FastAPI Application               │
    │         (Presentation Layer)                │
    │   - Request validation (Pydantic)           │
    │   - Route handling (/chat endpoint)         │
    │   - Response formatting                     │
    └────────────────┬────────────────────────────┘
                     │
                     ▼
    ┌─────────────────────────────────────────────┐
    │          Provider Factory                   │
    │        (Abstraction Layer)                  │
    │   - Provider selection logic                │
    │   - Instance creation                       │
    └────────────────┬────────────────────────────┘
                     │
            ┌────────┴────────┐
            ▼                 ▼
    ┌──────────────┐   ┌──────────────┐
    │   OpenAI     │   │  Anthropic   │
    │   Provider   │   │   Provider   │
    │              │   │              │
    │ (Concrete    │   │ (Concrete    │
    │  Impl.)      │   │  Impl.)      │
    └──────┬───────┘   └──────┬───────┘
           │                  │
           ▼                  ▼
    ┌──────────────┐   ┌──────────────┐
    │  OpenAI API  │   │ Anthropic API│
    └──────────────┘   └──────────────┘
    ```
    
    ### Component Layers
    
    1. **Presentation Layer** (`main.py`)
       - Handles HTTP requests/responses
       - Validates input using Pydantic models
       - Manages API endpoints
    
    2. **Abstraction Layer** (`providers/factory.py`)
       - Implements Factory Pattern
       - Routes requests to appropriate providers
       - Decouples client code from concrete implementations
    
    3. **Business Logic Layer** (`providers/*.py`)
       - Abstract base class defines contract
       - Concrete providers implement LLM-specific logic
       - Handles API communication and response parsing
    
    4. **Integration Layer**
       - External API calls via httpx
       - Authentication management
       - Network error handling
    
    ---
    
    ## Design Patterns
    
    ### Factory Design Pattern
    
    The application implements the **Factory Design Pattern** to create provider instances without exposing creation logic to the client.
    
    #### Pattern Components
    
    1. **Abstract Product** (`LLMProvider`)
```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    def __init__(self, model: str):
        self.model = model

    @abstractmethod
    def generate_response(self, prompt: str) -> str:
        pass
```
    
    **Purpose**: Defines the contract that all concrete providers must implement.
    
    2. **Concrete Products** (`OpenAIProvider`, `AnthropicProvider`)
    ```python
    class OpenAIProvider(LLMProvider):
        def generate_response(self, prompt: str) -> str:
            # OpenAI-specific implementation
            pass
    ```
    
    **Purpose**: Implement provider-specific logic while adhering to the base contract.
    
    3. **Factory** (`ProviderFactory`)
```python
class ProviderFactory:
    @staticmethod
    def get_provider(provider_name: str, model: str) -> LLMProvider:
        providers = {
            "openai": OpenAIProvider,
            "anthropic": AnthropicProvider
        }
        name = provider_name.lower()
        if name not in providers:
            raise ValueError(f"Unsupported provider: {provider_name}")
        return providers[name](model)
```
    
    **Purpose**: Encapsulates provider instantiation logic.
    
    #### Benefits of This Pattern
    
    - **Loose Coupling**: Client code depends on abstractions, not concrete classes
    - **Open/Closed Principle**: Open for extension (new providers), closed for modification
    - **Single Responsibility**: Each provider handles only its specific implementation
    - **Testability**: Easy to mock providers for testing
    - **Scalability**: Adding new providers requires minimal changes
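A condensed, self-contained sketch of the three components working together (provider responses are stubbed out here; the real implementations call the LLM APIs):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):                     # Abstract Product
    def __init__(self, model: str):
        self.model = model

    @abstractmethod
    def generate_response(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):          # Concrete Product
    def generate_response(self, prompt: str) -> str:
        return f"[openai/{self.model}] {prompt}"      # stub; real code calls the API

class AnthropicProvider(LLMProvider):       # Concrete Product
    def generate_response(self, prompt: str) -> str:
        return f"[anthropic/{self.model}] {prompt}"   # stub; real code calls the API

class ProviderFactory:                      # Factory
    @staticmethod
    def get_provider(provider_name: str, model: str) -> LLMProvider:
        providers = {"openai": OpenAIProvider, "anthropic": AnthropicProvider}
        name = provider_name.lower()
        if name not in providers:
            raise ValueError(f"Unsupported provider: {provider_name}")
        return providers[name](model)

provider = ProviderFactory.get_provider("OpenAI", "gpt-4")   # case-insensitive lookup
print(provider.generate_response("hello"))   # → [openai/gpt-4] hello
```

Client code only ever touches `ProviderFactory` and the `LLMProvider` interface, which is what makes swapping or mocking providers trivial.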
    
    ---
    
    ## Component Design
    
    ### 1. Base Provider (`providers/base.py`)
    
    **Responsibility**: Define the contract for all LLM providers
    
    **Key Design Decisions**:
    - Uses ABC (Abstract Base Class) to enforce implementation
    - Stores model name as instance variable for reuse
    - Single abstract method keeps interface simple
    
    **Design Rationale**:
- Python's ABC prevents instantiation of incomplete implementations, so missing methods are caught early
    - Simple interface reduces cognitive load for implementers
    - Storing model allows for provider-specific model validation in future
    
    ### 2. OpenAI Provider (`providers/openai_provider.py`)
    
    **Responsibility**: Implement OpenAI Chat Completions API integration
    
    **Key Features**:
    - Environment-based API key management
    - Message format conversion (user prompt → OpenAI format)
    - Response parsing (extract content from choices)
    - Timeout handling (30 seconds)
    
    **API Contract**:
    ```
    POST https://api.openai.com/v1/chat/completions
    Headers: Authorization: Bearer <key>
    Body: {
      "model": "gpt-4",
      "messages": [{"role": "user", "content": "prompt"}]
    }
    ```
    
    **Error Handling**:
    - Validates API key presence on initialization
    - Catches HTTP errors and wraps with descriptive messages
    - Re-raises exceptions for upstream handling
    
    ### 3. Anthropic Provider (`providers/anthropic_provider.py`)
    
    **Responsibility**: Implement Anthropic Messages API integration
    
    **Key Features**:
    - Custom header format (x-api-key, anthropic-version)
    - Max tokens configuration (1024)
    - Content array response parsing
    
    **API Contract**:
    ```
    POST https://api.anthropic.com/v1/messages
    Headers: 
      x-api-key: <key>
      anthropic-version: 2023-06-01
    Body: {
      "model": "claude-3-sonnet",
      "max_tokens": 1024,
      "messages": [{"role": "user", "content": "prompt"}]
    }
    ```
    
    **Design Choices**:
    - Hard-coded max_tokens provides consistent behavior
    - Version header ensures API stability
    - Array access for content assumes single response
    
    ### 4. Provider Factory (`providers/factory.py`)
    
    **Responsibility**: Create provider instances based on string identifiers
    
    **Implementation Strategy**:
    - Dictionary-based mapping for O(1) lookup
    - Case-insensitive provider names
    - Descriptive error messages for invalid providers
    
    **Extensibility**:
    ```python
    # Adding new provider:
    providers = {
        "openai": OpenAIProvider,
        "anthropic": AnthropicProvider,
        "deepseek": DeepseekProvider,  # Just add here
    }
    ```
    
    ### 5. FastAPI Application (`main.py`)
    
    **Responsibility**: HTTP interface and request orchestration
    
    **Key Components**:
    
    1. **Request Model**:
    ```python
    class ChatRequest(BaseModel):
        provider: str
        model: str
        prompt: str
    ```
    - Leverages Pydantic for automatic validation
    - Clear field names match user expectations
    
    2. **Response Model**:
    ```python
    class ChatResponse(BaseModel):
        provider: str
        model: str
        response: str
    ```
    - Echoes input parameters for traceability
    - Returns plain text response
    
    3. **Endpoint Handler**:
    ```python
    @app.post("/chat", response_model=ChatResponse)
    async def chat(request: ChatRequest):
        provider = ProviderFactory.get_provider(request.provider, request.model)
        response_text = provider.generate_response(request.prompt)
        return ChatResponse(...)
    ```
    
    **Error Mapping**:
    - `ValueError` (invalid provider) → HTTP 400
    - Generic `Exception` (API errors) → HTTP 500
    
    ---
    
    ## Data Flow
    
    ### Request Lifecycle
    
    ```
    1. Client sends POST /chat
       ↓
    2. FastAPI receives request
       ↓
    3. Pydantic validates request body
       ↓
    4. ProviderFactory.get_provider() called
       ↓
    5. Factory returns concrete provider instance
       ↓
    6. provider.generate_response() called
       ↓
    7. Provider makes HTTP call to LLM API
       ↓
    8. Provider parses response
       ↓
    9. Response wrapped in ChatResponse model
       ↓
    10. JSON response sent to client
    ```
    
    ### Detailed Flow Example (OpenAI)
    
    ```python
    # Client Request
    POST /chat
    {
      "provider": "openai",
      "model": "gpt-4",
      "prompt": "Tell me a joke"
    }
    
    # Internal Processing
    1. Pydantic validates: ChatRequest object created
    2. Factory called: ProviderFactory.get_provider("openai", "gpt-4")
    3. OpenAIProvider instantiated with model="gpt-4"
    4. generate_response("Tell me a joke") called
    5. HTTP POST to OpenAI API:
       {
         "model": "gpt-4",
         "messages": [{"role": "user", "content": "Tell me a joke"}]
       }
    6. OpenAI responds with completion
    7. Extract: data["choices"][0]["message"]["content"]
    8. Return text to endpoint
    9. Wrap in ChatResponse
    
    # Client Response
    {
      "provider": "openai",
      "model": "gpt-4",
      "response": "Why did the chicken cross the road?..."
    }
    ```
    
    ---
    
    ## Design Decisions
    
    ### 1. Why Factory Pattern?
    
    **Decision**: Use Factory Pattern instead of simple if/else logic
    
    **Rationale**:
    - **Scalability**: Adding providers doesn't require modifying existing code
    - **Testability**: Easy to mock factory for unit tests
    - **Maintainability**: Provider logic isolated in separate classes
    - **Professional Standard**: Industry-recognized pattern for this use case
    
    **Alternative Considered**: Direct instantiation with if/else
    ```python
    # Rejected approach
    if provider == "openai":
        result = OpenAIProvider(model).generate_response(prompt)
    elif provider == "anthropic":
        result = AnthropicProvider(model).generate_response(prompt)
    ```
    **Why Rejected**: Violates Open/Closed Principle, harder to extend
    
    ### 2. Why httpx Over Official SDKs?
    
    **Decision**: Use httpx for HTTP calls instead of official provider SDKs
    
    **Rationale**:
    - **Minimal Dependencies**: Keeps requirements.txt small
    - **Unified Interface**: Single HTTP client for all providers
    - **Transparency**: Direct API calls are easier to debug
    - **Control**: Full control over request/response handling
    
    **Trade-offs**:
    - Less abstraction (must handle response parsing)
    - No built-in retry logic
    - Manual API version management
    
    ### 3. Synchronous vs Asynchronous
    
    **Decision**: Use synchronous HTTP calls with httpx.Client
    
    **Rationale**:
    - **Simplicity**: Easier to understand and debug
    - **Current Scale**: Single request doesn't benefit from async
- **API Constraints**: Each request performs a single blocking call to an upstream LLM API
    
    **Future Consideration**: Switch to async if supporting streaming responses
    
    ### 4. Error Handling Strategy
    
    **Decision**: Simple try/except with HTTP status code mapping
    
    **Rationale**:
    - **Simplicity**: Requirements specified basic error handling
    - **Client Clarity**: HTTP status codes are standard
    - **Debugging**: Error messages preserved in exceptions
    
    **Not Included** (but recommended for production):
    - Structured logging
    - Retry logic
    - Rate limiting
    - Circuit breakers
    
    ### 5. Environment Variables for API Keys
    
    **Decision**: Use environment variables instead of configuration files
    
    **Rationale**:
    - **Security**: Prevents accidental commit of credentials
    - **12-Factor App**: Follows best practices for configuration
    - **Flexibility**: Easy to change without code modification
    - **Cloud-Ready**: Works seamlessly with container orchestration
    
    
    ---
    
    ## Error Handling Strategy
    
    ### Current Implementation
    
    ```python
    try:
        provider = ProviderFactory.get_provider(request.provider, request.model)
        response_text = provider.generate_response(request.prompt)
        return ChatResponse(...)
    except ValueError as e:
        # Invalid provider name
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        # API errors, network issues, etc.
        raise HTTPException(status_code=500, detail=str(e))
    ```
    
    ### Error Categories
    
    1. **Client Errors (400)**:
       - Invalid provider name
       - Unsupported model
       - Malformed request
    
    2. **Server Errors (500)**:
       - Missing API keys
       - Network timeouts
       - API errors (rate limits, service unavailable)
       - Response parsing failures
    
    
    ---
    
    ## Security Considerations
    
    ### Current Implementation
    
    1. **API Key Management**:
       - Stored in environment variables
       - Never logged or returned in responses
       - Validated on provider initialization
    
    2. **Request Validation**:
       - Pydantic models enforce type safety
       - No SQL injection risk (no database)
       - No command injection (no shell execution)
    
    ### Current Limitations
    
    1. **No Rate Limiting**: The application does not implement rate limiting
    2. **No Authentication**: Endpoints are publicly accessible
    3. **No Input Sanitization**: Prompt length and content are not validated beyond Pydantic type checking
    4. **No Retry Logic**: Failed API calls are not automatically retried
    
    ---
    
    ## Coding Conventions
    
    ### File Organization
    
    **Current Structure**:
    ```
    llm-proxy/
    ├── main.py                      # FastAPI application entry point
    ├── providers/                   # Provider package
    │   ├── __init__.py             # Package exports
    │   ├── base.py                 # Abstract base class
    │   ├── openai_provider.py      # OpenAI implementation
    │   ├── anthropic_provider.py   # Anthropic implementation
    │   └── factory.py              # Factory implementation
    ├── requirements.txt             # Python dependencies
    ├── .env.example                # Environment variable template
    └── README.md                   # User documentation
    ```
    
    ### Naming Conventions
    
    1. **Classes**: PascalCase (e.g., `LLMProvider`, `OpenAIProvider`)
    2. **Functions/Methods**: snake_case (e.g., `generate_response`, `get_provider`)
    3. **Constants**: UPPER_SNAKE_CASE (e.g., `OPENAI_API_KEY`)
    4. **Files**: snake_case (e.g., `openai_provider.py`)
    
    ### Code Patterns
    
    1. **Provider Implementation**:
       - Inherit from `LLMProvider`
       - Validate API key in `__init__`
       - Implement `generate_response(prompt: str) -> str`
       - Use httpx.Client with 30-second timeout
       - Wrap errors with descriptive messages
    
    2. **Error Handling**:
       - Use `try/except` blocks in provider implementations
       - Raise `ValueError` for missing API keys
       - Raise generic `Exception` with descriptive messages for API errors
       - Let FastAPI endpoint handle HTTP status code mapping
    
    3. **Environment Variables**:
       - Load with `os.getenv()`
       - Validate presence in provider `__init__`
       - Use pattern: `{PROVIDER}_API_KEY`
    
    4. **Type Hints**:
       - All methods should include type hints
       - Use Pydantic models for request/response validation
       - Return type explicitly stated
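The patterns above combine into a provider skeleton like the following (`ExampleProvider` and `EXAMPLE_API_KEY` are hypothetical names for illustration, and the HTTP call is stubbed):

```python
import os
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Mirrors providers/base.py."""
    def __init__(self, model: str):
        self.model = model

    @abstractmethod
    def generate_response(self, prompt: str) -> str: ...

class ExampleProvider(LLMProvider):
    """Hypothetical provider following the conventions above."""
    def __init__(self, model: str):
        super().__init__(model)
        # Pattern 3: load and validate {PROVIDER}_API_KEY in __init__
        self.api_key = os.getenv("EXAMPLE_API_KEY")
        if not self.api_key:
            raise ValueError("EXAMPLE_API_KEY environment variable is not set")

    def generate_response(self, prompt: str) -> str:
        # Pattern 2: wrap failures in a descriptive message for upstream handling
        try:
            # Real code: httpx.Client(timeout=30.0).post(...) and parse the response
            return f"stubbed response to: {prompt}"
        except Exception as e:
            raise Exception(f"Example API error: {e}")

os.environ["EXAMPLE_API_KEY"] = "demo-key"          # for demonstration only
print(ExampleProvider("example-v1").generate_response("ping"))  # → stubbed response to: ping
```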
    
    ### Documentation Standards
    
    1. **Docstrings**: All classes and methods include docstrings
    2. **Comments**: Inline comments explain non-obvious logic
    3. **README**: User-facing documentation with examples
    
    ### Dependencies
    
    **Current Dependencies**:
    - `fastapi==0.109.0`: Web framework
    - `uvicorn[standard]==0.27.0`: ASGI server
    - `pydantic==2.5.3`: Data validation
    - `httpx==0.26.0`: HTTP client
    - `python-dotenv==1.0.0`: Environment variable management
    
    **Rationale**: Minimal, well-maintained dependencies that serve specific purposes.
    
    ---
    
    ## Summary
    
    This document captures the current state of the LLM Proxy application. When working with this codebase, AI agents should:
    
    1. **Follow the Factory Pattern**: All new providers must inherit from `LLMProvider` and be registered in `ProviderFactory`
    2. **Maintain Consistency**: Use the same error handling, timeout values, and code structure as existing providers
    3. **Respect Abstractions**: Keep provider-specific logic within provider classes
    4. **Update Documentation**: Any changes to architecture should be reflected in this document
    5. **Preserve Simplicity**: The design prioritizes simplicity and clarity over advanced features
    
    The architecture demonstrates clean separation of concerns through the Factory Design Pattern, making the codebase maintainable and understandable for both human developers and AI agents.