10X Developer with Bito
Bito's AI helps developers dramatically accelerate their impact. It's a Swiss Army knife of capabilities that can 10x your developer productivity and save you an hour a day, using the same models as ChatGPT!
Bito AI makes it easy to write code, understand syntax, write test cases, explain code, comment on code, check security, and even explain high-level concepts. Trained on billions of lines of code and millions of documents, it's pretty incredible what we can help you do without having to search the web or waste time on tedious stuff.
Bito AI is a general-purpose AI assistant: developers can ask any technical question, generate code from natural language prompts, and get feedback on existing code. Here are some things you can do with Bito AI Knowledge Assistance.
Generate Code: Ask Bito to generate code in any language from a natural language prompt. (e.g., write a Java function to convert a number from one base to another)
Command Syntax: Ask for the syntax of any technical command. (e.g., How do I set a global variable for Git?)
Test Cases: Generate test cases for the code.
Explain Code: Explain the selected code. Ask how this code works or what it does.
Comment Method: Generate a comment for the function or method to add to your code.
Improve Performance: Ask how you can improve the performance of a given piece of code.
Check Security: Ask if the selected code has any known security issues.
Learn Technical Concepts: Ask a question about any technical concept. (e.g., Explain B+ trees; explain the Banker's algorithm)
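For instance, the first prompt above might produce something along these lines (shown here in Python rather than Java; an illustrative sketch of the kind of code such a prompt yields, not Bito's literal output):

```python
def convert_base(number, from_base, to_base):
    """Convert `number` (a non-negative value written as a string in
    `from_base`) to its string representation in `to_base` (2..36)."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    value = int(number, from_base)      # parse the input in its source base
    if value == 0:
        return "0"
    out = []
    while value:
        value, remainder = divmod(value, to_base)
        out.append(digits[remainder])
    return "".join(reversed(out))       # remainders come out least-significant first

print(convert_base("ff", 16, 2))        # 11111111
print(convert_base("11111111", 2, 10))  # 255
```

The same prompt in any language follows the same pattern: parse the input in the source base, then repeatedly divide by the target base and collect the remainders.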
Through extensions, Bito meets you where you work: in your IDE, whether Visual Studio Code or the JetBrains family of IDEs.
Next, learn how to install Bito extensions.
Install on VS Code
Install on JetBrains
Try AI Code Review Agent
It takes less than 2 minutes
Watch the video below to learn how to download the Bito extension on VS Code.
In Visual Studio Code, go to the Extensions tab and search for Bito.
Install the extension. We recommend you restart the IDE after the installation is complete.
After a successful install, the Bito logo appears in the Visual Studio Code pane.
Click the Bito logo to launch the extension and complete the setup process. You will either need to create a new workspace (if you are the first in your company to install Bito) or join an existing workspace created by a co-worker. See Managing Workspace Members.
SSH (Secure Shell) is a network protocol that securely enables remote access, system management, and file transfer between computers over unsecured networks.
The Visual Studio Code IDE allows developers to remotely access and collaborate on projects from any connected machine. The corresponding extension, Remote - SSH, must be installed in the host machine's Visual Studio Code IDE to use this feature.
The Bito VS Code extension seamlessly integrates with Remote development via SSH, allowing developers to utilize Bito features and capabilities on their remote machines.
Please follow the instructions given in the links below:
Video Guide:
Running VS Code on WSL allows developers to work in a Linux-like environment directly from Windows. This setup lets you take advantage of the development experience of both operating systems.
WSL provides access to Linux command-line tools, utilities, and applications, enhancing productivity and streamlining the development process.
This setup ensures a consistent development environment across different systems, making it easier to develop, test, and deploy applications that will run on Linux servers.
Please follow the instructions given in the links below:
Video Guide:
It takes less than 2 minutes
Watch the video below to learn how to download the Bito extension on JetBrains IDEs.
1. In JetBrains IDEs such as IntelliJ, go to File -> Settings to open the Settings dialog, then click Plugins and select the Marketplace tab. Search for Bito.
2. Click "Install" to install the Bito extension. We recommend you restart the IDE after the installation is complete.
3. The Bito panel will appear on the right-hand sidebar. Click it to complete the setup process. You will either need to create a new workspace (if you are the first in your company to install Bito) or join an existing workspace created by a co-worker. See Managing Workspace Members.
Welcome to Bito, a developer's personal assistant that can boost productivity by more than 30%. In this chapter, we will cover the essentials of what you need to know to kickstart your journey with Bito.
It takes less than 2 minutes
Step-by-Step Instructions
Now click on the “Add to Chrome” button.
A popup will appear. Click on “Add extension” to install Bito in Google Chrome. The installation process will only take a few seconds.
Once Bito is installed, a new popup will appear, informing you that the Bito extension has been added to Chrome. Simply close this popup.
Now click on the “Extensions icon” at the top-right of your browser window. A popup box will appear, and you will see “Bito” in the list. Select the small pin icon next to it to always show the “Bito icon” in the browser ribbon.
Now, click the “Bito icon” in the browser ribbon. You will be asked to create a new Bito account or sign in to your existing account. You can also try Bito as a Guest user, but Guests are limited to 10 questions, so it is better to sign up for free unlimited access.
Type your email address and click on “Continue”. A six-digit confirmation code will be sent to your email.
Enter the confirmation code in the given input field and click on “Submit”.
If the confirmation code you entered is correct, you will be logged in to the Bito Chrome Extension. Here is what the user interface (UI) looks like.
You can try the prompts already given in the UI by simply clicking on them. Additionally, you can type your own technical questions in the given input box and click on the “Send” button.
Bito will quickly start writing the answer. Once the answer is generated, you can select the “Copy Answer” button to copy it to your clipboard and paste it wherever you want to.
Supercharging development with AI
Bito AI Assistance can help with generating new content or help with your existing code.
Click on each use case for an example instruction and response.
Instruction
Response
Instruction
Response
Check out the branch you want to rebase:
$ git checkout <branch-name>
Rebase your branch against the upstream branch:
$ git rebase upstream/<branch-name>
Resolve any conflicts that arise.
Once all conflicts are resolved, run git status to verify that all files have been updated correctly.
Finally, push the rebased branch to the remote repository:
$ git push origin <branch-name> --force
Instruction
Response
A B+ tree is a self-balancing tree data structure used in databases to store and retrieve data efficiently. It is a variation of the B-tree and is designed to minimize the number of disk accesses required for insertion, deletion, and retrieval operations. B+ trees are characterized by the number of keys stored per node, the order of the nodes, and the number of children per node.
For example, a B+ tree of order 5 has at most 5 children per node, and therefore at most 4 keys per node. When a node becomes full, it splits into two nodes, each containing about half of the keys, and a separator key is passed up to the parent. All leaves sit at the same depth and are linked together, which keeps lookups and range scans efficient.
Here is an example in Python
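A minimal sketch of the idea (insertion and point lookup only; an illustration of B+ tree mechanics, not production code):

```python
import bisect

class Node:
    def __init__(self, leaf=True):
        self.leaf = leaf
        self.keys = []        # sorted keys
        self.children = []    # child nodes (internal nodes only)
        self.values = []      # values parallel to keys (leaf nodes only)
        self.next = None      # right-sibling pointer (leaf nodes only)

class BPlusTree:
    def __init__(self, order=4):
        self.order = order    # maximum number of keys per node
        self.root = Node(leaf=True)

    def search(self, key):
        node = self.root
        while not node.leaf:              # walk down to the correct leaf
            i = bisect.bisect_right(node.keys, key)
            node = node.children[i]
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return node.values[i]
        return None

    def insert(self, key, value):
        split = self._insert(self.root, key, value)
        if split is not None:             # the root itself split: grow one level
            sep, right = split
            new_root = Node(leaf=False)
            new_root.keys = [sep]
            new_root.children = [self.root, right]
            self.root = new_root

    def _insert(self, node, key, value):
        if node.leaf:
            i = bisect.bisect_left(node.keys, key)
            node.keys.insert(i, key)
            node.values.insert(i, value)
        else:
            i = bisect.bisect_right(node.keys, key)
            split = self._insert(node.children[i], key, value)
            if split is not None:
                sep, right = split
                node.keys.insert(i, sep)
                node.children.insert(i + 1, right)
        if len(node.keys) > self.order:   # overflow: split this node
            return self._split(node)
        return None

    def _split(self, node):
        mid = len(node.keys) // 2
        right = Node(leaf=node.leaf)
        if node.leaf:
            right.keys, right.values = node.keys[mid:], node.values[mid:]
            node.keys, node.values = node.keys[:mid], node.values[:mid]
            right.next, node.next = node.next, right
            sep = right.keys[0]           # leaf split: copy the separator up
        else:
            sep = node.keys[mid]          # internal split: push the separator up
            right.keys = node.keys[mid + 1:]
            right.children = node.children[mid + 1:]
            node.keys = node.keys[:mid]
            node.children = node.children[:mid + 1]
        return sep, right

tree = BPlusTree(order=4)
for k in range(1, 11):
    tree.insert(k, f"value-{k}")
print(tree.search(6))   # value-6
```

Leaf nodes keep a `next` pointer to their right sibling, which is what makes in-order range scans cheap in a B+ tree.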
Instruction
Response
Test Case 1: Input: newLabelArray = [ 'Apple', 'Banana', 'Mango', 'Apple' ] oldLabelArray = [ 'Banana', 'Mango', 'Orange' ] Expected Output: ['Apple:2', 'Orange:-1']
Test Case 2: Input: newLabelArray = [ 'Apple', 'Banana', 'Apple', 'Orange' ] oldLabelArray = [ 'Banana', 'Mango', 'Orange' ] Expected Output: ['Apple:2', 'Mango:-1']
Instruction
Response
This code takes two arrays, newLabelArray and oldLabelArray, and generates a new array that contains the difference between the two arrays. It does this by first creating three empty arrays, new_, old_, and merged_. It then iterates through the newLabelArray and adds each label to the new_ array, as well as the merged_ array. It then iterates through the oldLabelArray and does the same for the old_ array, but also subtracts it from the merged_ array. Finally, it iterates through the merged_ array and checks for any values that are not equal to 0 and adds them to the final array. It then returns this final array.
Instruction
Response
/* This method takes two arrays, newLabelArray and oldLabelArray. It uses the elements of both arrays to generate a merged array, merged_, where it counts the occurrences of each element in both arrays. It then creates a third array, final, containing the elements of merged_ where the difference is not 0. The output is the final array. */
Instruction
Response
Use ES6 syntax to simplify the code.
Use a hashmap instead of looping through the arrays.
Use a single loop to build the final array instead of two loops.
Use array methods instead of for-each loops.
Use optimized data structures like Set and Map.
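Taken together, these suggestions reduce the method to a single counting pass over each array. Here is a sketch of the idea in Python, with `collections.Counter` standing in for the suggested hash map (the original snippet is JavaScript, so this is an illustration rather than a drop-in replacement):

```python
from collections import Counter

def label_diff(new_labels, old_labels):
    # One hash map instead of three arrays and multiple loops:
    # +1 for every occurrence in the new array, -1 for every one in the old.
    merged = Counter(new_labels)
    merged.subtract(old_labels)
    # Single pass to build the result: keep labels with a non-zero net count.
    return [f"{label}:{count}" for label, count in merged.items() if count != 0]

print(label_diff(['Apple', 'Banana', 'Mango', 'Apple'],
                 ['Banana', 'Mango', 'Orange']))   # ['Apple:2', 'Orange:-1']
```

Its output matches the test cases listed earlier: labels with a non-zero net count come out as "Label:count".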
Instruction
Response
Instruction
Response
A lack of input validation: the user-supplied search term is not being validated to ensure it is safe to use in a database query. This could leave the system open to SQL injection attacks.
A lack of output encoding: the results from the database are not being encoded before they are printed. This could leave the system open to cross-site scripting attacks.
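Since the flagged snippet itself is not shown here, the sketch below is hypothetical (the `products` table and the function names are invented for illustration), but it shows the standard fix for both findings in Python: a parameterized query for the injection risk, and HTML escaping before output.

```python
import html
import sqlite3

def search_products(conn, term):
    # Parameterized query: the driver treats `term` strictly as data, so
    # input like "'; DROP TABLE products; --" cannot alter the SQL statement.
    cur = conn.execute(
        "SELECT name FROM products WHERE name LIKE ?", (f"%{term}%",)
    )
    return [row[0] for row in cur.fetchall()]

def render_result(name):
    # Output encoding: escape the value before embedding it in HTML,
    # so stored markup cannot execute as script (XSS).
    return f"<li>{html.escape(name)}</li>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.execute("INSERT INTO products VALUES ('Widget')")
print(search_products(conn, "Wid"))                         # ['Widget']
print(search_products(conn, "'; DROP TABLE products; --"))  # []
print(render_result("<b>Widget</b>"))   # <li>&lt;b&gt;Widget&lt;/b&gt;</li>
```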
Instruction
Response
Instruction
Response
//ESLint would find the following issues:
Missing semicolons (potential syntax errors)
Missing spacing after 'forEach' statements
Unused variables (e.g. 'old_')
Missing spacing between operators (e.g. '++new_[lbl]')
On-demand, context-aware AI code reviews for GitHub, GitLab, and Bitbucket.
The AI Code Review Agent seamlessly integrates with Git providers such as GitHub, GitLab, and Bitbucket, automatically posting recommendations directly as comments within the corresponding pull request. It includes real-time recommendations from static code analysis and OSS vulnerability tools such as fbinfer and Dependency-Check, and can include high-severity suggestions from other third-party tools you use, such as Snyk.
The AI Code Review Agent acts as a set of specialized engineers each analyzing different aspects of your PR. They analyze aspects such as Performance, Code Structure, Security, Optimization, and Scalability. By combining and filtering the results, the Agent can provide you with much more detailed and insightful code reviews, bringing you a better quality code review and helping you save time.
The AI Code Review Agent helps engineering teams merge code faster while also keeping the code clean and up to standard, making sure it runs smoothly and follows best practices.
The AI Code Review Agent is built using Bito Dev Agents, an open framework and engine to build custom AI Agents for software developers that understands code, can connect to your organization’s data and tools, and can be discovered and shared via a global registry.
In many organizations, senior developers spend approximately half of their time reviewing code changes in PRs to find potential issues. The AI Code Review Agent can help save this valuable time.
AI Code Review Agent speeds up PR merges by 89%, reduces regressions by 34%, and delivers 87% human-grade feedback.
However, it's important to remember that the AI Code Review Agent is designed to assist, not replace, senior software engineers. It takes care of many of the more mundane issues involved in code review, so senior engineers can focus on the business logic and how new development is aligned with your organization’s business goals.
Explore the powerful capabilities of the AI Code Review Agent.
To comprehend your code and its dependencies, it uses Symbol Indexing, Abstract Syntax Trees (AST), and Embeddings.
Bito supports integration with the following Git providers:
By default, the AI Code Review Agent automatically reviews all new pull requests and provides detailed feedback. To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.
Get a concise overview of your pull request (PR) directly in the description section, making it easier to understand the code changes at a glance. This includes a summary of the PR, the type of code changes, whether unit tests were added, and the estimated effort required for review.
A tabular view that displays key changes in a pull request, making it easy to spot important updates at a glance without reviewing every detail. Changelist categorizes modifications and highlights impacted files, giving you a quick, comprehensive summary of what has changed.
The AI-generated code review feedback is posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.
You can accept the suggestions with a single click, and the changes will be added as a new commit to the pull request.
Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.
Real-time collaboration with the AI Code Review Agent accelerates your development cycle. By delivering immediate, actionable insights, it eliminates the delays typically experienced with human reviews. Developers can engage directly with the Agent to clarify recommendations on the spot, ensuring that any issues are addressed swiftly and accurately.
Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.
AI Code Review Agent automatically reviews only the recent changes each time you push new commits to a pull request. This saves time and reduces costs by avoiding unnecessary re-reviews of all files.
The AI Code Review Agent offers a flexible solution for teams looking to enforce custom code review rules, standards, and guidelines tailored to their unique development practices. Whether your team follows specific coding conventions or industry best practices, you can customize the Agent to suit your needs.
We support two ways to customize AI Code Review Agent’s suggestions:
The AI Code Review Agent acts as a team of specialized engineers, each analyzing different aspects of your pull request. You'll get specific advice for improving your code, right down to the exact line in each file.
The areas of analysis include:
Security
Performance
Scalability
Optimization
Impact: will this change break anything else, based on the diff?
Code structure and formatting (e.g., tabs vs. spaces)
Basic coding standards, including variable naming (e.g., avoiding non-descriptive names like ijk)
This multifaceted analysis results in more detailed and accurate code reviews, saving you time and improving code quality.
Elevate your code reviews by harnessing the power of the development tools you already trust. Bito's AI Code Review Agent seamlessly integrates feedback from essential tools including:
Static code analysis
Open source security vulnerabilities check
Linter integrations
Secrets scanning (e.g., passwords, API keys, sensitive information)
Static code analysis
Using tools like Facebook’s open-source fbinfer (available out of the box), the Agent dives deep into your code—tailored to each language—and suggests actionable fixes. You can also configure additional tools you use for a more customized analysis experience.
Open source security vulnerabilities check
Linter integrations
Our integrated linter support reviews your code for consistency and adherence to best practices. By catching common errors early, it ensures your code stays clean, maintainable, and aligned with modern development standards.
Secrets scanning
Safeguard your sensitive data effortlessly. With built-in scanning capabilities, the Agent checks your code for exposed passwords, API keys, and other confidential information—helping to secure your codebase throughout the development lifecycle.
No matter if you're coding in Python, JavaScript, Java, C++, or beyond, our AI Code Review Agent has you covered. It understands the unique syntax and best practices of every popular language, delivering tailored insights that help you write cleaner, more efficient code—every time.
Bito and third-party LLM providers never store or use your code, prompts, or any other data for model training or any other purpose.
Bito is SOC 2 Type II compliant. This certification reinforces our commitment to safeguarding user data by adhering to strict security, availability, and confidentiality standards. SOC 2 Type II compliance is an independent, rigorous audit that evaluates how well an organization implements and follows these security practices over time.
Supports key languages & tools, including fbinfer, Dependency-Check, and Snyk.
Basic Code Understanding provides the surrounding code for the diff to help the AI better understand the context of the diff.
| Language | AI Code Review | Basic Code Understanding | Advanced Code Understanding | File Extensions |
| --- | --- | --- | --- | --- |
| C | YES | YES | YES | .c, .h |
| C++ | YES | YES | YES | .cpp, .hpp |
| C# | YES | YES | YES | .cs |
| Go | YES | YES | YES | .go |
| HTML/CSS | YES | YES | YES | .html, .css |
| SCSS | YES | YES | YES | .scss |
| Java | YES | YES | YES | .java |
| JavaScript | YES | YES | YES | .js |
| JavaScript Framework | YES | YES | YES | .jsx |
| Kotlin | YES | YES | YES | .kt |
| PHP | YES | YES | YES | .php |
| Python | YES | YES | YES | .py |
| Ruby | YES | YES | YES | .rb |
| Rust | YES | YES | YES | .rs |
| Scala | YES | YES | YES | .scala, .sc |
| Swift | YES | YES | YES | .swift |
| Terraform | YES | YES | YES | .tf |
| TypeScript | YES | YES | YES | .ts |
| TypeScript Framework | YES | YES | YES | .tsx |
| Vue.js | YES | YES | YES | .vue |
| SQL | YES | YES | Coming soon | Coming soon |
| Bash/Shell | YES | YES | Coming soon | Coming soon |
| PowerShell | YES | YES | Coming soon | Coming soon |
| Dart | YES | YES | Coming soon | Coming soon |
| Lua | YES | YES | Coming soon | Coming soon |
| Visual Basic .NET | YES | YES | Coming soon | Coming soon |
| R | YES | YES | Coming soon | Coming soon |
| Assembly | YES | YES | Coming soon | Coming soon |
| Groovy | YES | YES | Coming soon | Coming soon |
| Delphi | YES | YES | Coming soon | Coming soon |
| Objective-C | YES | YES | Coming soon | Coming soon |
| Others | YES | YES | Coming soon | Coming soon |
| Language | Static Code Analysis | Linters |
| --- | --- | --- |
| C | YES (using Facebook Infer) | NO |
| C++ | YES (using Facebook Infer) | NO |
| C# | NO | NO |
| Go | YES (using golangci-lint) | YES |
| HTML/CSS | NO | NO |
| SCSS | NO | NO |
| Java | YES (using Facebook Infer) | NO |
| JavaScript | YES (using ESLint) | YES |
| Kotlin | NO | NO |
| PHP | NO | NO |
| Python | YES (using Astral Ruff and Mypy) | NO |
| Ruby | NO | NO |
| Rust | NO | NO |
| Scala | NO | NO |
| Swift | NO | NO |
| Terraform | NO | NO |
| TypeScript | YES (using ESLint) | YES |
| Vue.js | NO | NO |
| SQL | NO | NO |
| Bash/Shell | NO | NO |
| PowerShell | NO | NO |
| Dart | NO | NO |
| Lua | NO | NO |
| Visual Basic .NET | NO | NO |
| R | NO | NO |
| Assembly | NO | NO |
| Groovy | NO | NO |
| Delphi | NO | NO |
| Objective-C | YES (using Facebook Infer) | NO |
| Others | NO | NO |
| Tool | Purpose | Supported |
| --- | --- | --- |
| Facebook Infer | Static Code Analysis for Java, C, C++, and Objective-C | YES |
| ESLint | Linter for JavaScript and TypeScript | YES |
| golangci-lint | Linter for Go | YES |
| Astral Ruff | Linter for Python | YES |
| Mypy | Static Type Checker for Python | YES |
| OWASP Dependency-Check | Security | YES |
| Snyk | Security | YES |
| Whispers | Secrets scanner (e.g., passwords, API keys, sensitive information) | YES |
| detect-secrets | Secrets scanner (e.g., passwords, API keys, sensitive information) | YES |
| Git Provider | Type | Supported |
| --- | --- | --- |
| GitHub cloud | Code Repository | YES |
| GitHub (Self-Managed) | Code Repository | YES, supports version 3.0 and above |
| GitLab cloud | Code Repository | YES |
| GitLab (Self-Managed) | Code Repository | YES, supports version 15.5 and above |
| Bitbucket | Code Repository | YES |
| Azure DevOps | Code Repository | Coming soon |
Deploy the AI Code Review Agent in Bito Cloud or opt for self-hosted service.
Bito Cloud
Pros:
Simplicity: Enjoy a straightforward setup with a single-click installation process, making it easy to get started without technical hurdles.
Maintenance-Free: Bito Cloud takes care of all necessary updates and maintenance, ensuring your Agent always operates on the latest software version without any effort on your part.
Scalability: The platform is designed to easily scale, accommodating project growth effortlessly and ensuring reliable performance under varying loads.
Cons:
Handling of Pull Request Diffs: For analysis purposes, diffs from pull requests are temporarily stored on our servers.
Self-hosted service
Pros:
Full Control: Self-hosting provides complete control over the deployment environment, allowing for extensive customization and the ability to integrate with existing systems as needed.
Privacy and Security: Keeping the AI Code Review Agent within your own infrastructure can enhance data security and privacy, as all information remains under your direct control.
Cons:
Setup Complexity: Establishing a self-hosted environment requires technical know-how and can be more complex than using a managed service, potentially leading to longer setup times.
Maintenance Responsibility: The responsibility of maintaining and updating the software falls entirely on your team, which includes ensuring the system is scaled appropriately to handle demand.
Deploy the AI Code Review Agent in Bito Cloud.
Vim/Neovim Plugin for Bito Using Bito CLI
We are excited to announce that one of our users has developed a dedicated Vim and Neovim plugin for Bito, integrating it seamlessly with your favorite code editor. This plugin enhances your coding experience by leveraging the power of Bito's AI capabilities directly within Vim and Neovim.
Installation
To get started with "vim-bitoai," follow these steps:
Step 1: Install Bito CLI
Step 2: Install the Plugin
Open your terminal and navigate to your Vim or Neovim plugin directory. Then, clone the "vim-bitoai" repository using the following command:
Step 3: Configure the Plugin
Open your Vim or Neovim configuration file and add the following lines:
Save the configuration file and restart your editor or run :source ~/.vimrc (for Vim) or :source ~/.config/nvim/init.vim (for Neovim) to load the changes.
Step 4: Verify the Installation
Open Vim or Neovim, and you should now have the "vim-bitoai" plugin installed and ready to use.
Usage
You can use its powerful features once you have installed the "vim-bitoai" plugin. Here are some of the available commands:
BitoAiGenerate: Generates code based on a given prompt.
BitoAiGenerateUnit: Generates unit test code for the selected code block.
BitoAiGenerateComment: Generates comments for methods, explaining parameters and output.
BitoAiCheck: Performs a check for potential issues in the code and suggests improvements.
BitoAiCheckSecurity: Checks the code for security issues and provides recommendations.
BitoAiCheckStyle: Checks the code for style issues and suggests style improvements.
BitoAiCheckPerformance: Analyzes the code for performance issues and suggests optimizations.
BitoAiReadable: Organizes the code to enhance readability and maintainability.
BitoAiExplain: Generates an explanation for the selected code.
To execute a command, follow these steps:
Open a file in Vim or Neovim that you want to work on.
Select the code block you want to act on. You can use visual mode or manually specify the range using line numbers.
Execute the desired command by running the corresponding command in command mode. For example, to generate code based on a prompt, use the :BitoAiGenerate command. Note: Some commands may prompt you for additional information or options.
The plugin will communicate with the Bito CLI and execute the command, providing the output directly within your editor.
By leveraging the "vim-bitoai" plugin, you can directly harness the power of Bito's AI capabilities within your favorite Vim or Neovim editor. This integration lets you streamline your software development process, saving time and effort in repetitive tasks and promoting efficient coding practices.
Customization
The "vim-bitoai" plugin also offers customization options tailored to your specific needs. Here are a few variables you can configure in your Vim or Neovim configuration file:
g:bito_buffer_name_prefix: Sets the prefix for the buffer name in the Bito history. By default, it is set to 'bito_history_'.
g:vim_bito_path: Specifies the path to the Bito CLI executable. If the Bito CLI is not in your system's command path, you can provide the full path to the executable.
g:vim_bito_prompt_{command}: Allows you to customize the prompt for a specific command. Replace {command} with the desired command.
To define a custom prompt, add a line of the form let g:vim_bito_prompt_{command} = "your prompt" to your Vim or Neovim configuration file, replacing {command} and the prompt text as desired.
Remember to restart your editor or run the appropriate command to load the changes.
We encourage you to explore the "vim-bitoai" plugin and experience the benefits of seamless integration between Bito and your Vim or Neovim editor. Feel free to contribute to the repository or provide feedback to help us further improve this plugin and enhance your coding experience.
Integrate the AI Code Review Agent into your GitHub workflow.
Coming soon...
Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:
Bito supports integration with the following Git providers:
GitHub
GitHub (Self-Managed)
GitLab
GitLab (Self-Managed)
Bitbucket
Since we are setting up the Agent for GitHub, select GitHub to proceed.
To enable pull request reviews, you need to install and authorize Bito's AI Code Review Agent app.
Click the Install Bito App for GitHub button. This will redirect you to GitHub.
On GitHub, select where you want to install the app.
Grant Bito access to your repositories:
Choose All repositories to enable Bito for every repository in your account.
Or, select Only select repositories and pick specific repositories using the dropdown menu.
Click Install & Authorize to proceed. Once completed, you will be redirected to Bito.
After connecting Bito to your GitHub account, you need to enable the AI Code Review Agent for your repositories.
Click the Go to repository list button to view all repositories Bito can access in your GitHub account.
Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.
Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:
Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.
Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.
The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.
To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.
Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.
Get up and running with Bito in just a few steps! Bito seamlessly integrates with Visual Studio Code, providing powerful AI-driven coding assistance directly within your editor. Click the button below to quickly install the Bito extension and start optimizing your development workflow with context-aware AI assistance and more.
Starting with Bito version 1.3.4, the extension is only supported on VS Code versions 1.72 and higher. Bito does not support VS Code versions below 1.72, and earlier versions of Bito do not function properly on these older versions. Additionally, while Bito is supported on VS Code versions 1.72 and above, the feature in Bito only works on VS Code version 1.80 and higher.
Visual Studio Code Marketplace Link
Get up and running with Bito in just a few steps! Bito seamlessly integrates with popular JetBrains IDEs such as IntelliJ IDEA, PyCharm, and WebStorm, providing powerful AI-driven coding assistance directly within your editor. Click the button below to quickly install the Bito extension and start optimizing your development workflow with context-aware AI assistance and more.
The menu to invoke the settings dialog may differ for different IDEs of the JetBrains family. The screenshots highlighted above are for the IntelliJ IDEA. You can access the Bito extension directly from the JetBrains marketplace at .
to open the Bito Chrome Extension page.
Bito's AI Code Review Agent is the first agent built with Bito's AI Agent framework and engine. It is an automated AI assistant (powered by Anthropic's Claude 3.7 Sonnet) that reviews your team's code; it spots bugs, issues, code smells, and security vulnerabilities in Pull/Merge Requests (PR/MR) and provides high-quality suggestions to fix them.
It ensures a secure and confidential experience without compromising on reliability. Bito neither reads nor stores your code, and none of your code is used for AI model training. Learn more about our privacy and security practices.
By accessing Bito's code understanding features, the AI Code Review Agent can analyze relevant context from your entire repository, providing better context-aware analysis and suggestions. This tailored approach ensures a more personalized and contextually relevant code review experience.
To comprehend your code and its dependencies, we use Symbol Indexing, Abstract Syntax Trees (AST), and Embeddings. Each step feeds into the next, starting from locating specific code snippets with Symbol Indexing, getting their structural context with AST parsing, and then leveraging embedding vectors for broader semantic insights. This approach ensures a detailed understanding of the code's functionality and its dependencies. For more information, see
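As a toy illustration of how these three stages can feed into each other (a simplified sketch of the general technique, not Bito's actual implementation; the sample functions are invented), consider indexing a small Python snippet:

```python
import ast
import math
from collections import Counter

SOURCE = '''
def apply_discount(price, rate):
    return price * (1 - rate)

def cart_total(cart, rate):
    return sum(apply_discount(p, rate) for p in cart)
'''

tree = ast.parse(SOURCE)
functions = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]

# 1) Symbol indexing: map each definition name to where it lives,
#    so relevant snippets can be located quickly.
symbol_index = {fn.name: fn.lineno for fn in functions}

# 2) AST parsing: recover structural context, e.g. which functions a
#    definition calls (its dependencies).
calls = {
    fn.name: sorted({
        sub.func.id
        for sub in ast.walk(fn)
        if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name)
    })
    for fn in functions
}

# 3) Embeddings (toy bag-of-words vectors): cosine similarity stands in
#    for the semantic similarity a real embedding model would provide.
def embed(text):
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

print(symbol_index)           # {'apply_discount': 2, 'cart_total': 5}
print(calls['cart_total'])    # ['apply_discount', 'sum']
```

In this sketch, the symbol index finds the snippet, the AST supplies its structural context (here, the call graph), and the embedding step lets broader, semantically related code be pulled in as additional context.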
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our page.
A quick look at the powerful features of the AI Code Review Agent—click to jump to details.
The AI Code Review Agent understands code changes in pull requests. It analyzes relevant context from your entire repository, resulting in more accurate and helpful code reviews.
Bito Cloud offers a one-click solution for using the AI Code Review Agent, eliminating the need for any downloads on your machine.
The Agent evaluates the complexity and quality of the changes to estimate the effort required to review them, giving reviewers the ability to plan their schedules better. For more information, see
You can enable or disable incremental reviews at the Agent instance level or workspace level, giving your team more control over the review process. Contact us to customize this feature according to your team's needs.
Get in-depth insights into your org’s code reviews with a user-friendly dashboard. Track key metrics such as pull requests reviewed, issues found, and lines of code reviewed, and understand individual contributions.
, and the AI Code Review Agent automatically adapts by creating code review rules to prevent similar suggestions in the future.
, and we will implement them within your Bito workspace.
The AI Code Review Agent checks in real time for the latest high-severity security vulnerabilities in your code, using tooling available out of the box. Additional tools can also be configured.
The AI Code Review Agent understands code changes in pull requests by analyzing relevant context from your entire repository, resulting in more accurate and helpful code reviews. The Agent provides either Basic Code Understanding or Advanced Code Understanding based on the programming languages used in the code diff. Learn more about all the supported languages in the table below.
Advanced Code Understanding provides the LLM with detailed, holistic information about the changes a diff makes: the global variables, libraries, and frameworks in use (e.g., Lombok in Java, React for JS/TS, or Angular for TS), the specific functions/methods and classes the diff is part of, and the upstream and downstream impact of the change. Using advanced code traversal and understanding techniques, such as symbol indexes, embeddings, and abstract syntax trees, Bito works to deeply understand what your changes are about and their impact and relevance to the greater codebase, much as a senior engineer does when reviewing code.
For requests to add support for specific programming languages, please reach out to us at
For custom SAST tools configuration to support specific languages in the , please reach out to us at
The AI Code Review Agent offers two primary deployment options: running in Bito Cloud or as a self-hosted service. Each option comes with its own set of benefits and considerations.
Bito Cloud provides a managed environment for running the AI Code Review Agent, offering a seamless, hassle-free experience. This option is ideal for teams looking for quick deployment and minimal operational overhead.
The self-hosted AI Code Review Agent offers a higher degree of control and customization, suited for organizations with specific requirements or those who prefer to manage their own infrastructure.
Bito Cloud offers a single-click solution for using the AI Code Review Agent, eliminating the need for any downloads on your machine. You can create multiple instances of the Agent, allowing each to be used with a different repository on a Git provider such as GitHub, GitLab, or Bitbucket.
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our page.
Make sure you have Bito CLI installed on your system. If you haven't installed it, you can find detailed instructions in the Bito CLI repository.
Speed up code reviews by configuring the AI Code Review Agent with your GitHub repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using the /review command.
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our page.
and select a workspace to get started.
Navigate to the setup page via the sidebar.
Note: To enhance efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main", "master", and all non-default branches. This prevents unnecessary processing and token usage, as changes to these branches are typically already reviewed in release or feature branches. To modify this default behavior and include the "main" or "master" branches, you can use the .
For more details, refer to .
Install on VS Code
Install on JetBrains
Get started
Get a demo
Getting Started
Key Features
Supported Programming Languages and Tools
Agent Configuration: bito-cra.properties File
FAQs
Guide for GitHub
Guide for GitHub (Self-Managed)
Guide for GitLab
Guide for GitLab (Self-Managed)
Guide for Bitbucket
From one-time reviews to continuous automated reviews.
On your machine or in a Private Cloud, you can run the AI Code Review Agent via either CLI or webhooks service. This guide will teach you about the key differences between CLI and webhooks service and when to use each mode.
The main difference between the CLI and webhooks service modes lies in their operational approach and purpose. In CLI mode, the Docker container is used for a one-time code review. This mode is ideal for isolated, single-instance analyses where a quick, direct review of the code is needed.
On the other hand, webhooks service is designed for continuous operation. When set in webhooks service mode, the AI Code Review Agent remains online and active at a specified URL. This continuous operation allows it to respond automatically whenever a pull request is opened in a repository. In this scenario, the git provider notifies the server, triggering the AI Code Review Agent to analyze the pull request and post its review as a comment directly on it.
Selecting the appropriate mode for code review with the AI Code Review Agent depends largely on the nature and frequency of your code review needs.
CLI mode is best suited for scenarios requiring immediate, one-time code reviews. It's particularly effective for:
Conducting quick assessments of specific pull requests.
Performing periodic, scheduled code analyses.
Reviewing code in environments with limited or no continuous integration support.
Integrating with batch processing scripts for ad-hoc analysis.
Using in educational settings to demonstrate code review practices.
Experimenting with different code review configurations.
Reviewing code on local setups or for personal projects.
Performing a final check before pushing code to a repository.
CLI mode stands out for its simplicity and is perfect for standalone tasks where a single, direct execution of the code review process is all that's needed.
Webhooks service, on the other hand, is the go-to choice for continuous code review processes. It excels in:
Continuously monitoring all pull requests in a repository.
Providing instant feedback in collaborative projects.
Seamlessly integrating with CI/CD pipelines for automated reviews.
Performing automated code quality checks in team environments.
Conducting real-time security scans on new pull requests.
Ensuring adherence to coding standards in every pull request.
Streamlining the code review process in large-scale projects.
Maintaining consistency in code review across multiple projects.
Enhancing workflows in remote or distributed development teams.
Offering prompt feedback in agile development settings.
Webhooks service is indispensable in active development environments where consistent monitoring and immediate feedback are critical. It automates the code review process, integrating seamlessly into the workflow and eliminating the need for manual initiation of code reviews.
Integrate the AI Code Review Agent into your self-hosted GitLab workflow.
coming soon...
Before proceeding, ensure you've completed all necessary prerequisites.
For GitLab merge request code reviews, a token with api scope is required. Make sure the token is created by a GitLab user who has the Maintainer access role.
If your GitLab organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.
For more information, please refer to this GitLab documentation:
Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:
Bito supports integration with the following Git providers:
GitHub
GitHub (Self-Managed)
GitLab
GitLab (Self-Managed)
Bitbucket
Since we are setting up the Agent for GitLab (Self-Managed) server, select GitLab (Self-Managed) to proceed.
To enable merge request reviews, you’ll need to connect your Bito workspace to your GitLab (Self-Managed) server.
Enter the details for the following input fields:
Hosted GitLab URL: This is the domain portion of the URL where your self-managed GitLab server is hosted (e.g., https://yourcompany.gitlab.com). Please check with your GitLab administrator for the correct URL.
Click Validate to ensure the token is functioning properly.
If the token is successfully validated, you can select your GitLab Group from the dropdown menu.
Click Connect Bito to GitLab to proceed.
After connecting Bito to your GitLab self-managed server, you need to enable the AI Code Review Agent for your repositories.
Click the Go to repository list button to view all repositories Bito can access in your GitLab self-managed server.
Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.
Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:
Automated code review: By default, the Agent automatically reviews all new merge requests and provides detailed feedback.
Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the merge request and submit it. This action will start the code review process.
The AI-generated code review feedback will be posted as comments directly within your merge request, making it seamless to view and address suggestions right where they matter most.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.
To start the conversation, type your question in the comment box within the inline suggestions on your merge request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.
Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.
Integrate the AI Code Review Agent into your self-hosted GitHub Enterprise workflow.
coming soon...
Before proceeding, ensure you've completed all necessary prerequisites.
For GitHub pull request code reviews, ensure you have a CLASSIC personal access token with repo scope. We do not support fine-grained tokens currently.
Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:
Bito supports integration with the following Git providers:
GitHub
GitHub (Self-Managed)
GitLab
GitLab (Self-Managed)
Bitbucket
Since we are setting up the Agent for self-managed GitHub Enterprise server, select GitHub (Self-Managed) to proceed.
To enable pull request reviews, you need to register and install Bito’s AI Code Review Agent app on your self-managed GitHub Enterprise server.
Enter the details for the following input fields:
Hosted GitHub URL: This is the domain portion of the URL where your GitHub Enterprise Server is hosted (e.g., https://yourcompany.github.com). Please check with your GitHub administrator for the correct URL.
Click Validate to ensure the login credentials are working correctly. If the credentials are successfully validated, click the Install Bito App for GitHub button. This will redirect you to your GitHub (Self-Managed) server.
Now select where you want to install the app:
Choose All repositories to enable Bito for every repository in your account.
Or, select Only select repositories and pick specific repositories using the dropdown menu.
Click Install & Authorize to proceed. Once completed, you will be redirected to Bito.
After connecting Bito to your self-managed GitHub Enterprise server, you need to enable the AI Code Review Agent for your repositories.
Click the Go to repository list button to view all repositories Bito can access in your self-managed GitHub Enterprise server.
Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.
Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:
Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.
Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and submit it. This action will start the code review process.
The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.
To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.
Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.
Automate code reviews in your Continuous Integration/Continuous Deployment (CI/CD) pipeline—compatible with all CI/CD tools, including Jenkins, Argo CD, GitLab CI/CD, and more.
You can integrate the AI Code Review Agent into your CI/CD pipeline in two ways, depending on your preference:
Option 1: Using the bito_action.properties File
Configure the following properties in the bito_action.properties file located in the downloaded bito-action-script folder.
agent_instance_url: The URL of the Agent instance, provided after configuring the AI Code Review Agent with Bito Cloud.
agent_instance_secret: The secret key for the Agent instance, obtained after configuring the AI Code Review Agent with Bito Cloud.
pr_url: The URL of your pull request on GitLab, GitHub, or Bitbucket.
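Using those placeholders, a minimal bito_action.properties file would look like this sketch (replace each placeholder with your actual value):

```properties
# Values provided by Bito Cloud when the Agent instance is configured
agent_instance_url=<agent_instance_url>
agent_instance_secret=<secret>
# Pull/merge request to review
pr_url=<pr_url>
```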
Run the following command:
./bito_actions.sh bito_action.properties
Note: When using the properties file, make sure to provide all three parameters in the .properties file.
Provide all necessary values directly on the command line:
./bito_actions.sh agent_instance_url=<agent_instance_url> agent_instance_secret=<secret> pr_url=<pr_url>
Replace <agent_instance_url>, <secret>, and <pr_url> with your specific values.
Note: You can also override the values given in the .properties file or provide values that are not included in the file. For example, you can configure agent_instance_url and agent_instance_secret in the bito_action.properties file, and pass only pr_url on the command line at runtime.
./bito_actions.sh bito_action.properties pr_url=<pr_url>
Replace <pr_url> with your specific value.
Incorporate the AI Code Review Agent into your CI/CD pipeline by adding the appropriate commands to your build or deployment scripts. This integration will automatically trigger code reviews as part of the pipeline, enhancing your development workflow by enforcing code quality checks with every change.
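As one illustration, a GitLab CI/CD job could invoke the script on every merge request. This is a sketch, not official Bito configuration: the job name, the stage, and the assumption that the bito-action-script folder is checked into the repository are all hypothetical; adapt them to your pipeline.

```yaml
# Hypothetical GitLab CI/CD job; assumes agent_instance_url and
# agent_instance_secret are already set in bito_action.properties.
ai-code-review:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - cd bito-action-script
    - ./bito_actions.sh bito_action.properties pr_url="$CI_MERGE_REQUEST_PROJECT_URL/-/merge_requests/$CI_MERGE_REQUEST_IID"
```

The pr_url is assembled from GitLab's predefined CI variables, so each merge request pipeline reviews its own changes.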
Integrate the AI Code Review Agent into your Bitbucket workflow.
Coming soon...
Before proceeding, ensure you've completed all necessary prerequisites.
For Bitbucket pull request code reviews, you’ll need to connect your Bito workspace to your Bitbucket account.
Ensure the required permissions are checked:
Under Account, select Read.
Under Pull requests, select Write.
Under Webhooks, select Read and write.
Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:
Bito supports integration with the following Git providers:
GitHub
GitHub (Self-Managed)
GitLab
GitLab (Self-Managed)
Bitbucket
Since we are setting up the Agent for Bitbucket, select Bitbucket to proceed.
To enable pull request reviews, you’ll need to connect your Bito workspace to your Bitbucket account.
Ensure the required permissions are checked:
Under Account, select Read.
Under Pull requests, select Write.
Under Webhooks, select Read and write.
Once generated, enter your Bitbucket username and App password into the input fields in Bito.
Click Authorize to ensure the login credentials are working correctly.
If the credentials are successfully authorized, you can select your Bitbucket workspace from the dropdown menu.
Click Connect Bito to Bitbucket to proceed.
After connecting Bito to your Bitbucket account, you need to enable the AI Code Review Agent for your repositories.
Click the Go to repository list button to view all repositories Bito can access in your Bitbucket account.
Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.
Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:
Automated code review: By default, the Agent automatically reviews all new pull requests and provides detailed feedback.
Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the pull request and click Add comment now to submit it. This action will start the code review process.
The AI-generated code review feedback will be posted as comments directly within your pull request, making it seamless to view and address suggestions right where they matter most.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.
To start the conversation, type your question in the comment box within the inline suggestions on your pull request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.
Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.
Customize the AI Code Review Agent to match your workflow needs.
While the Default Agent is ready for use right away, Bito also gives you the option to create new Agent instances or customize existing ones to suit your specific requirements. This flexibility ensures that the Agent can adapt to a range of workflows and project needs.
For example, you might configure one Agent to disable automatic code reviews for certain repositories, another to exclude specific Git branches from review, and yet another to filter out particular files or folders.
This guide will walk you through how to create or customize an Agent instance, unlocking its full potential to streamline your code reviews.
Once Bito is connected to your GitHub/GitLab/Bitbucket account, you can easily create a new Agent or customize an existing one to suit your workflow.
Once you have selected an Agent to customize, you can modify its settings in the following areas:
Assign a unique alphanumeric name to your Agent. This name acts as an identifier and allows you to invoke the Agent in supported clients using the @<agent_name> command.
Provide a concise description of the Agent's purpose, such as the use case or project it will support. This makes it easier to manage multiple Agents.
Bito provides three tabs for in-depth Agent customization:
In this tab, you can configure how and when the Agent performs reviews:
Automatic review: Toggle to enable or disable automatic reviews when a pull request is created and ready for review.
Automatic incremental review: Toggle to enable or disable reviews for new commits added to a pull request. Only changes since the last review are assessed.
Batch time (hours): Set the wait time (0 to 24 hours) for batching new commits before triggering a review. Lower values result in more frequent incremental reviews.
Draft pull requests: By default, the Agent excludes draft pull requests from automated reviews. Disable this toggle to include drafts.
Automatic summary: Toggle to enable automatic generation of AI summaries for changes, which are appended to the pull request description.
Change Walkthrough: Enable this option to generate a table of changes and associated files, posted as a comment on the pull request.
Use filters to exclude specific pull requests or files from automated workflows:
Files and folders: A list of files/folders that the AI Code Review Agent will not review if they are present in the diff. You can specify files/folders to exclude by name or by glob/regex pattern, and the Agent will automatically skip anything that matches the exclusion list. This filter applies to both manual reviews initiated through the /review command and automated reviews.
Source or Target branch: This filter lets you skip automated reviews for pull requests based on the source or target branch. It is useful where automated reviews are unnecessary or could disrupt the workflow. This filter applies only to automatically triggered reviews; users can still trigger reviews manually via the /review command.
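For example, a files-and-folders exclusion list might contain entries such as the following. These patterns are illustrative, not defaults; check the Agent's configuration screen for the exact syntax it accepts.

```
vendor/**
dist/**
*.min.js
package-lock.json
```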
Enhance the Agent’s reviews by enabling additional tools for static analysis, security checks, and secret detection:
Secret Scanner: Enable this tool to detect and report secrets left in code changes.
Click Select repositories to choose Git repositories for the Agent.
To enable code review for a specific repository, simply select its corresponding checkbox. You can also enable repositories later, after the Agent has been created. Once done, click Save and continue to save the new Agent configuration.
Integrate the AI Code Review Agent into your GitLab workflow.
Coming soon...
Before proceeding, ensure you've completed all necessary prerequisites.
For GitLab merge request code reviews, a token with api scope is required. Make sure the token is created by a GitLab user who has the Maintainer access role.
If your GitLab organization enforces SAML Single Sign-On (SSO), you must authorize your Personal Access Token through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.
For more information, please refer to this GitLab documentation:
Follow the step-by-step instructions below to install the AI Code Review Agent using Bito Cloud:
Bito supports integration with the following Git providers:
GitHub
GitHub (Self-Managed)
GitLab
GitLab (Self-Managed)
Bitbucket
Since we are setting up the Agent for GitLab, select GitLab to proceed.
To enable merge request reviews, you’ll need to connect your Bito workspace to your GitLab account.
You can either connect using OAuth (recommended) for a seamless, one-click setup or manually enter your Personal Access Token.
To connect via OAuth, simply click the Connect with OAuth (Recommended) button. This will redirect you to the GitLab website, where you'll need to log in. Once authenticated, you'll be redirected back to Bito, confirming a successful connection.
If you prefer not to use OAuth, you can connect manually using a Personal Access Token.
Once you've generated a token, click the Alternatively, use Personal or Group Access Token button.
Now, enter the token into the Personal Access Token input field in Bito.
Click Validate to ensure the token is functioning properly.
If you've successfully connected via OAuth or manually validated your token, you can select your GitLab Group from the dropdown menu.
Click Connect Bito to GitLab to proceed.
After connecting Bito to your GitLab account, you need to enable the AI Code Review Agent for your repositories.
Click the Go to repository list button to view all repositories Bito can access in your GitLab account.
Use the toggles in the Code Review Status column to enable or disable the Agent for each repository.
Once a repository is enabled, you can invoke the AI Code Review Agent in the following ways:
Automated code review: By default, the Agent automatically reviews all new merge requests and provides detailed feedback.
Manually trigger code review: To initiate a manual review, simply type /review in the comment box on the merge request and submit it. This action will start the code review process.
The AI-generated code review feedback will be posted as comments directly within your merge request, making it seamless to view and address suggestions right where they matter most.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
Ask questions directly to the AI Code Review Agent regarding its code review feedback. You can inquire about highlighted issues, request alternative solutions, or seek clarifications on suggested fixes.
To start the conversation, type your question in the comment box within the inline suggestions on your merge request, and then submit it. Typically, Bito AI responses are delivered in about 10 seconds. On GitHub and Bitbucket, you need to manually refresh the page to see the responses, while GitLab updates automatically.
Bito supports over 20 languages—including English, Hindi, Chinese, and Spanish—so you can interact with the AI in the language you’re most comfortable with.
Easily duplicate Agent configurations for faster setup.
Follow the steps below to get started:
If your Bito workspace is connected to your GitHub/GitLab/Bitbucket account, a list of the AI Code Review Agent instances configured in your workspace will appear. Locate the instance you wish to duplicate and click the Clone button next to it.
An Agent configuration form will open, pre-populated with the input field values. You can edit these values as needed.
Click Select repositories to choose Git repositories for the new Agent.
To enable code review for a specific repository, simply select its corresponding checkbox. You can also enable repositories later, after the Agent has been created. Once done, click Save and continue to save the new Agent configuration.
Deploy the AI Code Review Agent on your machine.
The self-hosted AI Code Review Agent offers a more private and customizable option for teams looking to enhance their code review processes within their own infrastructure, while maintaining complete control over their data. This approach is ideal for organizations with specific compliance, security, or customization requirements.
When setting up the AI Code Review Agent, you have the flexibility to choose between two primary modes of operation: CLI and webhooks service.
CLI allows developers to manually initiate code reviews directly from the terminal. This mode is ideal for quick, on-demand code reviews without the need for continuous monitoring or integration.
Webhooks service transforms the Agent into a persistent service that automatically triggers code reviews based on specific events, such as pull requests or comments on pull requests. This mode is suitable for teams looking to automate their code review processes.
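The choice between the two modes is made in the Agent's properties file via the mode property. Below is a minimal sketch of the difference; the pull request URL is a placeholder, and the property names and the default port are taken from the configuration reference later in this document:

```properties
# CLI mode: one-time review of a single pull request.
# The pr_url below is a placeholder.
mode = cli
pr_url = https://github.com/example-org/example-repo/pull/42

# Server mode: run as a persistent webhooks service instead.
# server_port defaults to 10051 when mode = server.
# mode = server
# server_port = 10051
```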
Based on your needs and the desired integration level with your development workflow, choose one of the following options to install and run the AI Code Review Agent:
CLI mode is best suited for immediate, one-time code reviews.
Start Docker: Ensure Docker is running on your machine.
Extract and Navigate:
Extract the downloaded .zip file to a preferred location.
Navigate to the extracted folder and then to the “cra-scripts” subfolder.
Note the full path to the “cra-scripts” folder for later use.
Open Command Line:
Use Bash for Linux and macOS.
Use PowerShell for Windows.
Set Directory:
Change the current directory in Bash/PowerShell to the “cra-scripts” folder.
Example command: cd [Path to cra-scripts folder]
Adjust the path based on your extraction location.
Configure Properties:
Set mandatory properties:
mode = cli
pr_url
bito_cli.bito.access_key
git.provider
git.access_token
Optional properties (can be skipped or set as needed):
git.domain
code_feedback
static_analysis
dependency_check
dependency_check.snyk_auth_token
review_scope
exclude_branches
exclude_files
exclude_draft_pr
Run the Agent:
On Linux/macOS in Bash: Run ./bito-cra.sh bito-cra.properties
On Windows in PowerShell: Run ./bito-cra.ps1 bito-cra.properties
Final Steps:
The script may prompt you for the values of mandatory/optional properties if they are not preconfigured.
Upon completion, a code review comment is automatically posted on the Pull Request specified in the pr_url property.
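Putting the steps together, a hypothetical bito-cra.properties for CLI mode might look like the sketch below. All values are placeholders, and the property semantics are described in the bito-cra.properties reference elsewhere in this document:

```properties
# --- Mandatory properties ---
mode = cli
# Placeholder pull request URL
pr_url = https://gitlab.com/example-group/example-project/-/merge_requests/7
# Placeholder credentials
bito_cli.bito.access_key = YOUR_BITO_ACCESS_KEY
git.provider = GITLAB
git.access_token = YOUR_GIT_ACCESS_TOKEN

# --- Optional properties ---
code_feedback = True
static_analysis = True
dependency_check = False
exclude_branches = main,master
exclude_files = *.xml,*.json,*.properties,.gitignore,*.yml,*.md
```

With the file in place, run ./bito-cra.sh bito-cra.properties (Linux/macOS) or ./bito-cra.ps1 bito-cra.properties (Windows) as described above.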
Key requirements for self-hosting the AI Code Review Agent.
Easily delete Agent instances you no longer need.
If your Bito workspace is connected to your GitHub/GitLab/Bitbucket account, a list of AI Code Review Agent instances configured in your workspace will appear.
Before deleting an Agent, ensure that any repositories currently using it are reassigned to another Agent; otherwise, a warning popup will appear.
Locate the Agent you wish to delete and click the Delete button next to it.
Invoke the AI Code Review Agent manually or within a workflow.
The /review command provides a broad overview of your code changes, offering suggestions for improvement across various aspects without diving deep into secure coding, performance optimization, or scalability concerns. This makes it ideal for catching general code quality issues that might not be critical blockers but can enhance readability, maintainability, and overall code health.
Think of it as a first-pass review to identify potential areas for improvement before delving into more specialized analyses.
Five specialized commands are available to perform detailed analyses on specific aspects of your code. Details for each command are given below.
/review security
/review performance
/review scalability
/review codeorg
/review codeoptimize
This command performs an in-depth analysis of your code to identify vulnerabilities that could allow attackers to steal data, gain unauthorized access, or disrupt your application. This includes checking for weaknesses in input validation, output encoding, authentication, authorization, and session management. It also looks for proper encryption of sensitive data, secure coding practices, and potential misconfigurations that could expose your system.
This command evaluates the current performance of the code by pinpointing slow or resource-intensive areas and identifying potential bottlenecks. It helps developers understand where the code may be underperforming against expected benchmarks or standards. It is particularly useful for identifying slow processes that could benefit from further investigation and refinement.
This includes checking how well your code accesses data and manages tasks like database interactions and memory usage.
This command analyzes your code to identify potential roadblocks to handling increased usage or data. It checks how well the codebase supports horizontal scaling and whether it is compatible with load balancing strategies. It also ensures the code can handle concurrent requests efficiently and avoids bottlenecks from single points of failure. The command further examines error handling and retry mechanisms to promote system resilience under pressure.
This command scans your code for readability, maintainability, and overall clarity. This includes checking for consistent formatting, clear comments, well-defined functions, and efficient use of data structures. It also looks for opportunities to reduce code duplication, improve error handling, and ensure the code is written for future growth and maintainability.
This command helps identify specific parts of the code that can be made more efficient through optimization techniques. It suggests refactoring opportunities, algorithmic improvements, and areas where resource usage can be minimized. This command is essential for enhancing the overall efficiency of the code, making it faster and less resource-heavy.
By default, the /review command generates inline comments, placing code suggestions directly beneath the corresponding lines in each file for clearer guidance on improvements. If you prefer a single consolidated code review instead of separate inline comments, set the optional #inline_comment parameter to False.
Example: /review #inline_comment=False
Example: /review scalability #inline_comment=False
A list of files/folders that the AI Code Review Agent will not review if they are present in the diff. You can specify the files/folders to exclude from the review by name or glob/regex pattern. The Agent will automatically skip any files or folders that match the exclusion list.
This filter applies to both manual reviews initiated through the /review command and automated reviews triggered via webhook.
By default, these files are excluded: *.xml, *.json, *.properties, .gitignore, *.yml, *.md
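To illustrate how glob exclusion behaves, the Bash sketch below re-implements the matching against the default list above. The function is illustrative only, not Bito's actual implementation; in Bash case patterns, * also matches path separators, so nested files match suffix globs like *.properties.

```shell
#!/usr/bin/env bash
# Illustrative re-implementation of the default exclude_files globs.
is_excluded() {
  local file=$1 pat
  for pat in '*.xml' '*.json' '*.properties' '.gitignore' '*.yml' '*.md'; do
    # The pattern must stay unquoted so it glob-matches.
    case "$file" in
      $pat) return 0 ;;
    esac
  done
  return 1
}

is_excluded "docs/README.md" && echo "docs/README.md is skipped"
is_excluded "src/server.go"  || echo "src/server.go is reviewed"
```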
This filter allows users to skip automated reviews for pull requests based on the source or target branch. It is useful in scenarios where automated reviews are unnecessary or could potentially disrupt the workflow.
For example, this filter is useful in scenarios such as:
Merging to upstream branches from development branches.
Pull requests from PoC/experiment branches.
Aggregated code changes moving towards the main branch.
This filter applies only to automatically triggered reviews. You can still trigger reviews manually via the /review command.
By default, main, master, and * (all non-default branches) are excluded from automated code reviews. This means you won't get reviews on branches that aren't the main focus of your development. However, pull requests merging into your repository's default branch are always reviewed, even if that branch appears in the exclusion list.
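Branch exclusions follow the same glob/regex conventions as file exclusions. The Bash sketch below (illustrative only, not Bito's implementation) shows how a glob pattern such as BITO-* and an anchored regex such as ^main$ from the branch-pattern examples on this page classify branch names:

```shell
#!/usr/bin/env bash
# Illustrative branch matching: one glob check and one regex check.
branch_matches_glob() {   # e.g. BITO-* matches BITO-123 but not feature-BITO
  local branch=$1 pat=$2
  case "$branch" in
    $pat) return 0 ;;
    *) return 1 ;;
  esac
}

branch_matches_regex() {  # e.g. ^main$ matches only the branch named main
  local branch=$1 pat=$2
  [[ "$branch" =~ $pat ]]
}

branch_matches_glob "BITO-123" 'BITO-*'      && echo "BITO-123 excluded"
branch_matches_regex "main" '^main$'         && echo "main excluded"
branch_matches_regex "main-feature" '^main$' || echo "main-feature reviewed"
```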
A binary setting that enables or disables automated review of pull requests (PRs) based on their draft status. Enter True to disable automated review for draft pull requests, or False to enable it. The default value is True, which skips automated review of draft PRs.
Speed up code reviews by configuring the AI Code Review Agent with your GitLab (Self-Managed) server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a merge request, as well as how to manually initiate reviews using the /review command.
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our pricing page.
Sign in to Bito and select a workspace to get started.
Navigate to the setup page via the sidebar.
Personal Access Token: Generate a GitLab Personal Access Token with api scope in your GitLab (Self-Managed) account and enter it into the Personal Access Token input field. For guidance, refer to the instructions in the section.
Note: To enhance efficiency, the AI Code Review Agent is disabled by default for merge requests involving the "main", "master", and all non-default branches. This prevents unnecessary processing and token usage, as changes to these branches are typically already reviewed in release or feature branches. To modify this default behavior and include the "main" or "master" branches, you can use the exclude_branches filter.
Speed up code reviews by configuring the AI Code Review Agent with your self-managed GitHub Enterprise server. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using the /review command.
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our pricing page.
If your GitHub organization enforces SAML single sign-on (SSO), you must authorize your Personal Access Token (classic) through your Identity Provider (IdP); otherwise, Bito's AI Code Review Agent won't function properly.
Sign in to Bito and select a workspace to get started.
Navigate to the setup page via the sidebar.
Personal Access Token: Generate a Personal Access Token (classic) with “repo” scope in your GitHub (Self-Managed) account and enter it into the Personal Access Token input field. We do not support fine-grained tokens currently. For guidance, refer to the instructions in the section.
Note: To enhance efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main", "master", and all non-default branches. This prevents unnecessary processing and token usage, as changes to these branches are typically already reviewed in release or feature branches. To modify this default behavior and include the "main" or "master" branches, you can use the exclude_branches filter.
The bito-actions.sh script lets you integrate the AI Code Review Agent into your CI/CD pipeline for automated code reviews. This document provides a step-by-step guide to help you configure and run the script successfully.
Follow the step-by-step instructions provided to install the AI Code Review Agent using Bito Cloud. Review the Prerequisites, Installation and Configuration Steps, and the Webhook Setup Guide provided there.
The download includes a shell script (bito-actions.sh) and a configuration file (bito_action.properties).
Speed up code reviews by configuring the AI Code Review Agent with your Bitbucket repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a pull request, as well as how to manually initiate reviews using the /review command.
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our pricing page.
Start by creating a Bitbucket App Password. App Passwords allow apps like Bito to access your Bitbucket account. Make sure that the App Password is created by a Bitbucket user who has the Admin access role for the repositories.
Sign in to Bito and select a workspace to get started.
Navigate to the setup page via the sidebar.
Start by creating a Bitbucket App Password. App Passwords allow apps like Bito to access your Bitbucket account. Make sure that the App Password is created by a Bitbucket user who has the Admin access role for the repositories.
For guidance, refer to the instructions in the section.
Note: To enhance efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main", "master", and all non-default branches. This prevents unnecessary processing and token usage, as changes to these branches are typically already reviewed in release or feature branches. To modify this default behavior and include the "main" or "master" branches, you can use the exclude_branches filter.
Bito Cloud provides immediate access to the AI Code Review Agent. To get you started quickly, Bito offers a Default Agent instance—pre-configured and ready to deliver AI-powered code reviews for pull requests and code changes within supported IDEs such as VS Code and JetBrains.
To create a new Agent, navigate to the page and click the New Agent button to open the Agent configuration form.
If you’d like to customize an existing agent, simply go to the same page and click the Edit button next to the Agent you wish to modify.
When you save the configuration, your new Agent instance will be added and available on the page.
Speed up code reviews by configuring the AI Code Review Agent with your GitLab repositories. In this guide, you'll learn how to set up the Agent to receive automated code reviews that trigger whenever you create a merge request, as well as how to manually initiate reviews using the /review command.
The Free Plan offers AI-generated pull request summaries to provide a quick overview of changes. For advanced features like line-level code suggestions, consider upgrading to the 10X Developer Plan. For detailed pricing information, visit our pricing page.
Sign in to Bito and select a workspace to get started.
Navigate to the setup page via the sidebar.
Start by generating a GitLab Personal Access Token with api scope in your GitLab account. For guidance, refer to the instructions in the section.
Note: To enhance efficiency, the AI Code Review Agent is disabled by default for merge requests involving the "main", "master", and all non-default branches. This prevents unnecessary processing and token usage, as changes to these branches are typically already reviewed in release or feature branches. To modify this default behavior and include the "main" or "master" branches, you can use the exclude_branches filter.
Save time and effort by quickly creating a new instance using the configuration settings of an existing one. It’s a fast and simple way to set up multiple Agent instances without having to reconfigure each one.
Sign in to Bito and select a workspace to get started.
From the left sidebar, select .
When you save the configuration, your new Agent instance will be added and available on the page.
Before proceeding, ensure you've completed all necessary prerequisites for the AI Code Review Agent.
CLI: Ideal for developers seeking a simple, interactive way to conduct code reviews from the command line.
Webhooks service: Perfect for teams looking to automate code reviews through external events, enhancing their CI/CD workflow.
GitHub Actions: A great option for GitHub users to seamlessly integrate automated code reviews into their GitHub Actions workflows.
Prerequisites: Before proceeding, ensure you've completed all necessary prerequisites for the AI Code Review Agent.
Repository Download: Clone or download the GitHub repository to your machine.
Open the bito-cra.properties file from the “cra-scripts” folder in a text editor. Detailed information for each property is provided on the properties reference page.
Check the guide to learn more about creating the access tokens needed to configure the Agent.
Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, use the exclude_branches filter.
Bito Access Key: Obtain your Bito Access Key.
GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.
GitLab Personal Access Token: For GitLab PR code reviews, a token with API access is required.
Snyk API Token (Auth Token): For Snyk vulnerability reports, obtain a Snyk API Token.
If you no longer need an instance, you can delete it to keep your workspace organized. Follow the steps below to quickly remove any unused Agents.
Sign in to Bito and select a workspace to get started.
From the left sidebar, select .
The AI Code Review Agent offers a suite of commands tailored to developers' needs. You can manually trigger a code review by entering any of these commands in the comment box below a pull/merge request on GitHub, GitLab, or Bitbucket and submitting the comment. Alternatively, if you are using the self-hosted version, you can configure these commands in the bito-cra.properties file for automated code reviews.
The AI Code Review Agent offers powerful filters to exclude specific files and folders from code reviews and to skip automated reviews for selected Git branches. These filters can be configured at the Agent instance level, overriding the default behavior.
You can configure filters using the Agent configuration page. For detailed instructions, please refer to the documentation page.
You can configure filters using the bito-cra.properties file. Check the options exclude_branches, exclude_files, and exclude_draft_pr for more details.
You can configure filters using the GitHub Actions repository variables: EXCLUDE_BRANCHES, EXCLUDE_FILES, and EXCLUDE_DRAFT_PR. For detailed instructions, please refer to the documentation page.
Note: This file is only available if you are using the self-hosted version of the AI Code Review Agent.
The bito-cra.properties file offers a comprehensive range of options for configuring the AI Code Review Agent, enhancing its flexibility and adaptability to various workflow requirements.
CPU Cores: 4
RAM: 8 GB
Hard Disk Drive: 80 GB
Exclude all properties files in all folders and subfolders
Pattern: *.properties
Excluded: resource/config.properties, resource/server/server.properties
Not excluded: resource/config.yaml, resource/config.json

Exclude all files, folders, and subfolders in folders starting with resources
Pattern: resources/
Excluded: resources/application.properties, resources/config/config.yaml
Not excluded: app/resources/file.txt, config/resources/service.properties

Exclude all files, folders, and subfolders in the folder src/com/resources
Pattern: src/com/resources/
Excluded: src/com/resources/application.properties, src/com/resources/config/config.yaml
Not excluded: app/resources/file.txt, config/resources/service.properties

Exclude all files, folders, and subfolders in any resource subfolder under the parent folder src
Pattern: src/*/resource/*
Excluded: src/com/resource/main.html, src/com/resource/script/file.css, src/com/resource/app/script.js
Not excluded: src/resource/file.txt, src/com/config/file.txt, app/com/config/file.txt

Exclude non-CSS files in the folder src/com/resource/ and its subfolders
Pattern: ^src\\/com\\/resource\\/(?!.*\\.css$).*$
Excluded: src/com/resource/main.html, src/com/resource/app/script.js
Not excluded: src/com/config/file.txt, src/com/resource/script/file.css

Exclude the specific file controller/webhook_controller.go
Pattern: controller/webhook_controller.go
Excluded: controller/webhook_controller.go
Not excluded: controller/controller.go, controller/webhook_service.go

Exclude non-CSS files in folders starting with config and their subfolders
Pattern: ^config\\/(?!.*\\.css$).*$
Excluded: config/server.yml, config/util/conf.properties
Not excluded: config/profile.css, config/styles/main.css

Exclude all files and folders
Pattern: *
Excluded: resource/file.txt, config/file.properties, app/folder/
Not excluded: -

Exclude all files and folders starting with bito in the module folder
Pattern: module/bito*
Excluded: module/bito123, module/bitofile.js, module/bito/file.js
Not excluded: module/filebito.js, module/file2.txt, module/util/file.txt

Exclude single-character folder names
Pattern: */?/*
Excluded: src/a/file.txt, app/b/folder/file.yaml
Not excluded: folder/file.txt, ab/folder/file.txt

Exclude everything except files and folders under a folder starting with service
Pattern: ^(?!service\\/).*$
Excluded: config/file.txt, resources/file.yaml
Not excluded: service/file.txt, service/config/file.yaml

Exclude all files except .py, .go, and .java files
Pattern: ^(?!.*\\.(py|go|java)$).*$
Excluded: config/file.txt, app/main.js
Not excluded: main.py, module/service.go, test/Example.java
Exclude any branch that starts with BITO-
Pattern: BITO-*
Excluded: BITO-feature, BITO-123
Not excluded: feature-BITO, development

Exclude any branch that does not start with BITO-
Pattern: ^(?!BITO-).*
Excluded: feature-123, release-v1.0
Not excluded: BITO-feature, BITO-123

Exclude any branch that is not exactly BITO
Pattern: ^(?!BITO$).*
Excluded: feature-BITO, development
Not excluded: BITO

Exclude branches like release/v1.0 and release/v1.0.1
Pattern: release/v\\d+\\.\\d+(\\.\\d+)?
Excluded: release/v1.0, release/v1.0.1
Not excluded: release/v1, release/v1.0.x

Exclude any branch ending with -test
Pattern: *-test
Excluded: feature-test, release-test
Not excluded: test-feature, release-testing

Exclude any branch containing the keyword main
Pattern: main
Excluded: main, main-feature, mainline
Not excluded: master, development

Exclude the branch named main
Pattern: ^main$
Excluded: main
Not excluded: main-feature, mainline, master, development

Exclude any branch that does not start with feature- or release-
Pattern: ^(?!release-|feature-).*$
Excluded: hotfix-123, development
Not excluded: feature-123, release-v1.0

Exclude branches with names containing digits
Pattern: .*\\d+.*
Excluded: feature-123, release-v1.0
Not excluded: feature-abc, main

Exclude branches with names ending with test or testing
Pattern: .*(test|testing)$
Excluded: feature-test, bugfix-testing
Not excluded: testing-feature, test-branch

Exclude branches with names containing the substring test
Pattern: *test*
Excluded: feature-test, test-branch, testing
Not excluded: feature, release

Exclude branches with names containing exactly three characters
Pattern: ^.{3}$
Excluded: abc, 123
Not excluded: abcd, ab

Exclude branch names starting with release, hotfix, or development, but not starting with Bito or feature
Pattern: ^(?!Bito|feature)(release|hotfix|development).*$
Excluded: release-v1.0, hotfix-123, development-xyz
Not excluded: Bito-release, feature-hotfix, main-release

Exclude all branches whose names do not contain a version number like 1.0 or 1.0.1
Pattern: ^(?!.*\\b\\d+\\.\\d+(\\.\\d+)?\\b).*
Excluded: feature-xyz, main
Not excluded: release-v1.0, hotfix-1.0.1

Exclude all branches whose names contain non-alphanumeric characters
Pattern: ^.*[^a-zA-Z0-9].*$
Excluded: feature-!abc, release-@123
Not excluded: feature-123, release-v1.0

Exclude all branches whose names contain a space
Pattern: .*\\s.*
Excluded: feature 123, release v1.0
Not excluded: feature-123, release-v1.0
Linux
You will need:
Bash (minimum version 4.x)
For Debian and Ubuntu systems
sudo apt-get install bash
For CentOS and other RPM-based systems
sudo yum install bash
Docker (minimum version 20.x)
macOS
You will need:
Bash (minimum version 4.x)
brew install bash
Docker (minimum version 20.x)
Windows
You will need:
PowerShell (minimum version 5.x)
Note: In PowerShell version 7.x, run the Set-ExecutionPolicy Unrestricted command. It allows the execution of scripts without any constraints, which is essential for running scripts that are otherwise blocked by default security settings.
Docker (minimum version 20.x)
mode
Possible values: cli, server
Mandatory: Yes
Whether to run the Docker container in CLI mode for a one-time code review or as a webhooks service that continuously monitors for code review requests.

pr_url
Possible values: a pull request URL on GitLab, GitHub, or Bitbucket
Mandatory: Yes, if the mode is cli.
The pull request provides the changed files and the actual code modifications. When mode is set to server, pr_url is received either through a webhook call or via a REST API call. This release only supports webhook calls; other REST API calls are not yet supported.

code_feedback
Possible values: True, False
Mandatory: No
Setting it to True activates general code review comments that identify functional issues. If set to False, a general code review is not conducted.

bito_cli.bito.access_key
Possible values: a valid Bito Access Key generated through Bito's web UI
Mandatory: Yes
The Bito Access Key is an alternative to standard email and OTP authentication.

git.provider
Possible values: GITLAB, GITHUB, BITBUCKET
Mandatory: Yes, if the mode is cli.
The name of the Git repository provider.

git.access_token
Possible values: a valid Git access token issued by GitLab, GitHub, or Bitbucket
Mandatory: Yes
You can use a personal access token in place of a password when authenticating to GitHub/GitLab/Bitbucket on the command line or with the API.

git.domain
Possible values: a URL where Git is hosted
Mandatory: No
Used to enter the custom URL of a self-hosted GitHub/GitLab Enterprise instance.

static_analysis
Possible values: True, False
Mandatory: No
Enables or disables FBInfer-based static code analysis, which is used to uncover functional issues in the code.

dependency_check
Possible values: True, False
Mandatory: No
Identifies security vulnerabilities in open-source dependency packages, specifically for JS/TS/Node.js and GoLang. Without this input, reviews for security vulnerabilities are not conducted.

dependency_check.snyk_auth_token
Possible values: a valid authentication token for accessing Snyk's cloud-based security database
Mandatory: No
If not provided, Snyk's cloud-based security database cannot be used to check for security vulnerabilities in open-source dependency packages.

server_port
Possible values: a valid and available TCP port number
Mandatory: No
Applicable when mode is set to server. If not specified, the default value is 10051.

review_comments
Possible values: 1, 2
Mandatory: No
Set the value to 1 to display the code review in a single post, or 2 to show the review as inline comments, placing suggestions directly beneath the corresponding lines in each file. The default value is 2.

review_scope
Possible values: security, performance, scalability, codeorg, codeoptimize
Mandatory: No
Limits the review to one or more specialized focus areas.

exclude_branches
Possible values: glob/regex pattern
Mandatory: No
Skips automated reviews for pull requests based on the source or target branch. Useful in scenarios where automated reviews are unnecessary or could potentially disrupt the workflow. You can specify additional branches using a comma-separated list or string patterns (glob/regex).

exclude_files
Possible values: glob/regex pattern
Mandatory: No
A list of files/folders that the AI Code Review Agent will not review if they are present in the diff.

exclude_draft_pr
Possible values: True, False
Mandatory: No
Enter True to disable automated review for draft pull requests, or False to enable it. The default value is True.
AI Coding agent that takes action
Bito Wingman is an AI coding agent designed to revolutionize the way you build software. Unlike traditional code assistants or autocomplete tools, Wingman acts as a virtual developer on your team, capable of handling complex coding tasks from start to finish, with direction from you. Much of Bito Wingman was built by Bito Wingman.
Wingman understands high-level instructions, breaks them into actionable steps, researches relevant information, and executes tasks autonomously.
AI in its purest form frees us up to work much more iteratively and on many things at one time. But you need tools that work that way too. Wingman is designed to be nimble to meet your work habits, from the browser to your local IDE. Run as many tasks as you have browser tabs open. Work in your IDE too. Switch back and forth. It’s all possible with your Wingman.
Here are some real-world examples of tasks you can ask Wingman to handle, from coding and documentation to building and testing.
“Review jira ticket AI-5623, write the code, update the necessary files, and commit it. Mark the ticket as in testing”
“Document my repo and upload it to confluence. Please be sure to highlight the major modules and the key dependencies. Diagram out the system architecture in mermaid.”
“Update my build script, then build and run my code”
Write code: Generate high-quality, context-aware code to implement features, fix bugs, or even start entire projects from scratch.
Plan and execute: Understand your objectives, break them into smaller steps, and manage execution intelligently.
Research on demand: Use web browsing capabilities to gather information, research APIs, or solve challenges in real time.
Automate repetitive tasks: Handle the grunt work, from generating boilerplate code to managing Jira tickets and updating files.
Collaborate intelligently: Act as an AI pair programmer, offering proactive suggestions, reasoning about solutions, and scaling alongside your team’s needs.
Wingman uses a combination of large language models, planning algorithms, and integrations with your favorite tools to deliver results. When you give Wingman an instruction, it:
Understands: Processes the intent behind your high-level request.
Plans: Breaks down the task into actionable steps with a clear roadmap.
Researches: Fetches relevant information if needed, such as documentation or examples.
Executes: Writes, tests, and manages code or other assets to complete the task end-to-end.
Save time: Offload tedious and repetitive tasks, freeing you to focus on creative and strategic aspects of development.
Increase productivity: Tackle more in less time with an AI developer that handles projects autonomously. Many developers on our own team report being 50% to 300% more productive.
Boost quality: Generate clean, functional code with minimal errors thanks to Wingman’s intelligent reasoning.
Seamless collaboration: Work smarter with an AI assistant that integrates with your workflow and scales with your team.
Bito Wingman is available as part of our 10X Developer paid plan. Bito offers a 2-week free trial (no credit card required).
Explore the powerful capabilities of the Bito Wingman.
Wingman excels at understanding high-level instructions and breaking them into actionable steps. Unlike traditional assistants, it plans and executes tasks, making it an invaluable partner for complex projects.
The more detailed and specific your instructions, the better the results Wingman can deliver. You can also iterate with Wingman as the project evolves, refining its output step by step to meet your exact requirements.
Example use case: Provide a detailed prompt like:
Create an API for user authentication and integrate it into my backend. Please review my code thoroughly to suggest the key interfaces that should be created. Besides normal user registration and authentication, also include token management capabilities and risk-based scoring mechanisms to help alert us if a user might be trying to breach the system.
When tasked with challenges requiring additional information, Wingman conducts targeted research to gather relevant details and context for your project.
Example use case: If you're building a feature but need to confirm industry-standard practices, Wingman will gather up-to-date information and incorporate it into the solution.
Wingman can browse the web autonomously to find and retrieve useful data. This feature ensures that your projects benefit from the latest tools, libraries, or guidelines available.
Example use case: If you ask Wingman to implement a feature using a cutting-edge library, it will search for the library, understand its documentation, and integrate it into your code.
Wingman can generate high-quality code across a variety of programming languages. It understands your project’s requirements and provides context-aware solutions tailored to your tech stack.
Example use case: Ask Wingman to write a function in Python, JavaScript, or another language—it will deliver optimized and functional code.
Wingman integrates seamlessly with tools you already use, including:
Version control: Support for Git operations like git push, git commit, and git clone for GitHub and GitLab workflows.
Project management: Jira, Linear
Documentation: Confluence
File operations: Manage and update files directly.
This integration ensures that Wingman fits naturally into your existing workflow.
Example use case: Assign a Jira ticket to Wingman, and it will complete the associated coding task, update the ticket, and link it to the appropriate pull request.
Wingman is designed with flexibility in mind, allowing you to easily integrate it with tools that fit your workflow. Its adaptable architecture ensures it evolves with your development needs.
Example use case: If your team starts using a new project management tool, Wingman’s flexibility lets you integrate it into your process effortlessly.
Wingman is designed to assist you in completing tasks efficiently, working alongside you and checking in as needed. Once provided with clear instructions, it handles everything from planning to execution while keeping you in the loop.
Example use case: Wingman can help implement a feature, document it in Confluence, and create a pull request—keeping you informed every step of the way.
Example use case: Type "Generate a REST API for user management and write tests for it" in the chat, and Wingman will handle the implementation and testing.
Below is a list of developer tools available to Wingman. Each tool comes with unique parameters and capabilities, and Wingman is ready to assist you in configuring and using them effectively.
Jira: Issue tracking and project management tool (YES)
Linear: Issue tracking and project management tool (YES)
Confluence: Content management tool (YES)
Shell/CLI: System command execution tool (YES)
Web search: Web content retrieval and processing tool (YES)
File operations: File system manipulation tool (CRUD operations) (YES)
File search: File/directory search utility with pattern matching (YES)
Read chunk: File reading utility for handling large files (YES)
System info: System diagnostic tool for hardware/OS info (YES)
Location info: Geolocation service based on IP address (YES)
Weather info: Weather data service for locations (YES)
Code symbol search: Pattern-based code search utility (like grep) (YES)
Python code analyzer: Static code analysis tool for Python files (YES)
Learn how to use Bito Wingman.
Bito 10X Developer Plan:
Install or update the VS Code extension:
The Bito Wingman will download automatically after you install or update the Bito IDE extension.
Once the download is complete, Wingman will prepare itself, and you'll be ready to use it.
Bito Wingman can be used in the following ways:
Open Bito Wingman:
Open Bito in your IDE and click on "Launch Bito Wingman" button from the Bito panel.
Start a session:
In the Wingman window, type your instructions in the chatbox and submit them.
Set or change the working directory:
Default Behavior:
If no project is open in the IDE, Wingman defaults to your home directory.
Set working directory:
Click "Select specific directory" to choose a working directory for a new session.
Change working directory:
For existing sessions, click the edit icon next to the folder path at the top of the Wingman screen.
Enter the complete path to your desired directory.
Manage sessions:
All active Wingman sessions are listed in the left sidebar.
You can run multiple sessions simultaneously, and Wingman will manage them in the background.
This section explains how to run Bito Wingman from the command line. The prerequisites are the same as above, and the CLI binary is installed automatically with the Bito IDE extension.
Locating the executable:
After installation, the Bito Wingman binary is located at:
<User home directory>/.bitowingman/bin
The executable file is named with the version number and target platform. For example:
macOS: bitowingman-1.0.9-darwin-arm64
Windows: bitowingman-1.0.9-win32-x64.exe
CLI usage modes:
Bito Wingman supports two modes for interacting via the CLI:
Interactive mode (recommended): Provides a chat-like interface for real-time command execution.
On macOS: ~/.bitowingman/bin/bitowingman-1.0.9-darwin-arm64 -i
On Windows (PowerShell): & "$env:USERPROFILE\.bitowingman\bin\bitowingman-1.0.9-win32-x64.exe" -i
Note: After launching interactive mode, type help and press Enter to view the list of supported commands.
Non-interactive mode: Lets you execute a command directly and receive the results without entering a full session.
On macOS: ~/.bitowingman/bin/bitowingman-1.0.9-darwin-arm64 "run git diff and summarize the changes"
On Windows (PowerShell): & "$env:USERPROFILE\.bitowingman\bin\bitowingman-1.0.9-win32-x64.exe" "run git diff and summarize the changes"
Bito Wingman seamlessly integrates with various tools such as Jira, Linear, Confluence, and more. Click the "Tools" button in the top-right corner of the Wingman screen to view all supported tools.
To configure a tool, simply ask Wingman, "How do I configure [Tool Name]?"
Wingman will provide detailed step-by-step instructions. Follow the instructions to complete the configuration process.
If a tool requires an API token, Wingman will guide you through the process of obtaining it. Once you provide the token, Wingman will handle the configuration automatically.
Answers to popular questions about the AI Code Review Agent.
List of IP addresses to whitelist:
18.188.201.104
3.23.173.30
18.216.64.170
The agent response can come from any of these IPs.
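Since agent responses can originate from any of the three gateway IPs listed above, a receiving firewall or webhook handler can verify the source address against that allowlist. The sketch below is illustrative only; the function name and exact-match approach are assumptions, not part of Bito's API.

```python
import ipaddress

# The Bito gateway IPs listed above.
BITO_GATEWAY_IPS = {"18.188.201.104", "3.23.173.30", "18.216.64.170"}

def is_from_bito(source_ip: str) -> bool:
    """Return True if the request's source address is a known Bito gateway IP."""
    addr = ipaddress.ip_address(source_ip)  # raises ValueError on malformed input
    return str(addr) in BITO_GATEWAY_IPS
```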
You should set a longer expiration period for your GitHub Personal Access Token (Classic) or GitLab Personal Access Token. We recommend setting the expiration to at least one year. This prevents the token from expiring early and avoids disruptions in the AI Code Review Agent's functionality.
Additionally, we highly recommend updating the token before expiry to maintain seamless integration and code review processes.
For more details on how to create tokens, follow these guides:
This is an estimate, on a scale of 1-5 (inclusive), of the time and effort required to review this Pull Request (PR) by an experienced and knowledgeable developer. A score of 1 means a short and easy review, while a score of 5 means a long and hard review. It takes into account the size, complexity, quality, and the needed changes of the PR code diff. The score is produced by AI.
Bito requires certain permissions to analyze pull requests and provide AI-powered code reviews. It never stores your code and only accesses the necessary data to deliver review insights.
Bito requires:
Read access to code and metadata: To analyze PRs and suggest improvements
Read and write access to issues and pull requests: To post AI-generated review comments
Read access to organization members: To provide better review context
If you don’t have admin access, you’ll need your administrator to install Bito on your organization’s Git account. Once installed, you can use it for PR reviews on allowed repositories. GitHub also sends a notification to the organization owner requesting installation of the app.
No, Bito does not store or train models on your code. It only analyzes pull request data in real-time and provides suggestions directly within the PR.
Yes, after installation, you can select specific repositories instead of granting access to all. You can also manage repository access later through our web dashboard.
Once installed, you’ll be redirected to Bito, where you can:
Select repositories for AI-powered reviews
Customize review settings to fit your workflow
Open a pull request to start receiving AI-driven suggestions
Learn how to customize Bito’s view by switching from a side panel to a new tab or a separate window.
Let your friends see what you and Bito are creating together.
Whether you need to share AI-generated code suggestions, explanations, or any other chat insights, this feature allows you to create a public link that others can access. The link will remain active for 15 days and can be viewed by anyone with access to the URL, making collaboration and knowledge sharing seamless.
Additionally, you can quickly share your AI Chat session through a pre-written Tweet or an Email.
Let's see how it is done:
Open Bito in Visual Studio Code or any JetBrains IDE.
Start a conversation in Bito’s AI Chat user interface.
Locate the share button on the top right of the Bito extension side-panel.
Click the share button to open a menu with options, including X (Twitter), Email, and Link.
Share on X (Twitter):
Click on X (Twitter) from the menu, and a dialogue window will appear, asking whether you want to open the external site.
Simply click "Open" to proceed.
You will be redirected to the X (Twitter) website, with a pre-written tweet containing a link to your Chat Session ready to be published.
Click the "Post" button to send the tweet.
Share Through Email:
Click on Email from the menu, and you will be redirected to your email application.
Select your email account if needed.
The email will be pre-filled with all the necessary information, including the link to your Chat Session.
Add the receiver(s) of this email using the "To" input field.
Click the "Send" button to send the email.
Share the Link:
Click on Link from the menu.
A confirmation popup will appear. Click Share session to generate a unique URL for your chat session, which will automatically be copied to your clipboard for easy sharing.
Feel free to share this link with anyone you'd like to.
Bito automatically saves your chat session history. The session history is stored locally on your computer. You can return to any chat session and continue the AI conversation from where you left off. Bito will automatically maintain and restore the memory of the loaded chat session.
You can "Delete" any saved chat session or share a permalink to the session with your coworkers.
Here is the video overview of accessing and managing the session history.
Try Advanced AI Coding Assistant for Free
After you install the Bito extension, click the "Sign up or Sign-in" button on the Bito sign-up flow screen.
In the next screen, enter your work email address, and verify through a six-digit code sent to your email address.
Once your email is verified, you will get an option to create your profile. Enter your full name and set the AI output language. Bito uses this setting to generate output in that language regardless of the language of your prompt.
Bito UI in Visual Studio Code and JetBrains IDEs is entirely keyboard accessible. You can navigate Bito UI with standard keyboard actions such as TAB, SHIFT+TAB, ENTER, and ESC keys. Additionally, you can use the following shortcuts for quick operations.
The following video demonstrates important keyboard shortcuts.
The following keyboard shortcuts work after the Q/A block is selected.
Bito has carefully selected its keyboard shortcuts after thorough testing. However, a key combination Bito selected may still conflict with shortcuts from your IDE or other extensions. You can change Bito's default shortcut keys to avoid such conflicts.
To open the Keyboard Shortcuts editor in VS Code, navigate to File > Preferences > Keyboard Shortcuts (Code > Preferences > Keyboard Shortcuts on macOS).
Search for default available commands, keybindings, or Bito extension-specific commands in VSCode keyboard shortcut editor.
To find a conflicting key binding, search for the key and take the necessary action, e.g., Remove or Reset.
To add a new key binding or remap an existing Bito extension command, provide the necessary information (Command ID).
File > Settings > Keymaps > Configure keymaps
Bito extension shortcuts can be overwritten via File > Settings > Keymaps > Configure keymaps, selecting the action you want to reassign. If there are conflicts, the new binding overrides the Bito shortcut.
Bito extension keyboard shortcuts can be changed from the IntelliJ settings: File > Settings > Keymaps > Configure keymaps > Plugins > Bito, then right-click the action you want to change.
Bito extension keyboard shortcuts can be deleted from the IntelliJ settings: File > Settings > Keymaps > Configure keymaps > Plugins > Bito, then right-click the action you want to delete.
Work on your code with AI that knows your code!
AI that Understands Your Code
Bito has created the ability for our AI to understand your codebase, which produces dramatically better results that are personalized to you. This can help you write code, refactor code, explain code, debug, and generate test cases – all with the benefits of AI knowing your entire code base.
For now, this feature is only available on our 10X Developer Plan, which costs $15 per user per month. We plan to release it for our Free Plan soon, but there it will be limited to repos of 10 MB indexable size.
The major issue with these AI assistants, though, is that they have no idea about your entire codebase. Some tools take context from currently opened files in your IDE, while others enable you to manually enter code snippets in a chat-like interface and then ask questions about them.
But with Bito’s AI that understands your entire repository, this is a whole new capability. For example, what if you could ask questions like:
how can I add a button to mute and unmute the song to my code in my music player? By default, set this button to unmute. Also, use the same design as existing buttons in UI.
In my code list all the files and code changes needed to add column desc in table raw_data in dailyReport DB.
In my code suggest code refactoring for api.py and mention all other files that needs to be updated accordingly
Please write the frontend and backend code to take a user’s credentials, and authenticate the user. Use the authentication service in my code
This will definitely improve the way you build software.
Bito indexes your code locally using AI
The index is stored locally on your system to provide better performance while maintaining the security/privacy of your private code.
If you ask a general question (one not related to your codebase), Bito sends your request directly to our LLM without first looking for relevant local context.
However, if you want to ask a question about your code no matter what, then you can use specific keywords such as "my code", "my repo", "my project", "my workspace", etc., in your question.
Once Bito sees any input containing these keywords, it will use the index to identify relevant portions of code or content in your folder and use it for processing your question, query, or task.
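The keyword-triggered routing described above can be sketched roughly as follows. This is an illustrative assumption about the decision logic, not Bito's actual code; the function name and return labels are invented for the example.

```python
# Trigger keywords listed in the documentation above.
CODEBASE_KEYWORDS = ("my code", "my repo", "my project", "my workspace")

def route_prompt(prompt: str) -> str:
    """Decide whether a prompt should consult the local codebase index first."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in CODEBASE_KEYWORDS):
        # Retrieve relevant files from the local index, then query the LLM.
        return "index-then-llm"
    # General question: send directly to the LLM.
    return "llm-only"
```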
What Types of Questions Can be Asked?
You can try asking any question you may have in mind regarding your codebase. In most cases, Bito will give you an accurate answer. Bito uses AI to determine if you are asking about something in your codebase.
However, if you want to ask a question about your code no matter what, then you can use our pre-defined keywords such as "my code", "my repo", "my project", "my workspace", etc., in your question.
What a particular code file does
In my code what does code in sendgrid/sendemail.sh do?
What a particular function in my code does
In my repo explain what function message_tokens do
In my project rewrite the code of signup.php file in nodejs
In my workspace suggest code refactoring for api.py and mention all other files that need to be updated accordingly
In my code find runtime error possibilities in script.js
Find logical errors in scraper.py in my code
In my code detect code smells in /app/cart.php and give solution
Generate documentation for search.ts in my workspace in markdown format
In my code write unit tests for index.php
In my code generate test code for code coverage of cache.c
summarize recent code changes in my code
Any function to compute tokens in my project?
Any code or script to send emails in my workspace?
In my repo list all the line numbers where $alexa array is used in index.php.
In my code list all the files and code changes needed to add column desc in table raw_data in dailyReport DB.
The IDE customization settings are accessible through the new toolbar dropdown menu titled "Extension Settings".
In Visual Studio Code and JetBrains IDEs, you can choose between a light or dark theme for the Bito panel to match your coding environment preference. For VS Code users, Bito also offers an adaptive theme mode in which the Bito panel and font colors automatically adjust based on your selected VS Code theme, creating a seamless visual experience.
You can set the desired theme through the Theme dropdown.
Theme adapted from “Noctis Lux”:
Theme adapted from “Solarized Light”:
Theme adapted from “Tomorrow Night Blue”:
Theme adapted from “barn-cat”:
Take control of your code readability! Within the Bito extension settings, you can now adjust the font size for a comfortable viewing experience.
You can set the desired font size through the Font Size text field. However, if you check the Font Size (Match with IDE Font) checkbox, it will override the set font size with the Editor font size.
How to Update Bito Plugin on VS Code and JetBrains IDEs
Keeping your Bito plugin up to date ensures you have access to the latest features and improvements. In this article, we will guide you through the steps to update the Bito plugin on both VS Code and JetBrains IDEs. Let's dive in!
Updating Bito Plugin on VS Code
Open your VS Code IDE
Navigate to the Extensions view by clicking on the square icon in the left sidebar
In the search bar, type "Bito" to locate the Bito plugin
Once you locate the Bito plugin, click on the update button to initiate the update
Pro Tip 💡: Enable Auto-update for Bito Plugin on VS Code (as shown in the video)
Updating Bito Plugin on JetBrains IDEs
Open your JetBrains IDE (e.g., IntelliJ IDEA, PyCharm, etc.)
Go to Settings by clicking on "File" in the menu bar (Windows/Linux) or by clicking on "IntelliJ IDEA" in the menu bar (macOS).
In the Settings window, navigate to the "Plugins" section
Switch to the "Installed" tab to view the list of installed plugins
Locate the Bito plugin in the list and click on the update button to initiate the update
Bito doesn't read or store your code. Nor do we use your code for AI model training.
Security is top of mind at Bito, especially when it comes to your code. A fundamental approach we have taken is to allow you to decide where you want to store your code, either locally on your machine, in your cloud, or on Bito’s cloud (coming soon). We do not store any code, code snippets, indexes or embedding vectors on Bito’s servers unless you expressly allow that. Importantly, our AI partners do not store any of this information.
All requests are transmitted over HTTPS and are fully encrypted.
None of your code or AI requests are used for AI model training. None of your code or AI requests are stored by our AI partners. Our AI model partners are OpenAI, Anthropic, and Google. Here are their policies where they state that they do not store or train on data related to API access (we access all AI models via APIs):
The AI requests including code snippets you send to Bito are sent to Bito servers for processing so that we can respond with an answer.
Interactions with Bito AI are auto-moderated and managed for toxicity and harmful inputs and outputs.
Any response generated by the Bito IDE AI Assistant is stored locally on your machine to show the history in Bito UI. You can clear the history anytime you want from the Bito UI.
Bito is SOC 2 Type II compliant. This certification reinforces our commitment to safeguarding user data by adhering to strict security, availability, and confidentiality standards. SOC 2 Type II compliance is an independent, rigorous audit that evaluates how well an organization implements and follows these security practices over time.
Our SOC 2 Type II compliance means:
Enhanced Data Security: We consistently implement robust controls to protect your data from unauthorized access and ensure it remains secure.
Operational Excellence: Our processes are designed to maintain high availability and reliability, ensuring uninterrupted service.
Regular Monitoring and Testing: We conduct continuous monitoring and regular internal reviews to uphold the highest security standards.
This certification is an assurance that Bito operates with a high level of trust and transparency, providing you with a secure environment for your code and data.
When you use the self-hosted/Docker version that you have set up in your VPC, Bito checks out the diff and clones the repo inside the Docker image for static analysis and to determine the relevant code context for the review. This context and the diff are passed to Bito's system. The request is then sent to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito. No code is retained by the LLM. Bito then receives the response, processes it (such as formatting), and returns it to your self-hosted Docker instance, which posts it to your Git provider. The original query is not retained, nor are the results. After each code review is completed, the diff and the checked-out repo are deleted.
If you use the Bito cloud to run the AI Code Review Agent, it runs similarly to the self-hosted version. Bito ephemerally checks out the diff and clones the repo for static analysis and to determine the relevant code context for the review. This context and the diff are passed to Bito's system. The request is then sent by Bito to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito. No code is retained by the LLM. Bito then receives the response, processes it (such as formatting), and posts it to your Git provider. The original query is not retained, nor are the results. After each code review is completed, the diff and the checked-out repo are deleted.
When we receive an AI request from a user, it is processed by Bito's system (such as adding relevant context and determining the Large Language Model (LLM) to use). However, the original query is not retained. The request is then sent to a third-party LLM (e.g., OpenAI, Google Cloud, etc.). The LLM processes the prompt and returns the response to Bito. Bito then receives the response, processes it (such as formatting), and returns it to the user’s machine.
For enterprises, we have the ability to connect to your own private LLM accounts, including but not limited to OpenAI, Google Cloud, Anthropic, or third-party services such as AWS Bedrock, Azure OpenAI. This way all data goes through your own accounts or Virtual Private Cloud (VPC), ensuring enhanced control and security.
Our data retention policy is carefully designed to comply with legal standards and to respect our customers' privacy concerns. The policy is categorized into four levels of data:
Relationship and Usage Meta Data: This includes all data related to the customer's interaction with Bito, such as address, billing amounts, user account data (name and email), and usage metrics (number of queries made, time of day, length of query, etc.). This category of data is retained indefinitely for ongoing service improvement and customer support.
Bito Business Data: Includes customer-created templates and settings. This data is terminated 90 days after the end of the business relationship with Bito.
Confidential Customer Business Data: This includes code, code artifacts, and other organization-owned data such as Jira, Confluence, etc. This data is either stored on-prem/locally on the customer’s machines, or, if in the cloud, is terminated at the end of the business relationship with Bito.
AI Requests: Data in an AI request to Bito’s AI system. AI requests are neither retained nor viewed by Bito. We ensure the confidentiality of your AI queries; Bito and our LLM partners do not store your code, and none of your data is used for model training. All requests are transmitted via HTTPS and are fully encrypted.
Bito uses the following third-party services: Amazon AWS, Anthropic, Clearbit, Github, Google Analytics, Google Cloud, HelpScout, Hubspot, Microsoft Azure, Mixpanel, OpenAI, SendGrid, SiteGround, and Slack for infrastructure, support, and functional capabilities.
Bito follows industry-standard practices for protecting your e-mail and other personal details. Our password-less login process, which requires a one-time passcode sent to your e-mail for every login, ensures the security of your account.
Specialized commands to perform detailed analyses on specific aspects of your code. You can provide comma-separated values to perform multiple types of code analysis simultaneously.
By default, the master and main branches are excluded.
By default, these files are excluded: *.xml, *.json, *.properties, .gitignore, *.yml, *.md
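The default exclusion patterns behave like ordinary glob patterns. The check below is an illustrative sketch using plain glob matching; Bito's actual matcher may handle paths differently.

```python
import fnmatch

# Default exclusion patterns from the documentation above.
DEFAULT_EXCLUDES = ["*.xml", "*.json", "*.properties", ".gitignore", "*.yml", "*.md"]

def is_excluded(filename: str) -> bool:
    """Return True if the file matches any default exclusion pattern."""
    return any(fnmatch.fnmatch(filename, pat) for pat in DEFAULT_EXCLUDES)
```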
A binary setting that enables/disables automated review of pull requests (PRs) based on their draft status. The default value is True, which skips automated review of draft PRs.
Currently available only in and in private beta. Want early access? Contact us at
Wingman can handle everything from code generation to managing Jira tickets and updating files. It deeply understands your code, excels at reasoning and planning to handle complex tasks, and has access to apps such as file operations, Jira, Linear, Confluence, GitHub, GitLab, and .
Integrate seamlessly: Work across popular tools like Jira, Linear, Confluence, and to fit right into your workflow.
For more information about costs, please visit our .
Have a specific tool in mind? Drop us a note at to request adding support for it.
Communicate with Wingman in any language through a chat interface. Describe what you need, and Wingman will take care of the rest. Additionally, you can set your preferred AI output language on the . For example, if you set Spanish as your preferred language, Wingman will respond to you in Spanish.
Bito Wingman seamlessly integrates with various tools such as Jira, Linear, Confluence, and more.
To get started with Bito Wingman, ensure the following requirements are met:
A free 14-day trial of the 10X Developer Plan is available, no credit card required. to experience all premium features, including Bito Wingman.
Ensure you have Bito v1.4.7 or later installed in your editor.
To ensure the AI Code Review Agent operates smoothly with your GitHub (Self-Managed) or GitLab (Self-Managed) instance, please whitelist all of Bito's gateway IP addresses in your firewall to allow incoming traffic from Bito. This will enable Bito to access your self-hosted repository.
GitHub Personal Access Token (Classic):
GitLab Personal Access Token:
Contact for any assistance.
Easily share insights from any session by creating a unique shareable link directly from the Bito extension in VS Code or JetBrains IDEs.
You would need to create an account with your email to use Bito. You can sign up for Bito directly from the IDE extension or the Bito web interface at .
Now, let's learn to start using Bito.
JetBrains Document:
Bito AI automatically figures out if you're asking about something in your code. If it's confident, it grabs the relevant parts of your code from our local index and feeds them to the LLM for accurate answers. But if it's unsure, Bito will ask you to confirm before proceeding.
Additional keywords for various languages are listed on the page.
Recent breakthroughs in AI and large language models have helped make many AI coding assistant tools available, including Bito, to help you develop software faster.
When you open a project in Visual Studio Code or JetBrains IDEs, Bito lets you enable indexing of the code files in that project’s folder. This indexing mechanism enables Bito to understand your entire codebase and answer any questions regarding it.
Once indexing is complete, you can ask any question in the Bito chatbox. Bito uses AI to determine if you are asking about something in your codebase. If Bito is confident, it grabs the relevant parts of your code from our local index and feeds them to the LLM for accurate answers. But if it's unsure, Bito will ask you to confirm before proceeding.
The complete list of these keywords is given on our page.
As usual, security is top of mind at Bito, especially when it comes to your code. A fundamental approach we have taken is to keep all code on your machine, and not store any code, code snippets, indexes, or embedding vectors on Bito’s servers or with our API partners. All code remains on your machine; Bito does not store it. In addition, none of your code is used for AI model training.
Learn more about .
The complete list of these keywords is given on our page.
This document explains some of Bito's privacy and security practices. Our outlines our various accreditations (SOC 2 Type II) and our various security policies. You can read our full Privacy Policy at .
OpenAI:
Anthropic:
Google Cloud: (5th paragraph)
For any further questions regarding our SOC 2 Type II compliance or to request a copy of the audit report, please reach out to
In line with Bito's commitment to transparency and adherence to data privacy standards, our comprehensive data and business privacy policy is integrated into our practices. Our complete Terms of Use, including the Privacy Policy, are available at , with our principal licensing information detailed at .
If you have any questions about our security and privacy, please email
Get started in VS Code
Get a demo
Getting Started
Account & Settings
Get Support & More
Billing and Plans
Privacy & Security
What's New
AI Code Review Agent
AI Chat in Bito
AI that Understands Your Code
AI Code Completions
Standard Templates
Custom Prompt Templates
Diff View
AI that Understands Your Code
Bito indexes your code locally using AI
Keywords to invoke AI that understands your code
What type of questions can be asked?
Sneak peek into the inner workings of Bito
AI that understands your code in VS Code
AI that understands your code in JetBrains IDEs (e.g., PyCharm)
Exclude unnecessary files and folders from repo to index faster!
Answers to popular questions
Supporting Over 35 Programming Languages Such as Python, SQL, C++, Go, JavaScript, and More
Bito can suggest code for these programming languages:
C
C++
C#
CSS
Clojure
Dart
Elixir
Erlang
Fortran
Go
GoogleSQL
Groovy
Haskell
HTML
Java
JavaScript
JavaServer Pages
Kotlin
Lean (proof assistant)
Lua
Objective-C
OCaml
Perl
PHP
Python
R
Ruby
Rust
Scala
Shell script
Solidity
SQL
Swift
TypeScript
XML
Verilog
YAML
Keywords to invoke AI that understands your code
Here is the list of keywords in different languages to ask questions regarding your entire codebase. Use any of these keywords in your prompts inside Bito chatbox.
my code
my repo
my project
my workspace
我的代码
我的仓库
我的代码库
我的项目
我的文件夹
我的程式碼
我的倉庫
我的項目
我的工作區
Mi código
Mi repo
Mi proyecto
Mi espacio de trabajo
私のコード
私のリポ
私のプロジェクト
私のワークスペース
Meu código
Meu repo
Meu projeto
Meu espaço de trabalho
Mój obszar roboczy
moje miejsce pracy
mój obszar roboczy
moj kod
mój kod
moim kodzie
moje repo
moje repozytorium
moim repo
moj projekt
mój projekt
moim projekcie
SHIFT + CTRL + O: Open Bito Panel. Toggles the Bito panel on and off in JetBrains IDEs; in Visual Studio Code, the shortcut opens the Bito panel if it is not already open.
SPACEBAR (or start typing your question directly): Puts the cursor in the chatbox when the Bito panel is in focus.
ENTER: Execute the chat command.
CTRL + ENTER or SHIFT + ENTER: Add a new line in the chatbox.
CTRL + M: Modify the most recently executed prompt. This copies the last prompt into the chatbox for any edits.
Expand and collapse the "Shortcut" panel.
Navigate between the Question/Answer blocks.
Note: You must select the Q/A container with TAB/SHIFT+TAB.
CTRL + C: Copy the answer to the clipboard.
CTRL + I: Insert the answer into the code editor.
CTRL + D: Toggle the diff view (when Diff View is applicable).
Expand/Collapse the code block in the question.
CTRL + L: Regenerate the answer.
CTRL + U: Modify the prompt for the selected Q&A. Bito copies the prompt into the chatbox so you can modify it as needed.
AI that understands your codebase in JetBrains IDEs (e.g., PyCharm)
Open your project’s folder using a JetBrains IDE. For this guide, we are using PyCharm.
When you open a new project, a popup box will appear through which Bito asks you whether you want to enable indexing of this project or not. Click on the “Enable” button to start the indexing process. You can also skip this step by clicking the “Maybe later” button. You can always index the project later if you want.
In the bottom-left of Bito plug-in pane, hover your mouse cursor over this icon. You can also enable indexing from here by clicking on the “Click to enable it” text.
Another option is to open the "Manage Repos" tab by clicking the laptop icon in the top-right corner of the Bito plugin pane.
Let’s start the indexing process by using any of the above-mentioned methods.
The status will now be updated to “Indexing in progress...” instead of “Not Indexed”. You will also see the real-time indexing progress for the current folder, based on the number of files indexed.
Once the indexing is complete, the status will be updated from “Indexing in progress...” to “Indexed”, and will look like this.
Now you can ask any question regarding your codebase by adding the keyword "my code" to your AI requests in the Bito chatbox. Bito is ready to answer them!
If you ever want to delete an index, click the three-dots button and then click the "Delete" button.
A warning popup will open at the bottom of Bito's plugin pane. Click the "Delete" button to delete the project's index from your system, or click "Cancel" to go back.
For this tutorial, we've created a clone of the popular game "Wordle" using Python.
Here’s how it looks:
We have stored the list of words in files inside the "word_files" folder. At the start of the game, a word that the player has to guess is selected randomly from these files.
Let's ask Bito to "understand my code and briefly write about what this game is all about and how to play it".
Bito correctly described the game by just looking at its source code.
Our game (PyWordle) is working fine, but there is no countdown timer to make it a bit more challenging.
So, let’s ask Bito to add this feature.
Here’s the question I used:
suggest code for main.py "class PyWordle" to add a count down timer for this game in my code. I'm using "self" in functions and variable names, so suggest the code accordingly. The player will lose the game if the time runs out. Set the time limit to 2 minutes (format like 02:00). The timer will start when the game starts. Also reset the timer when the game restarts, such as when player closes the "you won / you lost" popup. Display this real-time count down timer on the right-side of where the player score is displayed. Use the similar design as the player score UI. Also tell me exactly where to add your code. Make sure all of this functionality is working.
Bito suggested code that looks good, but it was a bit incomplete and needed some improvements. So, I asked Bito a series of follow-up questions (one by one) to fix the remaining issues.
After adding the code suggested by Bito, here's how the PyWordle game looks now. As you can see, the countdown timer has been added exactly where we wanted it.
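Bito's actual suggestion isn't reproduced here, but the core of such a feature is a small piece of countdown state that the game loop polls. The sketch below is an illustrative assumption (the CountdownTimer class and injectable clock are ours, not Bito's output); the real game would call label() each frame to draw the MM:SS text next to the score and end the game when expired() becomes true:

```python
import time


class CountdownTimer:
    """Minimal countdown used to end the game when time runs out."""

    def __init__(self, limit_seconds=120, clock=time.monotonic):
        self.limit = limit_seconds
        self.clock = clock  # injectable clock makes the logic testable
        self.started_at = None

    def start(self):
        """Start (or restart) the countdown, e.g. when the game restarts."""
        self.started_at = self.clock()

    def remaining(self):
        """Seconds left, never below zero."""
        elapsed = self.clock() - self.started_at
        return max(0, self.limit - elapsed)

    def expired(self):
        """True when the player should lose the game."""
        return self.remaining() <= 0

    def label(self):
        """Format the remaining time as MM:SS for the score-bar UI."""
        secs = int(self.remaining())
        return f"{secs // 60:02d}:{secs % 60:02d}"
```

With a 2-minute limit, label() reads "02:00" at the start; closing the "you won / you lost" popup would simply call start() again to reset the timer.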
AI that understands your codebase in VS Code
Open your project’s folder using Visual Studio Code.
When you open a new project, a popup appears in which Bito asks whether you want to enable indexing for this project. Click the "Enable" button to start the indexing process, or click "Maybe later" to skip this step; you can always index the project later.
In the bottom-left of the Bito plugin pane, hover your mouse cursor over this icon. You can also enable indexing from here by clicking the "Click to enable it" text.
Another option is to open the "Manage Repos" tab by clicking the laptop icon in the top-right corner of the Bito plugin pane.
Let’s start the indexing process by using any of the above-mentioned methods.
The status will now be updated to “Indexing in progress...” instead of “Not Indexed”. You will also see the real-time indexing progress for the current folder, based on the number of files indexed.
Once the indexing is complete, the status will be updated from “Indexing in progress...” to “Indexed”, and will look like this.
Now you can ask any question regarding your codebase by adding the keyword "my code" to your AI requests in the Bito chatbox. Bito is ready to answer them!
If you ever want to delete an index, click the three-dots button and then click the "Delete" button.
A warning popup will open at the bottom of Bito's plugin pane. Click the "Delete" button to delete the project's index from your system, or click "Cancel" to go back.
For this tutorial, we've created a simple music player using JavaScript.
Here’s how it looks:
We have added a bunch of songs to this project. Song details such as the name, artist, image, and music file name are stored in a file called music-list.js.
Let's ask Bito to "list names of all song artists used in my code".
As you can see, Bito gave the correct answer by utilizing its understanding of our repository.
Similarly, we can ask any coding-related question, such as finding bugs, improving code, or adding new features.
Our music player is working fine, but we don’t have any option to mute/unmute the song.
Let’s ask Bito to add this feature.
Here’s the question I used:
In my code how can I add a button to mute and unmute the song? By default, set this button to unmute. Also, use the same design as existing buttons in UI.
After adding the code suggested by Bito, here’s how the music player looks when it starts (unmuted).
And when muted:
Code Completions from AI that Understands Your Code
Sneak Peek into the Inner Workings of Bito
Then when you give it a function name or ask it a question, that query is converted into a vector and is compared to other vectors nearby. This returns the relevant search results. So, it's a way to perform search not on keywords, but on meaning. Vector Databases are able to do this kind of search very quickly.
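As a rough sketch of the idea (not Bito's actual implementation), meaning-based search compares a query vector against stored snippet vectors, typically by cosine similarity; the three-dimensional embeddings below are made up for illustration, whereas real embedding models produce vectors with hundreds of dimensions stored in a vector database:

```python
import math


def cosine(a, b):
    """Cosine similarity: 1.0 = same direction (same meaning), ~0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


# Toy embeddings for three code snippets (names and vectors are invented).
snippets = {
    "parse_config": [0.9, 0.1, 0.0],
    "render_chart": [0.1, 0.8, 0.3],
    "load_settings": [0.8, 0.2, 0.1],
}

# Hypothetical embedding of the query "read configuration file".
query = [0.85, 0.15, 0.05]

# Rank snippets by semantic closeness to the query, best match first.
ranked = sorted(snippets, key=lambda name: cosine(query, snippets[name]), reverse=True)
```

Here ranked[0] is "parse_config": the semantically closest snippet wins even without exact keyword overlap, which is what makes vector search a search on meaning rather than on keywords.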
Bito also uses an Agent Selection Framework that acts like an autonomous entity capable of perceiving its environment, making decisions, and taking actions to achieve certain goals. It figures out whether it needs to perform an embeddings comparison on your codebase, take an action against Jira, or do something else.
This is what makes us stand out from other AI tools like ChatGPT, GitHub Copilot, etc. that do not understand your entire codebase.
Get in-depth insights into your code review process.
Bito AI chat is the most versatile and flexible way to use AI assistance. You can type any technical question to generate the best possible response. Check out these Bito AI Examples to understand all you can do with Bito.
To use AI Chat, type the question in the chat box and press 'Enter' to send. You can add a new line in the question with 'SHIFT + ENTER'.
Bito starts streaming answers within a few seconds, depending on the size and complexity of the prompt.
Bito makes it super easy to use the answer generated by AI, and take a number of actions.
Copy the answer to the clipboard.
AI may not give the best answer on the first attempt every time. You can ask Bito AI to regenerate the answer by clicking the "Regenerate" button next to the answer.
If the AI answer includes a code snippet, Bito automatically identifies and displays code in a separate block. This makes it easy to copy the code to the clipboard or insert it in the code editor.
Vote the response "Up" or "Down". This feedback helps Bito improve prompt handling.
Many of these commands can be executed with keyboard shortcuts documented here: Keyboard Shortcuts
Customize Bito’s AI Code Review Agent to enforce your coding practices.
We support two ways to customize AI Code Review Agent’s suggestions:
AI Code Review Agent refines its suggestions based on your feedback. When you provide negative feedback on Bito-reported issues in pull requests, the Agent automatically adapts by creating custom code review rules to prevent similar suggestions in the future.
Depending on your Git platform, you can provide negative feedback in the following ways:
GitHub: Select the checkbox in the feedback question at the end of each Bito suggestion, or leave a negative comment explaining the issue with the suggestion.
GitLab: React with negative emojis (e.g., thumbs down) or leave a negative comment explaining the issue with the suggestion.
Bitbucket: Provide manual review feedback by leaving a negative comment explaining the issue with the suggestion.
These rules are applied at the repository level for the specific programming language.
By default, newly generated custom code review rules are disabled. Once negative feedback for a specific rule reaches a threshold of 3, the rule is automatically enabled. You can also manually enable or disable these rules at any time using the toggle button in the Status column.
After you provide negative feedback, Bito generates a new code review rule in your workspace. The next time the AI Code Review Agent reviews your pull requests, it will automatically filter out the unwanted suggestions.
We understand that different development teams have unique needs. To accommodate these needs, we offer the ability to implement custom code review rules in Bito’s AI Code Review Agent.
Here’s how the process works:
Implementation: Our team will create a custom prompt based on your guidelines and integrate it into the AI Code Review Agent for your Bito workspace.
By enabling custom code review rules, Bito helps your team maintain consistency and improve code quality. We look forward to partnering with you to enhance your code review experience!
To ensure we provide the best service and support, we have a few eligibility criteria for implementing custom code review guidelines:
Team size: This feature is available for teams with a minimum of 10 developers.
We can implement a wide range of custom code review rules, including:
Style and formatting guidelines
Security best practices
Performance optimization checks
Code complexity and maintainability standards
etc.
Typically, it takes 2-4 days from the time Bito receives your custom code review guidelines.
Seamlessly integrate automated code reviews into your GitHub Actions workflows.
Enable GitHub Actions:
Open your repository and click on the "Settings" tab.
Select "Actions" from the left sidebar, then click on "General".
Under "Actions permissions", choose "Allow all actions and reusable workflows" and click "Save".
Set Up Environment Variables:
Still in the "Settings" tab, navigate to "Secrets and variables" > "Actions" from the left sidebar.
Configure the following under the "Secrets" tab:
For each secret, click the New repository secret button, then enter the exact name and value of the secret in the form. Finally, click Add secret to save it.
Name: BITO_ACCESS_KEY
Name: GIT_ACCESS_TOKEN
Configure the following under the "Variables" tab:
For each variable, click the New repository variable button, then enter the exact name and value of the variable in the form. Finally, click Add variable to save it.
Name: STATIC_ANALYSIS_TOOL
Value: Enter the following text string: fb_infer,astral_ruff,mypy
Name: GIT_DOMAIN
Value: Enter the domain name of your Enterprise or self-hosted GitHub deployment, or skip this if you are not using one.
Example of domain name: https://your.company.git.com
Name: EXCLUDE_BRANCHES
Value: Specify branches to exclude from the review by name or valid glob/regex patterns. The agent will skip the pull request review if the source or target branch matches the exclusion list.
Name: EXCLUDE_FILES
Value: Specify files/folders to exclude from the review by name or glob/regex pattern. The agent will skip files/folders that match the exclusion list.
Name: EXCLUDE_DRAFT_PR
Value: Enter True to disable automated review for draft pull requests, or False to enable it.
Create the Workflow Directory:
In your repository, create a new directory path: .github/workflows
Add the Workflow File:
In your repository, upload this test_cra.yml file inside the .github/workflows directory, either in the source branch of each PR or in a branch (e.g., main) from which all PR source branches will be created.
Commit your changes.
Update test_cra.yml as below:
Change line from:
runs-on: ubuntu-latest
to:
runs-on: <label of the self-hosted GitHub Runner> e.g. self-hosted, linux etc.
Update test_cra.yml as below:
Replace all lines containing the text below:
uses: gitbito/codereviewagent@main
with:
uses: myorg/gitbito-bitocodereview@main
Commit and push your changes to test_cra.yml.
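The actual workflow file is the test_cra.yml that Bito provides; the sketch below is only a hypothetical illustration of how the pieces referenced in the steps above fit together (the trigger events, runs-on: ubuntu-latest, uses: gitbito/codereviewagent@main, and the BITO_ACCESS_KEY/GIT_ACCESS_TOKEN secrets). The with: parameter names are illustrative assumptions, not the file's real keys:

```yaml
# Hypothetical sketch only -- use the test_cra.yml file provided by Bito.
name: Bito AI Code Review
on:
  pull_request:            # automated review when a PR is created
  issue_comment:           # manual trigger via a "/review" comment
    types: [created]

jobs:
  code-review:
    runs-on: ubuntu-latest # replace with your self-hosted runner label if needed
    steps:
      - name: Run Bito AI Code Review Agent
        uses: gitbito/codereviewagent@main
        with:              # parameter names below are illustrative assumptions
          bito_access_key: ${{ secrets.BITO_ACCESS_KEY }}
          git_access_token: ${{ secrets.GIT_ACCESS_TOKEN }}
          static_analysis_tool: ${{ vars.STATIC_ANALYSIS_TOOL }}
```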
After configuring the GitHub Actions, you can invoke the AI Code Review Agent in the following ways:
Automated Code Review: The agent will automatically review new pull requests as soon as they are created and post the review feedback as a comment within your PR.
Manually Trigger Code Review: To start the process, simply type /review in the comment box on the pull request and submit it. This command prompts the agent to review the pull request and post its feedback directly in the PR as a comment.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
The webhooks service is best suited for continuous, automated reviews.
A machine with the following minimum specifications is recommended for Docker image deployment and for obtaining optimal performance of the AI Code Review Agent.
CPU Cores: 4
RAM: 8 GB
Hard Disk Drive: 80 GB
Linux
You will need:
Bash (minimum version 4.x)
For Debian and Ubuntu systems
sudo apt-get install bash
For CentOS and other RPM-based systems
sudo yum install bash
Docker (minimum version 20.x)
macOS
You will need:
Bash (minimum version 4.x)
brew install bash
Docker (minimum version 20.x)
Windows
You will need:
PowerShell (minimum version 5.x)
Note: In PowerShell version 7.x, run the Set-ExecutionPolicy Unrestricted command. It allows the execution of scripts without any constraints, which is essential for running scripts that are otherwise blocked by default security settings.
Docker (minimum version 20.x)
Server Requirement: Ensure you have a server with a domain name or IP address.
Start Docker: Initialize Docker on your server.
Clone the Repository:
git clone https://github.com/gitbito/CodeReviewAgent.git
Open the repository folder:
Navigate to the repository folder and then to the “cra-scripts” subfolder.
Note the full path to the “cra-scripts” folder for later use.
Open Command Line:
Use Bash for Linux and macOS.
Use PowerShell for Windows.
Set Directory:
Change the current directory in Bash/PowerShell to the “cra-scripts” folder.
Example command: cd [Path to cra-scripts folder]
Note: Adjust the path based on where you cloned the repository on your system.
Configure Properties:
Set mandatory properties:
mode = server
bito_cli.bito.access_key
git.access_token
Optional properties (can be skipped or set as needed):
git.provider
git.domain
code_feedback
static_analysis
dependency_check
dependency_check.snyk_auth_token
server_port
review_scope
exclude_branches
exclude_files
exclude_draft_pr
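To make the property names above concrete, here is a hypothetical bito-cra.properties sketch; the placeholder values are ours, and server_port=10051 simply mirrors the example TCP port mentioned later in this guide. Consult the properties file shipped with the cra-scripts folder for the authoritative template:

```
# Mandatory properties
mode=server
bito_cli.bito.access_key=<YOUR_BITO_ACCESS_KEY>
git.access_token=<YOUR_GIT_ACCESS_TOKEN>

# Optional properties (set as needed)
server_port=10051
exclude_draft_pr=True
```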
Run the Agent:
On Linux/macOS in Bash:
Run ./bito-cra.sh service start bito-cra.properties
Note: It will provide the Git Webhook secret in encrypted format.
On Windows in PowerShell:
Install OpenSSL
Run ./bito-cra.ps1 service start bito-cra.properties
Note: It will provide the Git Webhook secret in encrypted format.
Provide Missing Property Values: The script may prompt for values of mandatory/optional properties if they are not preconfigured.
Copy Webhook Secret: During the script execution, a webhook secret is generated and displayed in the shell. Copy the secret displayed under "Use below as Gitlab and Github Webhook secret:" for use in GitHub or GitLab when setting up the webhook.
Navigate to the main page of the repository. Under your repository name, click Settings.
In the left sidebar, click Webhooks.
Click Add webhook.
Under Payload URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.
Note: The GitHub Payload URL should follow this format: https://<domain name/ip-address>/api/v1/github_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/github_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.
For example, a typical webhook URL would be https://cra.example.com/api/v1/github_webhooks
Select the Content type “application/json” for JSON payloads.
In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.
Click on Let me select individual events to select the events that you want to trigger the webhook. For code review select these:
Issue comments - To enable Code Review on-demand by issuing a command in the PR comment.
Pull requests - To auto-trigger Code Review when a pull request is created.
Pull request review comments - So, you can share feedback on the review quality by answering the feedback question in the code review comment.
To make the webhook active immediately after adding the configuration, select Active.
Click Add webhook.
Select the repository where the webhook needs to be configured.
On the left sidebar, select Settings > Webhooks.
Select Add new webhook.
In URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.
Note: The GitLab webhook URL should follow this format: https://<domain name/ip-address>/api/v1/gitlab_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/gitlab_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.
For example, a typical webhook URL would be https://cra.example.com/api/v1/gitlab_webhooks
In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.
In the Trigger section, select the events to trigger the webhook. For code review select these:
Comments - for on-demand code review.
Merge request events - for automatic code review when a merge request is created.
Emoji events - So, you can share feedback on the review quality using emoji reactions.
Select Add webhook.
Navigate to the main page of the repository. Under your repository name, click Repository Settings.
In the left sidebar, click Webhooks.
Click Add webhook.
Under URL, enter the URL of the webhook endpoint. This is the server's URL to receive webhook payloads.
Note: The Bitbucket Payload URL should follow this format: https://<domain name/ip-address>/api/v1/bitbucket_webhooks, where https://<domain name/ip-address> should be mapped to Bito's AI Code Review Agent container, which runs as a service on a configured TCP port such as 10051. Essentially, you need to append the string "/api/v1/bitbucket_webhooks" (without quotes) to the URL where the AI Code Review Agent is running.
For example, a typical webhook URL would be https://cra.example.com/api/v1/bitbucket_webhooks
In Secret token, enter the webhook secret token that you copied above. It is used to validate payloads.
In the Triggers section, select the events to trigger the webhook. For code review select these:
Pull Request > Comment created - for on-demand code review.
Pull Request > Created - for automatic code review when a pull request is created.
Select Save.
After configuring the webhook, you can invoke the AI Code Review Agent in the following ways:
Automated Code Review: If the webhook is configured to be triggered on the Pull requests event (for GitHub) or Merge request event (for GitLab), the agent will automatically review new pull requests as soon as they are created and post the review feedback as a comment within your PR.
Manually Trigger Code Review: To start the process, simply type /review in the comment box on the pull request and submit it. If the webhook is configured to be triggered on the Issue comments event (for GitHub) or Comments event (for GitLab), this action will initiate the code review process. The /review command prompts the agent to review the pull request and post its feedback directly in the PR as a comment.
Bito also offers specialized commands that are designed to provide detailed insights into specific areas of your source code, including security, performance, scalability, code structure, and optimization.
/review security: Analyzes code to identify security vulnerabilities and ensure secure coding practices.
/review performance: Evaluates code for performance issues, identifying slow or resource-heavy areas.
/review scalability: Assesses the code's ability to handle increased usage and scale effectively.
/review codeorg: Scans for readability and maintainability, promoting clear and efficient code organization.
/review codeoptimize: Identifies optimization opportunities to enhance code efficiency and reduce resource usage.
By default, the /review command generates inline comments, meaning that code suggestions are inserted directly beneath the code diffs in each file. This approach provides a clearer view of the exact lines requiring improvement. However, if you prefer a code review in a single post rather than separate inline comments under the diffs, you can include the optional parameter: /review #inline_comment=False
Please follow these steps:
Update the Agent's repository:
git pull origin main
Restart the Docker container:
To restart the Docker container running as a service, use the command below.
On Linux/macOS in Bash: Run ./bito-cra.sh service restart bito-cra.properties
On Windows in PowerShell: Run ./bito-cra.ps1 service restart bito-cra.properties
To stop the Docker container running as a service, use the command below.
On Linux/macOS in Bash: Run ./bito-cra.sh service stop
On Windows in PowerShell: Run ./bito-cra.ps1 service stop
To check the status of the Docker container running as a service, use the command below.
On Linux/macOS in Bash: Run ./bito-cra.sh service status
On Windows in PowerShell: Run ./bito-cra.ps1 service status
Exclude unnecessary files and folders from repo to index faster!
The indexable size is the size of all code files, excluding the following from the folder:
Directory/File based filtering
logs, node_modules, dist, target, bin, package-lock.json, data.json, build, .gradle, .idea, gradle, extension.js, vendor.js, ngsw.json, polyfills.js, ngsw-worker.js, runtime.js, runtime-main.js, service-worker.js, bundle.js, bundle.css
Extension based filtering
bin, exe, dll, log, aac, avif, bmp, cda, gif, mp3, mp4, mpeg, weba, webm, webp, oga, ogv, png, jpeg, jpg, wpa, tif, tiff, svg, ico, wav, mov, avi, doc, docx, ppt, pptx, xls, xlsx, ods, odp, odt, pdf, epub, rar, tar, zip, vsix, 7z, bz, bz2, gzip, jar, war, gz, tgz, woff, woff2, eot, ttf, map, apk, app, ipa, lock, tmp, logs, gmo, pt
Hidden files (i.e., files starting with ".") are filtered.
All empty files are filtered.
All binary files are also filtered.
For workspaces that have upgraded to Bito's 10X Developer Plan, we have set the indexable size limit to 120MB per repo. However, once we launch the "AI that Understands Your Code" feature for our Free Plan users, they will be restricted to repositories with an indexable size limit of 10MB.
If a repo hits the 120MB limit, the error message below will be displayed in the "Manage repos" tab and the repo's index status will be changed to "Not Indexed".
Sorry, we don’t currently support repos of this size. Please use .bitoignore to reduce the size of the repo you want Bito to index.
There are two ways to use the .bitoignore file:
Create a .bitoignore file inside the folder where indexes created by Bito are stored, e.g., <user-home-directory>/.bito/localcodesearch/.bitoignore
On Windows, this path will be something like: C:\Users\<your username>\.bito\localcodesearch\.bitoignore
Note: The custom ignore rules you set in this .bitoignore file will be applied to all the repositories where you have enabled indexing.
Create a .bitoignore file inside your repository's root folder.
Changes to the .bitoignore file are taken into account at the beginning of the indexing process, not during or after the indexing itself.
Understanding these patterns/rules is crucial for effectively managing the files and directories that Bito indexes and excludes in your projects.
# this is a comment.
Any line that starts with a # symbol is considered a comment and will not be processed.
*
(Wildcard character) Ignores all files
**
(Wildcard character) Match any number of directories.
todo.txt
Ignores a specific file named todo.txt
*.txt
Ignores all files ending with .txt
*.*
Ignores all files with any extension.
Engine/ or Engine/**
Ignores all files in the Engine directory and their subdirectories (contents).
subdirectory1/example.html
Ignores the file named example.html, specifically located in the directory named subdirectory1.
!contacts.txt
(Negation Rule) Explicitly tracks contacts.txt, even if all .txt files are ignored.
!Engine/Batch/Builds
(Negation Rule) Tracks the Builds directory inside Engine/Batch, overriding a broader exclusion.
!Engine/Batch/Builds/**
(Negation Rule) Tracks the Builds directory and all of its subdirectories inside Engine/Batch, overriding a broader exclusion.
!*.java
(Negation Rule) Ensures that all .java files are included, overriding any previous ignore rules that might apply to them.
!subdirectory1/*.txt
(Negation Rule) Tracks files with the .txt extension located specifically in the subdirectory1 directory, even if other rules might otherwise ignore .txt files.
BitoUtil?.java
The ? (question mark) matches any single character in a filename or directory name.
!
(exclamation mark) When a pattern starts with !, it negates the pattern, meaning it explicitly includes files or directories that would otherwise be ignored. For example, have a look at this sample .bitoignore file:
Here, the !Engine/Build/BatchFiles/** pattern includes all files in the Engine/Build/BatchFiles directory and its subdirectories, even though the Engine/** pattern would ignore them.
To exempt a file, ensure that the negation pattern ! appears after the broader exclusion, thereby overriding it.
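As a rough approximation of the "later rules override earlier ones" semantics described above (Bito's actual matcher may differ, and real gitignore-style matchers handle ** and anchoring with more care), the idea can be sketched with Python's fnmatch:

```python
from fnmatch import fnmatch


def is_ignored(path, rules):
    """Apply .bitoignore-style rules in order; later rules override earlier ones.

    Simplified approximation: fnmatch is not path-aware, so '**' and
    directory-only patterns are handled loosely here.
    """
    ignored = False
    for rule in rules:
        rule = rule.strip()
        if not rule or rule.startswith("#"):
            continue  # skip blank lines and comments
        negated = rule.startswith("!")
        pattern = rule[1:] if negated else rule
        # Match the full path, or treat a directory pattern as "its contents".
        if fnmatch(path, pattern) or fnmatch(path, pattern.rstrip("/") + "/*"):
            ignored = not negated  # a later matching rule wins
    return ignored


# The "!contacts.txt after *.txt" example from the table above:
rules = ["*.txt", "!contacts.txt"]
```

With these rules, notes.txt is ignored but contacts.txt is kept, because the negation appears after the broader *.txt exclusion.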
Answers to Popular Questions
Bito can index unlimited repositories for workspaces that have subscribed to our 10X Developer Plan. This feature is also coming soon for our Free Plan, but it will be limited to a maximum indexable repository size of 10MB.
Bito usually takes around 12 minutes per 10MB of code to understand your repo.
There is a limit on the amount of memory/context that can be used at a time to answer a question, so the answers may sometimes not cover all the code. To work around this, restrict your questions by providing additional criteria, for example:
In my code explain message_tokens in ai/request.js
Open your project in VS Code or JetBrains IDEs. From the Bito plugin pane, click the laptop icon located in the top-right corner.
On this tab, you will see the status of your current project as well as the status of any other project that you indexed previously.
To delete an index, navigate to the "Manage repos" tab.
Next, click the three-dots button next to your project's name, and then select the "Delete" option.
A warning popup box will appear at the bottom of Bito's plugin pane. You can choose to click the "Delete" button to remove the project's index from your system, or click the "Cancel" button to go back.
If for some reason you are struggling to index your project's folder while using Visual Studio Code or JetBrains IDEs, follow the steps below to delete the folder that contains all the indexes, and then try to re-index your project.
Close all JetBrains IDEs and VS Code instances where Bito is installed.
Go to your user directory. For example, on Windows it will be something like C:\Users\<your username>
Now, find the .bito folder and delete it. (Note: All configuration settings and project indexes created by Bito will be deleted. You will also be logged out of the Bito IDE plugin.)
Once you have deleted the .bito folder, open your project in the IDE again.
After restarting the IDE, you will need to enter your email ID and a 6-digit code to log in. Once you're logged in, select the workspace that has an active paid subscription.
After that, when Bito asks if you wish to index the folder, you can select "Maybe later".
Then, navigate to the "Manage repos" tab in the Bito plugin pane, where you should see the folder name listed under the "Current project" along with its size, indicating that it is not indexed. Since you have deleted the .bito folder, the "Other projects" section will no longer display any entries.
Finally, click on "Start Indexing" and it should begin indexing the folder.
Get Real-Time Suggestions from Bito as You Type or Through Code Comments
Bito analyzes the file you are currently editing and your codebase to understand the context. It offers two types of AI Code Completions:
In this method, as you are writing a line of code, Bito will automatically predict what you will write next and generate relevant suggestions based on your codebase.
In this method, you can write any kind of requirements you have in natural language comments, and Bito will suggest the best code tailored to your codebase to fulfill those requirements – often writing the entire function.
Since Bito is familiar with your entire codebase, it can provide more accurate code suggestions than other AI Coding Assistants available today.
For example:
Bito can see your imports and predict what task you are trying to complete.
Bito can read the function you're inside and predict what you'll do next.
Bito can spot the APIs you've integrated and suggest possible endpoints to call.
Bito provides high-quality code completions that align with the code you are working on. However, if the suggested completions are not as accurate in your specific case, you can write additional code or provide explicit instructions in comments to help Bito better understand the context and generate more precise solutions.
Seamless Integration With Your Coding Workflow
To accept the entire code suggestion, simply press the "Tab" key on your keyboard. Alternatively, you can accept the code completion incrementally, word by word, by pressing "" (coming soon...). To accept one line at a time, click the three dots button in the code completion UI toolbar and then select "Accept Line" (coming soon...).
If you don’t like the suggestion, Bito does not force you to use it. You can simply dismiss it by pressing the “Esc” key on your keyboard or continue typing as normal.
Bito also provides alternative suggestions, which you can navigate using the arrow keys in the code completion UI toolbar or by using the shortcut keys mentioned below.
Show next suggestion
macOS: Option + ]
Windows: ALT + ]
Show previous suggestion
macOS: Option + [
Windows: Alt + [
AI Code Completions
Bito’s "AI Code Completions" capabilities offer real-time, personalized code suggestions as you type. Powered by the latest best-in-class Large Language Models (such as GPT-4o mini and Google PaLM 2 – 540B parameters compare to Copilot’s 12B parameter model), Bito understands your codebase and provides contextually accurate code suggestions right from within your IDE. Bito’s model is also trained on data until 2 months back, many other models are trained on 12-18 months old data.
Speed up your development workflow with AI-assisted code completion. Watch as lines of code, full functions, or even entire code blocks are generated for you on the fly.
Supporting a wide range of over 35 programming languages—from Python to SQL, from C++ to Go and JavaScript—this feature is designed to make coding faster, easier, and more efficient for developers like you.
Learn how to Enable or Disable AI Code Completions
Click the gear icon at the bottom left of the VS Code window. Then select “Settings” to open the main settings page.
In the search bar, type "bito" and then from the sidebar click on "Bito" under "Extensions" to access the Bito extension settings.
Here you will see three options that can be configured. These are:
Enable Auto Completion: Tick this checkbox to enable inline code suggestion in the editor. Uncheck it to disable this feature.
Enable Comment to Code: Tick this checkbox to enable generating code from comments in the editor. Uncheck it to disable this feature.
Set Auto Completion Trigger Logic: Decide how quickly Bito makes suggestions by setting your preferred pause time. This input field lets you set the pause time in milliseconds; lower values trigger suggestions more often. The minimum and default value is 250 milliseconds.
Click the gear icon at the top right of the JetBrains IDE window, then select "Settings" to open the main settings window.
Now, in the sidebar click on "Tools" and then click "Bito" to access the Bito extension settings.
Here you will see three options that can be configured. These are:
Enable Auto Completion: Tick this checkbox to enable inline code suggestion in the editor. Uncheck it to disable this feature.
Enable Comment to Code: Tick this checkbox to enable generating code from comments in the editor. Uncheck it to disable this feature.
Set Auto Completion Trigger Logic: Decide how quickly Bito makes suggestions by setting your preferred pause time. This input field lets you set the pause time in milliseconds; lower values trigger suggestions more often. The minimum and default value is 250 milliseconds.
After completing all of the above steps, you must click the "Apply" and then the "OK" button to save your changes. Otherwise, your modifications will be lost.
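The trigger setting above is essentially a debounce: Bito waits until you have paused typing for at least the configured number of milliseconds before requesting a suggestion. A minimal sketch of that idea (illustrative only, not Bito's actual implementation):

```python
import time

def make_debounced_trigger(pause_ms=250):
    """Return (on_keystroke, should_trigger): a completion should only
    fire once `pause_ms` of idle time has passed since the last keystroke.
    Illustrative only -- not Bito's actual implementation."""
    state = {"last_keystroke": None}

    def on_keystroke(now=None):
        # Record the time of the latest keystroke.
        state["last_keystroke"] = time.monotonic() if now is None else now

    def should_trigger(now=None):
        # Trigger only after the configured idle pause.
        now = time.monotonic() if now is None else now
        if state["last_keystroke"] is None:
            return False
        return (now - state["last_keystroke"]) * 1000 >= pause_ms

    return on_keystroke, should_trigger
```

With the default 250 ms pause, a check 100 ms after a keystroke stays silent, while a check 300 ms later triggers a suggestion.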
Basic models are free, while Advanced models provide the best results.
Bito's 10X Developer Plan users can start a conversation with either Basic AI Models (e.g., GPT-4o mini, Claude Haiku, Nova Lite 1.0, and similar models) or more Advanced AI Models (e.g., o3-mini, DeepSeek-V3 (served from the US and Europe), GPT-4o, Claude Sonnet 3.5, and other best-in-class AI models). In contrast, Free Plan users are limited to Basic AI Models only.
This guide will help you understand when to use Basic and when to use Advanced AI models. You will also learn how to select and chat with these models in the Bito chatbox.
These models are designed to provide essential AI capabilities for most everyday coding tasks. They offer a solid starting point for generating boilerplate code, writing documentation, explaining code snippets, and solving simple coding problems.
While using Basic AI models, your prompts and the memory of the chat are limited to 40,000 characters (about 18 single-spaced pages).
They are also less expensive in terms of API costs. So, if you frequently ask simpler, less critical questions in the Bito chatbox, these Basic AI models will definitely help you save costs.
These models are more suitable for high-complexity tasks that require long/complex prompts and advanced reasoning.
They provide more accurate and relevant code snippets, comments, or solutions to complex coding problems.
Additionally, when using Advanced AI models, your prompts and the chat memory can extend up to 240,000 characters (about 110 single-spaced pages). This means that these models can process your entire code files, leading to more accurate answers.
So, if you are looking for the best results for complex tasks, then go with Advanced AI models.
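The two character limits quoted above are easy to check programmatically. Here is a small, hypothetical helper (the constants come from the limits above; the function itself is not part of Bito):

```python
BASIC_LIMIT = 40_000      # characters (prompt + chat memory), Basic models
ADVANCED_LIMIT = 240_000  # characters, Advanced models

def pick_model(prompt: str, history: str = "") -> str:
    """Suggest a model tier based on the documented character limits.
    Illustrative helper only -- not part of Bito's API."""
    total = len(prompt) + len(history)
    if total <= BASIC_LIMIT:
        return "BASIC"
    if total <= ADVANCED_LIMIT:
        return "ADVANCED"
    raise ValueError("Input exceeds the Advanced model context limit")
```

A 1,000-character question fits comfortably in a Basic model's context, while a 100,000-character code file needs an Advanced model.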
When you open the Bito plugin in VS Code or JetBrains IDEs, the "AI Chat" tab is displayed by default. This tab includes a drop-down menu at the bottom-right corner that allows you to select the AI model you want to chat with.
The available AI models are categorized under two sections "BASIC" and "ADVANCED". You can either let Bito auto-select an AI model or manually pick one that best suits your needs.
Once you select an AI model and start a chat with it, the drop-down menu will disappear, and your chosen model will handle the entire chat session.
If you want to change the AI model, click the New Chat icon located in the bottom-left corner of the Bito plugin pane. In the new chat session, select a different model from the drop-down menu.
Learn how to set up Bito CLI on your device (Mac, Linux, and Windows)
We recommend you use the following methods to install Bito CLI.
sudo curl https://alpha.bito.ai/downloads/cli/install.sh -fsSL | bash
Note: curl will always download the latest version.
yay -S bito-cli
or
paru -S bito-cli
From here, download the MSI installer called Bito CLI.exe, and then install Bito CLI using this installer.
On Windows 11, you might get a notification related to publisher verification. Click on "Show more" or "More info" and then click "Run anyway" (we are working on fixing this as soon as possible).
While it's not recommended, you can download the Bito CLI binary from our repository, and install it manually. The binary is available for Windows, Linux, and Mac OS (x86 and ARM architecture).
From here, download the Bito CLI binary specific to your OS platform.
Start the terminal and go to the location where you downloaded the binary. Move the downloaded file (in the command below, use the bito-* filename you downloaded) to the filename bito:
mv bito-<os>-<arch> bito
Make the file executable using the following command: chmod +x ./bito
Copy the binary to /usr/local/bin using the following command: sudo cp ./bito /usr/local/bin
Set the PATH variable so that Bito CLI is always accessible: PATH=$PATH:/usr/local/bin
Run Bito CLI with the bito command. If the PATH variable is not set, you will need to run the command with the complete or relative path to the Bito executable binary.
From here, download the Bito CLI binary for Windows called bito.exe.
To use Bito CLI, always move to the directory containing Bito CLI prior to running it.
Set PATH variable so that Bito CLI is always accessible.
Edit the "Path" variable and add a new path of the location where Bito CLI is installed on your machine.
sudo curl https://alpha.bito.ai/downloads/cli/uninstall.sh -fsSL | bash
Note: This will completely uninstall Bito CLI and all of its components.
For Windows, you can uninstall Bito CLI just like you uninstall any other software from the control panel. You can follow these steps:
Click on the Windows Start button and type "control panel" in the search box, and then open the Control Panel app.
Under the "Programs" option, click on "Uninstall a program".
Find "Bito CLI" in the list of installed programs and click on it.
Click on the "Uninstall" button (given at the top) to start the uninstallation process.
Follow the instructions provided by the uninstall wizard to complete the uninstallation process.
After completing these steps, Bito CLI should be completely removed from your Windows machine.
Discover Real-World Applications of AI Code Completions
Autocomplete regex patterns as you type or generate from comment.
Autocomplete SQL queries for CRUD operations, table structure definitions, SQL Joins, Wildcard Characters, etc. You can even ask Bito to write safer queries to prevent SQL Injection.
Effortlessly translate your user interface (UI) into any widely spoken language of your choice.
Populate arrays, variables, objects, and more with dummy data to facilitate thorough testing scenarios.
Bito is really good at writing custom functions. Just provide your requirements in comments and watch Bito generate the entire function for you.
Quickly generate boilerplate code for class definitions, including properties, constructor, and getter/setter methods. You may need to provide additional comments to generate methods with custom functionality.
Automatically generate docstrings for functions and classes.
Generate try...catch blocks.
Test-driven development (TDD)
Writing unit tests.
Writing test double.
Generate code for Object-Relational Mapping (ORM).
Generate code for Object Document Mapper (ODM).
Autocomplete loops (for, while, do...while, foreach)
Autocomplete conditional statements (if...else, if...elseif...else, switch)
Suggest existing functions from your codebase that can be called in the current scope.
Autocomplete Dockerfile Commands
Get Code for Popular Algorithms (e.g. A*, Dijkstra, etc.)
etc.
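As a concrete instance of one item above (safer SQL queries that prevent SQL injection), this is the kind of parameterized query Bito can suggest instead of string concatenation. The table and data here are invented for the example:

```python
import sqlite3

# Toy in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def find_user(name: str):
    # Placeholders (?) let the driver escape the value, so input like
    # "alice' OR '1'='1" is treated as data, not as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

A classic injection string simply matches no rows, because it is compared literally against the name column rather than being executed as SQL.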
Effortlessly Use AI Code Completions With Your Keyboard
Get/Trigger suggestions manually
macOS: Option + Shift + K
Windows: Alt + Shift + K
Accept entire suggestion
Tab
Accept single word from suggestion
Coming Soon...
Accept single line from suggestion
Coming Soon...
Dismiss suggestion
Esc
Show next suggestion
macOS: Option + ]
Windows: Alt + ]
Show previous suggestion
macOS: Option + [
Windows: Alt + [
Click the gear icon at the bottom left of the VS Code window. Then, select “Keyboard Shortcuts” to view all the keyboard shortcuts used by VS Code and its extensions.
In the search bar, type "bito" to view all the keyboard shortcuts used by the Bito extension.
Find the command for which you want to change the keyboard shortcut. Then, click on the edit icon in front of it.
A popup modal will appear. Enter your new key combination and press the Enter button to save it.
If you change a keyboard shortcut and want to revert to the original, just right-click on the specific command. A menu will pop up. Choose "Reset Keybinding" from this menu.
In JetBrains IDE settings, you can customize the keyboard shortcuts for the AI Code Completions feature according to your preferences. To do so, follow the steps below:
Click the gear icon at the top right of the JetBrains IDE window, then select "Settings" to open the settings window.
In the settings window, click on the "Keymap" button given in the left sidebar. Then, in the search bar, type "bito" to view all the keyboard shortcuts used by the Bito extension.
Find the command for which you want to change the keyboard shortcut and right-click on it. Then select "Add Keyboard Shortcut".
A popup modal will appear. Enter your new key combination and click the "OK" button to save it.
Now you will have more than one keyboard shortcut assigned to the command. To remove the previously set keyboard shortcut, right-click on the command again. From here, you can remove the desired keyboard shortcut by clicking the "Remove [keyboard_shortcut_here]" button.
After completing all of the above steps, you must click the "Apply" and then the "OK" button to save your changes. Otherwise, your modifications will be lost.
If you change a keyboard shortcut and want to revert to the original, just right-click on the specific command. A menu will pop up. Choose "Reset Shortcuts" from this menu.
After resetting the shortcut, you must click the "Apply" and then the "OK" button to save your changes. Otherwise, your modifications will be lost.
Command Line Interface (Powered by Bito AI Chat) to Automate Your Tasks
Learn about all the powerful commands to use Bito CLI
Run any one of the below commands.
bito --help
or
bito config --help
Run any one of the below commands to print the version number of Bito CLI installed currently.
bito -v
or
bito --version
The below commands can help you automate repetitive tasks like software documentation, test case generation, writing pull request descriptions, pull request reviews, release notes generation, writing commit messages, and much more.
Run the below command for non-interactive mode in Bito (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and mycode.js contains the actual code on which the action is to be performed).
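Assuming the -p (prompt file) and -f (input file) options described later in this guide, a non-interactive invocation would look something like this (file names are the examples above):

```shell
bito -p writedocprompt.txt -f mycode.js
```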
Run the below command to read the content at standard input in Bito (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and the input provided contains the actual content on which the action is to be performed).
Run the below command to directly concatenate a file, pipe it to bito, and get an instant result for your query.
Run the below command to redirect your output directly to a file (where -p can be used along with cat to perform a prompt-related action on the given content).
Run the below command to redirect your output directly to a file (where -p can be used along with type to perform a prompt-related action on the given content).
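Combining the pipe and redirect patterns above, such an invocation would look something like this (file names are illustrative; on Windows, replace cat with type):

```shell
cat mycode.js | bito -p writedocprompt.txt > explanation.md
```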
Run the below command to store context/conversation history in non-interactive mode in the file runcontext.txt, for use with the next set of commands in case prior context is needed. If runcontext.txt is not present, it will be created. Please provide a new file or an existing context file created by bito using the -c option. With the -c option, context is now supported in non-interactive mode.
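A context-preserving invocation along these lines (assuming the -p, -f, and -c options described in this guide) would be:

```shell
bito -p writedocprompt.txt -f mycode.js -c runcontext.txt
```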
Run the below command to instantly get response for your queries using Bito CLI.
Anything after the # symbol in your prompt file will be considered a comment by Bito CLI and won't be part of your prompt.
You can use \# as an escape sequence to make # part of your prompt and not treat it as the start of a comment.
Give me an example of bubble sort in python # everything written here will be considered as a comment now.
Explain what this part of the code does: \#include<stdio.h>
In the example above, \# is used as an escape sequence to include # as part of your prompt.
#This will be considered as a comment as it contains # at the start of the line itself.
Use the {{%input%}} macro in the prompt file to refer to the contents of the file provided via the -f option.
Example: To check if a file contains JS code or not, you can create a prompt file checkifjscode.txt with the following prompt:
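A hypothetical checkifjscode.txt using the macro (the exact wording is up to you) could read:

```
Answer "yes" or "no": does the following file contain JavaScript code?

{{%input%}}
```

You would then run it against a file via the -f option, e.g. bito -p checkifjscode.txt -f somefile.js.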
Answers to popular questions
Once the above is done, you can use the following commands to install Bito CLI using Homebrew:
First, tap the CLI repo using the brew tap gitbito/bitocli command; this is a one-time action and is not required every time.
Now you can install Bito CLI using the following command:
brew install bito-cli - this should install Bito CLI based upon your machine architecture.
To update Bito CLI to the latest version, use the following commands. Please make sure you always run brew update before upgrading to avoid any errors.
brew update - this will update all the required packages before upgrading.
brew upgrade bito-cli - once the above is done, this will update Bito CLI to the latest version.
brew uninstall bito-cli - this should uninstall Bito CLI completely from your system.
Click on the Templates button to expand or collapse Templates menu.
/ Command in Bito Chat Box: Type a forward slash / right at the start in the Bito chat box. Once you do, the templates menu will open, from which you can quickly select and use the template you want.
Want to narrow down your choices? Simply start typing after the / slash, and it'll only show you templates that match your words. And hey, you can also use the arrow keys, or Tab and Shift + Tab, to navigate the templates menu.
Select code, right click, and click Bito AI to access shortcuts
Go to View -> Command Palette -> Type "Bito" to access the templates
The following Loom demonstrates Standard Templates in Bito:
Bito includes the following standard templates out of the box.
Think of a huge, never-ending stream of information like photos, tweets, and songs pouring in every second. We need special storage boxes to keep all this info organized and find what we need quickly. One of the new, cool storage boxes people are talking about is called a “Vector Database”. So, what's this Vector Database thing, and why is it something you might want to know about? Let's unwrap this mystery and make it super easy to understand.
A vector database is designed to handle vectorized data - that is, data represented as vectors. A vector, in this context, is a mathematical construct that embeds information into a high-dimensional space, with each dimension representing a different feature of the data.
Traditionally, databases have been adept at handling structured data (like rows and columns in a spreadsheet) or even semi-structured data (like JSON documents). However, with the rise of machine learning and artificial intelligence, there is an increasing need to efficiently store and query data that isn't just numbers or text but is represented in multi-dimensional space.
Vector databases fill this gap by excelling at managing and querying data in the form of vectors. This is particularly useful for tasks that involve similarity search, like finding the most similar images, text, or even audio clips, in a process known as "nearest neighbor search".
Imagine trying to search for a song that sounds like another song or finding images that are visually similar to a given image. These tasks are non-trivial because they involve understanding the content at a deeper, more abstract level. Vector databases allow us to convert these abstract, complex features into a mathematical space where 'similarity' can be computed and searched efficiently.
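The "nearest neighbor search" just described can be sketched in a few lines. A real vector database replaces this brute-force scan with specialized indexes, but the idea is the same (the vectors and item names below are made up for illustration):

```python
import math

def nearest_neighbor(query, vectors):
    """Return the key of the vector closest to `query` by Euclidean
    distance -- the brute-force version of what a vector database
    does with specialized indexes."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(vectors, key=lambda k: dist(query, vectors[k]))
```

Given a toy index of embedded items, a query vector close to the "photo-like" region of the space finds a photo, while a very different query finds the document instead.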
Efficient Similarity Search: They use specialized indexing and search algorithms to perform fast and efficient nearest neighbor searches.
Scalability: They are designed to handle large volumes of data and high-dimensional vectors without sacrificing performance.
Machine Learning Integration: They are often integrated with machine learning models and pipelines to enable real-time embedding and querying.
Language Agnosticism: Vector databases can handle any data that can be vectorized, whether it's images, text, audio, or any other form of media.
Recommendation Systems: Vector databases can power recommendation engines that suggest products, movies, or songs by finding items that are similar to a user’s past behavior.
Image Retrieval: They are used in image search engines to find images that are visually similar to a query image.
Natural Language Processing: In the field of NLP, vector databases enable searching through large corpora of text for documents or entries that are contextually similar to a given piece of text.
Fraud Detection: They can be used to detect anomalies or patterns in transaction data that signify fraudulent activity by comparing against typical transaction vectors.
Let's look at some top players:
When you're picking out the perfect vector database, think about these things:
Do you need someone else to handle the techy database stuff, or do you have wizards in-house?
Got your vectors ready, or do you need the database to make them for you?
How fast do you need the data – right now, or can it wait?
How much experience does your team have with this kind of tech?
Is the database easy to learn, or is it going to be lots of late nights?
Can you trust the database to be up and running when you need it?
What's the price tag for setting it up and keeping it going?
How secure is it, and does it check all the legal boxes?
While vector databases are powerful, they come with challenges. The management and querying of high-dimensional data can be resource-intensive. The efficiency of a vector database often depends on the underlying infrastructure and the effectiveness of its indexing and compression algorithms.
Furthermore, security and privacy are crucial, especially when handling sensitive data. Vector databases must ensure that they incorporate robust security measures to protect against unauthorized access and data breaches.
As data continues to grow in volume and complexity, the importance of vector databases will only increase. Their integration with AI and machine learning makes them a natural match for a future where almost every digital interaction may involve some form of similarity search or content-based retrieval.
Vector Databases are a cutting-edge solution designed to handle the complexity of modern data needs, particularly in the realm of similarity search and AI applications. Understanding and leveraging vector databases can unlock a plethora of opportunities across industries, making them an exciting area of development in the database technology landscape.
As companies and developers keep using AI more and more, the use of vector databases is expected to increase a lot. This signals the start of a new period in how we handle data, where the way we sort and keep information is as complex and varied as the data itself.
Learn About AI Technologies & Concepts Powering Bito
In the steps below, we'll show you how Bito indexes your code, ensuring that each query you have is met with precise and contextually relevant information. From breaking down code into digestible chunks to leveraging advanced AI models for nuanced understanding, Bito transforms the daunting task of code analysis into a seamless and efficient experience.
Here's how the magic happens:
Dividing Code into Pieces
Bito starts by breaking down your source code files into smaller sections, known as 'chunks'. It’s like cutting up a long text into paragraphs to make it more manageable. Each chunk represents a piece of your code that can be individually indexed and analyzed.
Creating a Searchable Reference
After breaking down the file, each chunk is indexed, similar to creating a catalog entry. This step is crucial as it allows for the efficient location of the code segment later on.
Translating Code into Numeric Vectors
Next, each chunk is converted into an "embedding": a numeric vector that captures the meaning of that piece of code in a form the AI can compare mathematically.
Saving the Essential Data
These embeddings are then stored in an index file on your machine. This index file is like a detailed directory, listing the file name, the location of the chunk within the file (start and end), and the embedding vector for each piece of code.
Understanding Your Questions
When you ask a question in Bito's chatbox, the AI checks whether it contains specific keywords like "my code", "my project", etc. If so, Bito generates a numeric vector for your query, mirroring the process used for code chunks.
Matching Your Query with Code
Using the query's vector, Bito searches the index to find the code chunk with the closest matching embedding. This step identifies the relevant sections of your codebase that can answer your question.
Building a Bigger Picture
Identifying chunks is just part of the process. Bito ensures that these chunks make sense in the broader context of your code. If necessary, it expands the search to include complete functions or related code segments, creating a fuller, more accurate context.
Consulting the AI Experts
With the context in hand, Bito consults with language models – either basic (GPT-4o mini and similar models) or advanced (GPT-4o, Claude Sonnet 3.5, and best in class AI models) – to interpret the code within the context and provide an accurate response to your query.
Keeping Your Data Local
All the indexing and querying happens on your local machine. The index files are stored in the user's home folder; for example, on Windows the path will be something like C:\Users\<username>\.bito\localcodesearch. This ensures that your code and session history remain private and secure.
Ensuring Confidentiality
Bito is committed to privacy. All LLM accounts it uses are under strict agreements to prevent your data from being used for training, recorded, or logged.
Reducing AI Fabrication
Bito is designed to minimize AI 'hallucinations' or fabrications, ensuring the answers you receive are based on your actual code. Although completely eliminating hallucination isn't feasible, since some generalization beyond seen data is inherent to these models, Bito strives to keep it in check, especially when dealing with your local code.
With these steps, Bito provides a robust and privacy-conscious method for indexing and understanding your code, simplifying navigation and enhancing productivity in your development projects.
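The chunking and indexing steps described above can be sketched in miniature. The toy_embed function here is just a stand-in for a real embedding model, which produces far richer, high-dimensional vectors:

```python
def chunk(source: str, size: int = 3):
    """Split source code into fixed-size line chunks, recording where
    each chunk starts and ends (a toy version of step 1)."""
    lines = source.splitlines()
    return [
        {"start": i, "end": min(i + size, len(lines)),
         "text": "\n".join(lines[i:i + size])}
        for i in range(0, len(lines), size)
    ]

def toy_embed(text: str):
    """Stand-in for a real embedding model: counts a few token types.
    Real systems use learned, high-dimensional vectors."""
    return [text.count("def "), text.count("return"), len(text)]

def build_index(source: str):
    # Attach an embedding vector to each chunk -- the "detailed
    # directory" of file locations and vectors described above.
    return [{**c, "vector": toy_embed(c["text"])} for c in chunk(source)]
```

For a five-line file with two small functions, this produces two indexed chunks, each carrying its start/end line numbers and a vector.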
Any Shortcut such as "Performance Check" or "Improve Readability" that proposes changes to your existing code automatically opens a "Diff View" between the proposed and actual code. This allows you to review the changes before accepting them into your code. The diff view opens automatically when Bito AI returns the proposed changes. You can also view the diff at any point through the "Diff" action.
Video showing side-by-side diff view
If you are curious to know, this guide is for you!
Embeddings, at their essence, are like magic translators. They convert data—whether words, images, or, in Bito's case, code—into vectors in a dense numerical space. These vectors encapsulate meaning or semantics. Basically, these vectors help computers understand and work with data more efficiently.
Imagine an embedding as a vector (list) of floating-point numbers. If two vectors are close, they're similar. If they're far apart, they're different. Simple as that!
In this section, we'll explore the most common and impactful ways embeddings are used in everyday tech and applications.
Word Similarity & Semantics: Word embeddings, like Word2Vec, map words to vectors such that semantically similar words are closer in the vector space. This allows algorithms to discern synonyms, antonyms, and more based on their vector representations.
Sentiment Analysis: By converting text into embeddings, machine learning models can be trained to detect and classify the sentiment of a text, such as determining if a product review is positive or negative.
Recommendation Systems: Embeddings can represent items (like movies, books, or products) and users. By comparing these embeddings, recommendation systems can suggest items similar to a user's preferences. For example, by converting audio or video data into embeddings, systems can recommend content based on similarity in the embedded space, leading to personalized user recommendations.
Document Clustering & Categorization: Text documents can be turned into embeddings using models like Doc2Vec. These embeddings can then be used to cluster or categorize documents based on their content.
Translation & Language Models: Models like BERT and GPT use embeddings to understand the context within sentences. This contextual understanding aids in tasks like translation and text generation.
Image Recognition: Images can be converted into embeddings using convolutional neural networks (CNNs). These embeddings can then be used to recognize and classify objects within the images.
Anomaly Detection: By converting data points into embeddings, algorithms can identify outliers or anomalies by measuring the distance between data points in the embedded space.
Chatbots & Virtual Assistants: Conversational models turn user inputs into embeddings to understand intent and context, enabling more natural and relevant responses.
Search Engines: Text queries can be converted into embeddings, which are then used to find relevant documents or information in a database by comparing embeddings.
Suppose you have two functions in your codebase:
Function # 1:
Function # 2:
Using embeddings, Bito might convert these functions into two vectors. Because these functions perform different operations, their embeddings would be a certain distance apart. Now, if you had another function that also performed addition but with a slight variation, its embedding would be closer to the add function than the subtract function.
Let's oversimplify and imagine these embeddings visually:
Embedding for Function # 1 (add):
[0.9, 0.2, 0.1]
Embedding for Function # 2 (subtract):
[0.2, 0.9, 0.1]
Notice the numbers? The first positions in these lists are quite different: 0.9 for addition and 0.2 for subtraction. This difference signifies the varied operations these functions perform.
Now, let's add a twist. Suppose you wrote another addition function, but with an extra print statement:
Function # 3:
Bito might give an embedding like:
[0.85, 0.3, 0.15]
If you compare, this new list is more similar to the add function's list than the subtract one, especially in the first position. But it's not exactly the same as the pure add function because of the added print operation.
This distance or difference between lists is what Bito uses to determine how similar functions or chunks of code are to one another. So, when you ask Bito about a piece of code, it quickly checks these number lists, finds the closest match, and guides you accordingly!
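Using the toy vectors above, you can check this intuition with cosine similarity, one common way to measure how close two embeddings are (whether Bito uses exactly this metric is an implementation detail):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embeddings: 1.0 means identical direction,
    values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

add_vec      = [0.9, 0.2, 0.1]    # Function 1 (add)
subtract_vec = [0.2, 0.9, 0.1]    # Function 2 (subtract)
variant_vec  = [0.85, 0.3, 0.15]  # Function 3 (add with a print)
```

As expected, the add variant scores much closer to the pure add function than to the subtract function.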
When you ask Bito a question or seek assistance with a certain piece of code, Bito doesn't read the code the way we do. Instead, it refers to these vector representations (embeddings). By doing so, it can quickly find related pieces of code in your repository or understand the essence of your query.
For example, if you ask Bito, "Where did I implement addition logic?", Bito will convert your question into an embedding and then look for the most related (or closest) embeddings in its index. Since it already knows the add function's embedding represents addition, it can swiftly point you to that function.
When we talk about turning data into these nifty lists of numbers (embeddings), several models and techniques come into play. These models have been designed to extract meaningful patterns from vast amounts of data and represent them as compact vectors. Here are some of the standout models:
Word2Vec: One of the pioneers in the world of embeddings, this model, developed by researchers at Google, primarily focuses on words. Given a large amount of text, Word2Vec can produce a vector for each word, capturing its context and meaning.
Doc2Vec: An extension of Word2Vec, this model is designed to represent entire documents or paragraphs as vectors, making it suitable for larger chunks of text.
GloVe (Global Vectors for Word Representation): Developed by Stanford, GloVe is another method to generate word embeddings. It stands out because it combines both global statistical information and local semantic details from a text.
BERT (Bidirectional Encoder Representations from Transformers): A more recent and advanced model from Google, BERT captures context from both left and right (hence, bidirectional) of a word in all layers. This deep understanding allows for more accurate embeddings, especially in complex linguistic scenarios.
FastText: Developed by Facebook’s AI Research lab, FastText enhances Word2Vec by considering sub-words. This means it can generate embeddings even for misspelled words or words not seen during training by breaking them into smaller chunks.
ELMo (Embeddings from Language Models): This model dynamically generates embeddings based on the context in which words appear, allowing for richer representations.
Universal Sentence Encoder: This model, developed by Google, is designed to embed entire sentences, making it especially useful for tasks that deal with larger text chunks or require understanding the nuances of entire sentences.
GPT (Generative Pre-trained Transformer): Developed by OpenAI, GPT is a series of models (from GPT-1 to GPT-4o) that use the Transformer architecture to generate text. While GPT models are famous for generating text, they can also produce vector embeddings. Their latest embeddings model is text-embedding-ada-002 which can generate embeddings for text search, code search, sentence similarity, and text classification tasks.
These models, among many others, power a wide range of applications, from natural language processing tasks like sentiment analysis and machine translation to aiding assistants like Bito in understanding and processing code or any other form of data.
While embeddings might seem like just another technical term or a mere list of numbers, they are crucial bridges that connect human logic and machine understanding. The ability to convert complex data, be it code, images, or even human language, into such vectors, and then use the 'distance' between these vectors to find relatedness, is nothing short of magic.
In the context of Bito, embeddings aren't just a feature; they're the core that powers its deep understanding of your code, making it an indispensable tool for developers. So, the next time you think of Bito's answers as magical, remember: it's the power of embeddings at work!
Learn how to work with Bito CLI (including examples)
Terminal
Bash (for Mac and Linux)
CMD (for Windows)
Execute Chat: Run the bito command at the command prompt to get started. Ask anything you want help with, such as awk command to print first and last column.
Note: Bito CLI supports long prompts through multiline input. Press Ctrl+D to complete and submit the prompt; the Enter/Return key adds a new line to the input.
Exit Bito CLI: To quit Bito CLI, type quit and press Ctrl+D.
Terminate: Press Ctrl+C to terminate Bito CLI.
Check out the video below to get started with Bito CLI.
Here are two examples for you to see My Prompt in action:
How to Create Git Commit Messages and Markdown Documentation with Ease using Bito CLI My Prompt:
How to generate test data using Bito CLI My Prompt:
Manage Bito CLI settings
Run bito config -l or bito config --list to list all config variables and their values.
Run bito config -e or bito config --edit to open the config file in your default editor.
An Access Key can be persisted in Bito CLI by adding it to the config file using bito config -e. A persisted Access Key can be overridden for a transient session (a session that lasts only a short time) by running bito -k <access-key> or bito --key <access-key>.
By default, the AI Model Type is set to ADVANCED. It can be overridden for the current session by running bito -m <BASIC/ADVANCED>. The model type can be set to BASIC or ADVANCED and is case insensitive.
"ADVANCED" refers to AI models like GPT-4o, Claude Sonnet 3.5, and best in class AI models, while "BASIC" refers to AI models like GPT-4o mini and similar models.
When using Basic AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with Advanced AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that Advanced models can process your entire code files, leading to more accurate answers.
If you are seeking the best results for complex tasks, then choose Advanced AI models.
Bito CLI (Command Line Interface)
For example, you can run a command like bito -p writedocprompt.txt -f mycode.js for non-interactive mode in Bito CLI, where writedocprompt.txt contains your prompt text (such as Explain the code below in brief) and mycode.js contains the actual code on which the action is to be performed.
Download Bito CLI from GitHub:
With support for 50+ programming languages (Python, JavaScript, SQL, etc.) and 50+ spoken languages (English, German, Chinese, etc.), Bito CLI is versatile and adaptable to different project needs. Furthermore, it's designed to be compatible across multiple operating systems, including Windows, Mac, and Linux, ensuring a wide range of usability.
You can either use "ADVANCED" AI models like GPT-4o, Claude Sonnet 3.5, and best in class AI models, or "BASIC" AI models like GPT-4o mini and similar models inside Bito CLI.
When using Basic AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with Advanced AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that Advanced models can process your entire code files, leading to more accurate answers.
If you are seeking the best results for complex tasks, then choose Advanced AI models.
WINDOWS: CTRL + /  MAC: CTRL + SHIFT + /
This feature is only available for our 10X Developer Plan. Visit the or to learn more about our paid plans.
Bito uses AI to create an of your project’s codebase. It enables Bito to understand the code and provide relevant answers. There are three ways to start the indexing process:
From here you can start the by clicking on the “Start Indexing” button given in front of your current project. Here, you will also see the total indexable size of the repository. Read more about
Additional keywords for various languages are listed on the page. Also, here are some .
Here is the with Bito AI.
Here’s a with Bito AI.
You can for this PyWordle game from GitHub.
This feature is only available for our 10X Developer Plan. Visit the or to learn more about our paid plans.
Bito uses AI to create an of your project’s codebase. It enables Bito to understand the code and provide relevant answers. There are three ways to start the indexing process:
From here you can start the by clicking on the “Start Indexing” button. Here, you will also see the total indexable size of the repository. Read more about
Additional keywords for various languages are listed on the page. Also, here are some .
Here’s a with Bito AI. You can also for this JavaScript Music Player from GitHub or try the .
Bito deploys a locally on the user’s machine, bundled as part of the Bito IDE plug-in. This database uses (a vector with over 1,000 dimensions) to retrieve text, function names, objects, etc. from the codebase and then transform them into multi-dimensional vector space.
Learn more about so that it can understand it.
Finally, Bito utilizes from OpenAI, Anthropic, and others that actually provide the answer to the question by leveraging the context provided by the Agent Selection Framework and the embeddings.
We’re making significant innovations in our to simplify coding for everyone. To learn more about this head over to .
The user-friendly dashboards help you track key metrics such as pull requests reviewed, issues found, lines of code reviewed, and understand individual contributions.
By default, the 10X Developer Plan utilizes to process queries. You can easily AI models anytime.
Bito’s offers a flexible solution for teams looking to enforce custom code review rules, standards, and guidelines tailored to their unique development practices. Whether your team follows specific coding conventions or industry best practices, you can customize the Agent to suit your needs.
, and the AI Code Review Agent automatically adapts by creating code review rules to prevent similar suggestions in the future.
, and we will implement them within your Bito workspace.
The custom code review rules are displayed on the dashboard in Bito Cloud.
Email your code review guidelines to : Provide us with your team’s code review guidelines, standards, or any specific rules you want the AI to enforce.
Billing plan requirement: Custom code review guidelines are part of our , which includes personalized support and advanced customization options. The Team plan comes with custom pricing based on your team’s size and specific requirements. For a detailed quote, please contact us at
Yes, this feature is available exclusively on the and comes with an additional charge. For more details on pricing and implementation, please contact our support team at .
Bito Access Key: Obtain your Bito Access Key.
GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.
Login to your account.
Secret: Enter your Bito Access Key here. Refer to the .
Secret: Enter your GitHub Personal Access Token (Classic) with repo access. We do not support fine-grained tokens currently. For more information, see the section.
Check the above section to learn more about creating the access tokens needed to configure the Agent.
Note: For more information, see .
Note: For more information, see .
Note: For more information, see .
from AI Code Review Agent's GitHub repo.
Create a self-hosted Runner using Linux image and x64 architecture as described in the .
Create a copy of the main branch of Bito's repository in your self-hosted GitHub organization (e.g., "myorg") under the required name (e.g., "gitbito-bitocodereview"). In this example, the repository will now be accessible as "myorg/gitbito-bitocodereview".
Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please .
For more details, refer to .
Bito Access Key: Obtain your Bito Access Key.
GitHub Personal Access Token (Classic): For GitHub PR code reviews, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.
GitLab Personal Access Token: For GitLab PR code reviews, a token with API access is required.
Snyk API Token (Auth Token): For Snyk vulnerability reports, obtain a Snyk API Token.
Prerequisites: Before proceeding, ensure you've completed all necessary AI Code Review Agent.
Clone the repository: GitHub repository to your server using the following command:
Note: It is recommended to clone the repository instead of downloading the .zip file. This approach allows you to easily later using the git pull command.
Open the bito-cra.properties file in a text editor from the “cra-scripts” folder. Detailed information for each property is provided on page.
Check the guide to learn more about creating the access tokens needed to configure the Agent.
Reference-1:
Reference-2:
:
Login to your account.
:
Login to your account.
:
Login to your account.
Note: To improve efficiency, the AI Code Review Agent is disabled by default for pull requests involving the "main" branch. This prevents unnecessary processing and token usage, as changes to the "main" branch are typically already reviewed in release or feature branches. To change this default behavior and include the "main" branch, please .
For more details, refer to .
Pull the latest changes from the repository by running the following command in your terminal, ensuring you are inside the repository folder:
Learn more about above and see which files and folders are excluded by default.
You can reduce your repo's indexable size by excluding certain files and folders from indexing using file and remain within the limit.
To fix this issue, follow our instructions regarding and reduce your repo's size and bring it under the max limit of 120MB.
After that, you must and then restart the indexing by clicking on the "Start Indexing" button shown for the repo folder. You can also follow our step-by-step guides to and IDEs.
A .bitoignore file is a plain text file where each line contains a pattern or rule that tells Bito which files or directories to ignore and not index. In other words, it's a way to reduce your repo's indexable size. You can also see .
Therefore, to implement changes made to the .bitoignore file, you'll need to and then restart the indexing by clicking on the "Start Indexing" button shown for the repo folder. You can also follow our step-by-step guides to and IDEs.
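Assuming the file follows familiar gitignore-style pattern syntax (one pattern per line; the specific paths below are just examples), a .bitoignore might look like:

```
# Dependency and build output, usually the bulk of a repo's size
node_modules/
dist/
build/

# Large binary assets that add little to code understanding
*.png
*.zip

# Logs and local environment files
*.log
.env
```

Excluding directories like these is usually enough to bring a large repo under the indexable size limit.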
Bito takes time to thoroughly read and understand your entire repository. This is completely normal. If your repository is large, it can take several hours to get .
In case you close Visual Studio Code or a JetBrains IDE (e.g., PyCharm) while the is in progress, don't worry. The indexing will be paused and will automatically continue from where it left off when you reopen the IDE. Currently, indexing resumes 5-10 minutes after reopening the IDE.
Use to accept, reject, or navigate through multiple suggestions.
Use to accept, reject, or navigate through multiple suggestions.
Bito can understand both single-line and multi-line comments in the . Therefore, if you have lengthy requirements, simply use multi-line comments for ease!
Explore some feature.
After gathering the context, Bito uses different to come up with some options that you will most likely want to write next. So, if one solution doesn’t work, there are more you can try.
Bito's AI Code Completions don't interfere with your coding process. Suggestions appear only after you have paused typing for 250 milliseconds (you can adjust this in ), or when you explicitly request them by typing Alt + Shift + K on Windows or Option + Shift + K on macOS, and the suggested code is merely displayed as a placeholder.
AI Code Completions are disabled by default. Learn how to in settings.
Let's dive in to see .
Users on Bito's Free Plan receive 300 free AI Code Completions per month, with a daily limit of 75 completions. In contrast, paid users can enjoy unlimited AI Code Completions each month, subject to the .
Learn more about Bito's paid plans on our .
are disabled by default. To enable them follow the steps below.
From the settings sidebar, click on "Text Editor" and then select "Suggestions". Now, on the right side, tick the checkbox in front of the "Inline Suggest: Enabled" option. Please note that if this option is disabled, then functionality will not work.
You can also continue your previous chat sessions by selecting them from the tab.
Advanced AI Models are only available in Bito's 10X Developer Plan. If you have not subscribed to it yet, then head over to our to learn more about it. One of the key features of 10X Developer Plan is .
To use Advanced AI Models, you need a Bito 10X Developer Plan. For details about the costs, please visit our .
Tip: Instead of starting a new conversation each time you want to switch between the Basic and Advanced AI models, you can revisit and continue your previous chats by navigating to the tab in the plugin. This allows you to pick up where you left off with any AI model.
Arch and Arch based distro users can install it from
Note for the Mac Users: You might face issues related to verification for which you will have to manually do the steps from (we are working on fixing it as soon as possible).
In the , open the folder that has the latest version number.
In the , open the folder that has the latest version number.
In the , open the folder that has the latest version number.
Follow the instructions as per this
Say goodbye to endless searches on Google or Stack Overflow for answers to your coding dilemmas. Discover the numerous advantages offered by Bito's feature outlined below, designed to streamline your coding process and boost productivity.
Here, the first screenshot displays an example of . The other two screenshots are examples of .
In VS Code settings, you can customize the keyboard shortcuts for feature according to your preferences. To do so, follow the below steps:
Explore some we've created using , which you can implement in your projects right now. These automations showcase the powerful capabilities of Bito CLI.
Unicode characters (from other languages) might not be readily supported in the command prompt on Windows 10 or below. You can run the command chcp 936 in cmd before using bito to support Unicode characters on Windows 10 or below (to undo this setting, you can follow ).
Before using homebrew, please make sure that you uninstall any previously installed versions of Bito CLI using the .
To uninstall Bito CLI, you can either use the or use the following commands:
Generating the best possible response is as much science as art. The AI models, built on the same technology as ChatGPT, handle the science part. Crafting a good prompt is the art part. The Templates in Bito take the burden of being crafty off your shoulders. You can select a piece of code and use one of the eight prompts, whether you want to check the code for performance or add error handling. The behind-the-scenes actor "Bito Prompt Manager" crafts a well-versed prompt that squeezes the best response out of the machine. You can also save your favorite prompts for quick access anytime. Check out .
For instance, in the world of machine learning, models like neural networks can convert images or text into vectors during their processing stages. These vectors, known as , capture the essence of the data. When you query a vector database with another vector, it retrieves the most similar items based on the vector's position and distance in that high-dimensional space.
A cloud-native, managed vector database that doesn't require infrastructure management. Pinecone offers fast data processing and quality relevance features like metadata filters and supports both sparse and dense vectors. Key offerings include duplicate detection, rank tracking, and deduplication.
An open-source vector database tailored for AI applications and similarity search, it provides fast search capabilities across trillions of vector datasets and boasts high scalability and reliability. Its use cases span across image and chatbot applications to chemical structure search.
Aimed at building LLM applications, Chroma is an open-source, AI-native embedding database offering features like filtering and intelligent grouping. It positions itself as a database that combines document retrieval capabilities with AI to enhance data querying processes.
This is a cloud-native, open-source vector database that stands out with its AI modules and ability to handle text, images, and other data conversions into searchable vectors. It offers quick neighbor search and is designed with scalability and security in mind.
Designed for deep learning and LLM-based applications, Deep Lake supports a wide array of data types and integrates with various tools to facilitate model training and versioning. It emphasizes ease in deploying enterprise-grade products.
A versatile open-source vector search engine and database that supports payload-based storage and extensive filtering. It is well-suited for semantic matching and faceted search, with a focus on efficiency and configuration simplicity.
A highly scalable open-source analytics engine capable of handling diverse data types, Elasticsearch is part of the Elastic Stack, offering fast search, fine-tuned relevance, and sophisticated analytics.
Vespa is an open-source data serving engine that enables machine-learned decisions on massive datasets at serving time. It's built for high-performance and high-availability use cases, facilitating a variety of complex query operations.
Focused on dense vector search, Vald is a distributed, cloud-native search engine that uses the ANN Algorithm NGT for neighbor searches. It features automatic indexing, index backup, and horizontal scaling.
A Google-developed method that improves search accuracy and performance for vector similarity, ScaNN is known for its effective compression techniques and support for different distance functions.
As a PostgreSQL extension, pgvector brings vector similarity search to the robust, feature-rich environment of PostgreSQL, enabling embeddings to be stored and searched alongside other application data.
Developed by Facebook AI Research, Faiss is a library for efficient similarity search and clustering of dense vectors. It's versatile, supporting various distances and batch processing, and it can operate on datasets larger than available RAM.
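All of the engines above ultimately answer the same core query: given a vector, which stored vectors are nearest? A brute-force sketch of that operation, the part that indexes like HNSW or NGT exist to accelerate (document IDs and 4-dimensional vectors here are invented for illustration):

```python
import math

def cosine_distance(a, b):
    # 0.0 means identical direction; larger means less related.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# A toy "vector database": id -> embedding.
store = {
    "doc_sorting": [0.9, 0.1, 0.0, 0.2],
    "doc_parsing": [0.8, 0.2, 0.1, 0.3],
    "doc_billing": [0.0, 0.9, 0.8, 0.1],
}

def query(vec, k=2):
    # Exact k-nearest-neighbor scan; real engines use ANN indexes instead.
    ranked = sorted(store.items(), key=lambda kv: cosine_distance(vec, kv[1]))
    return [doc_id for doc_id, _ in ranked[:k]]

print(query([0.85, 0.15, 0.05, 0.25]))  # → ['doc_sorting', 'doc_parsing']
```

A linear scan like this is fine for thousands of vectors; the databases above trade a little exactness for approximate indexes that stay fast at billions.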
Indexing involves breaking down a source code file into smaller chunks and converting these chunks into that can be stored in a . Bito indexes your entire codebase locally (on your machine) to understand it and provide answers tailored to your code.
Learn more about Bito's feature.
For every chunk, Bito generates a numeric vector or . This process, which can be done using OpenAI or alternative open-source embedding models, translates the code into a mathematical representation. The idea is to create a form that can be easily compared and matched with other code chunks.
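A sketch of that pipeline, with the chunking shown concretely and the embedding call left as a stub. The chunk size, overlap, and embed API here are illustrative assumptions, not Bito's actual values:

```python
def chunk_source(text, chunk_size=200, overlap=40):
    """Split text into overlapping character chunks so context isn't lost
    at chunk boundaries. Real chunkers often split on function boundaries."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

code = "def add(a, b):\n    return a + b\n" * 20  # stand-in for a source file
chunks = chunk_source(code)

# Each chunk would now be passed to an embedding model and stored, e.g.:
# vector = embedding_model.embed(chunk)   # hypothetical API
print(len(chunks), len(chunks[0]))
```

The overlap means a function that straddles a chunk boundary still appears whole in at least one chunk, which keeps its embedding meaningful.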
The complete list of these keywords is given on our page.
Bito leverages the power of embeddings to . But WTF are these embeddings, and how do they help Bito understand your code?
Bito uses text-embedding-ada-002 from OpenAI and we’re also trying out some open-source embedding models for our feature.
Before you can use Bito CLI, you need to and it. Once the setup is done, follow the steps below:
Here is the complete list of .
is an alternate authentication mechanism to Email & OTP based authentication. You can use an Access Key in Bito CLI to access various functionalities such as Bito AI Chat. Here’s a guide on . Basically, after creating the Access Key, you have to use it in the config file mentioned above. For example: access_key: "YOUR_ACCESS_KEY_HERE"
Access to Advanced AI models is only available in Bito's . However, Basic AI models can be used by both free and paid users.
To see how many Advanced AI requests you have left, please visit the page. On this page, you can also set to control usage of Advanced AI model requests for your workspace and avoid unexpected expenses.
Also note that even if you have set preferred_ai_model: ADVANCED in the Bito CLI config, if your Advanced AI model requests quota is finished (or your self-imposed is reached), Bito CLI will start using Basic AI models instead of Advanced AI models.
is an innovative tool that harnesses the power of functionality to automate software development workflows. It can automate repetitive tasks like software documentation, test case generation, pull request review, release notes generation, writing commit message or pull request description, and much more.
Here is the complete list of .
Access to Advanced AI models is only available in Bito's . However, Basic AI models can be used by both free and paid users.
Bito CLI is an invaluable asset for developers looking to increase efficiency and productivity in their workflows. It allows developers to save time and focus on more complex and creative aspects of their work. Additionally, Bito CLI plays a crucial role in supporting continuous integration and deployment (CI/CD) processes. Explore some we've created using Bito CLI, which you can implement in your projects right now. These automations showcase the powerful capabilities of Bito CLI.
To get started, check out our guide on , ensuring you make the most out of it.
Bito CLI (Command Line Interface)
Learn how to setup Bito CLI on your device (Mac, Linux, and Windows)
Manage Bito CLI settings
Learn how to work with Bito CLI (including examples)
Learn about all the powerful commands to use Bito CLI
Answers to popular questions
Explain Code
Explains what the code does and how it works.
Generate Comment
Generates a comment for the selected code.
Performance Check
Checks the code for performance issues and rewrites it with suggested optimizations.
Security Check
Checks the code for basic security issues and rewrites it with suggested fixes.
Style Check
Checks the code for common style issues and rewrites it with suggested fixes.
Improve Readability
Refactors the code for better readability.
Clean Code
Removes debug statements.
Generate Unit Tests
Generates unit tests for the selected code.
AI Code Completions
Get real-time suggestions from Bito as you type or through code comments
Learn how to enable or disable AI Code Completions
Effortlessly use AI Code Completions with your keyboard
Seamless integration with your coding workflow
Supporting over 35 programming languages such as Python, SQL, C++, Go, JavaScript, and more
Discover real-world applications of AI Code Completions
At the heart of every LLM, from GPT-3.5 Turbo to the latest GPT-4o, are tokens. These are not your arcade game coins but the fundamental units of language that these models understand and process. Imagine tokens as the DNA of digital language—their sequence dictates how an LLM interprets and responds to text.
A token is created when we break down a massive text corpus into digestible bits. Think of it like slicing a cake into pieces; each slice, or token, can vary from a single word to a punctuation mark or even a part of a word. The process of creating tokens, known as tokenization, simplifies complex input text, making it manageable for LLMs to analyze.
Here’s a quick reference to understand token equivalents:
1 token ≈ 4 characters in English
1 token ≈ ¾ of a word
100 tokens ≈ 75 words or about 1–2 sentences
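Those rules of thumb make it easy to ballpark token counts before calling an API. This is a heuristic only; exact counts come from the model's own tokenizer:

```python
def estimate_tokens(text):
    # Rule of thumb for English text: roughly 4 characters per token.
    return max(1, round(len(text) / 4))

sentence = "The quick brown fox jumps over the lazy dog."
print(estimate_tokens(sentence))  # 44 characters → roughly 11 tokens
```

Estimates like this are handy for budgeting requests, but always leave headroom, since code and non-English text tokenize less efficiently.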
Imagine you have a sentence: "The quick brown fox jumps over the lazy dog." An LLM would use tokenization to chop this sentence into manageable pieces. Depending on the chosen method (we’ve discussed it in the next section below), this could result in a variety of tokens, such as:
Word-level: ["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
Subword-level: ["The", "quick", "brown", "fox", "jumps", "over", "the", "la", "zy", "dog"]
Character-level: ["T", "h", "e", " ", "q", "u", "i", "c", "k", " ", ...]
Each method has its own advantages and disadvantages.
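The word- and character-level splits above are simple enough to reproduce directly (subword splits depend on a learned vocabulary, so the subword example stays fixed):

```python
sentence = "The quick brown fox jumps over the lazy dog."

# Word-level: split on whitespace, dropping the trailing punctuation.
words = sentence.rstrip(".").split()
print(words)        # the nine word tokens from the example above

# Character-level: every character, including spaces, is a token.
chars = list(sentence)
print(chars[:6])    # ['T', 'h', 'e', ' ', 'q', 'u']
```

Real tokenizers handle punctuation, casing, and whitespace with far more care, but the contrast in granularity is the same.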
Word-level tokenization is straightforward and aligns with the way humans naturally read and write text. It is effective for languages with clear word boundaries and for tasks where the meaning is heavily dependent on the use of specific words. However, this method can lead to very large vocabularies, especially in languages with rich morphology or in cases where the text contains a lot of different proper nouns or technical terms. This large vocabulary can become a problem when trying to manage memory and computational efficiency.
Subword-level tokenization, often implemented through methods like Byte Pair Encoding (BPE) or SentencePiece, addresses some of the issues of word-level tokenization. By breaking down words into more frequently occurring subunits, this method allows the model to handle rare or out-of-vocabulary (OOV) words more gracefully. It balances the vocabulary size and the ability to represent the full range of text seen during training. It can also be more effective for agglutinative languages (like Turkish or Finnish), where you can combine many suffixes with a base word, leading to an explosion of possible word forms if using word-level tokenization.
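The core of BPE is easy to sketch: start from characters and repeatedly merge the most frequent adjacent pair. This is a toy version of the training loop; real tokenizers also record the learned merge rules so they can be replayed on new text:

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count every adjacent pair and return the most common one.
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def bpe_merge(tokens, num_merges):
    for _ in range(num_merges):
        a, b = most_frequent_pair(tokens)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + b)  # fuse the pair into one token
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

# Start from the characters of a word seen often in "training data".
print(bpe_merge(list("lazylazylazy"), 3))  # → ['lazy', 'lazy', 'lazy']
```

Frequent words collapse into single tokens while rare words remain as several subword pieces, which is exactly the balance subword tokenization aims for.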
Character-level tokenization has the advantage of the smallest possible vocabulary. Since it deals with characters, it is very robust to misspellings and OOV words. However, because it operates at such a fine-grained level, it may require more complex models to understand higher-level abstractions in the text. Models may need to be larger or more complex to learn the same concepts that could be learned with fewer parameters at higher levels of tokenization.
Beyond these, there are other tokenization methods such as:
Byte-level: Similar to character-level, but treats the text as a sequence of bytes, which can be useful for handling multilingual text uniformly.
Morpheme-level: Breaks words down into morphemes, which are the smallest meaningful units of language. This can be useful for capturing linguistic nuances but requires sophisticated algorithms to implement effectively.
Hybrid approaches: Some models use a combination of the above methods, often starting with a larger unit and then falling back to smaller units when the first approach does not work.
The choice of tokenization can affect not just the performance of an LLM but also its understanding of the text. For example, using a subword tokenizer that never breaks down "dog" into smaller pieces ensures that the model always considers "dog" as a semantic unit. In contrast, if "dog" could be broken down into "d" and "og", the model might lose the understanding that "dog" represents an animal.
The complexity and number of tokens directly impact the computational horsepower needed to run AI models. More tokens generally mean more memory and processing power, which translates to higher costs.
When you use services like OpenAI's GPT models, you're charged based on the number of tokens processed. With different rates for different models (like Davinci or Ada), budgeting for AI usage can get tricky. This makes the choice of tokenization method not just a technical decision but also a financial one.
A crucial point about LLMs is that they can only handle a limited number of tokens at once—this is their token limit. The more tokens they can process, the more complex the tasks they can handle.
Imagine asking an AI to write a novel in one go. If the token limit is low, it might only manage a chapter. If it's high, you could get a full book, but it might take ages to write. It's all about finding the balance between performance and practicality.
Here’s the token limits chart of popular LLMs.
GPT-3.5 Turbo: 16,385-token context window; 4,096-token max output
GPT-3.5 Turbo Instruct: 4,096-token context window; 4,096-token max output
GPT-4: 8,192-token context window; 8,192-token max output
GPT-4o: 128,000-token context window; 4,096-token max output
GPT-4o mini: 128,000-token context window; 16,384-token max output
Claude Sonnet 3.5: 200,000-token context window; 8,192-token max output
But what happens when you have more to say than the token limit allows?
Truncation: The most straightforward approach is to cut the text down until it fits the token budget. However, this is like trimming a picture; you lose some of the scenes.
Chunk Processing: Break your text into smaller pieces, process each chunk separately, and stitch the results together. It's like watching a series of short clips instead of a full movie.
Summarization: Distill your text to its essence. For example, "It's sunny today. What will the weather be like tomorrow?" can be shortened to "Tell me tomorrow's weather."
Remove Redundant Terms: Cut out the fluff—words that don't add significant meaning (like "the" or "and"). This streamlines the text but beware, over-pruning can alter the message.
Fine-Tuning Language Models: Custom-train your model on specific data to get better results with fewer tokens. It’s like prepping a chef to make a dish they can cook blindfolded.
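Of these strategies, chunk processing is the easiest to automate. Here is a sketch that splits text into pieces respecting a token budget, using the ~4-characters-per-token rule of thumb (a production version would count tokens with the model's actual tokenizer):

```python
def split_into_chunks(text, max_tokens=100):
    # ~4 characters per token is a rough heuristic for English text.
    max_chars = max_tokens * 4
    words, chunks, current = text.split(), [], ""
    for word in words:
        # Start a new chunk when adding this word would bust the budget.
        if current and len(current) + len(word) + 1 > max_chars:
            chunks.append(current)
            current = word
        else:
            current = (current + " " + word).strip()
    if current:
        chunks.append(current)
    return chunks

long_text = "lorem ipsum dolor sit amet " * 200
chunks = split_into_chunks(long_text, max_tokens=50)
print(len(chunks), max(len(c) for c in chunks))  # every chunk fits the budget
```

Each chunk can then be sent to the model separately and the responses stitched together, as described above.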
Tokens are much more than jargon—they're central to how language models process and understand our queries and commands.
Understanding tokens and their role in AI language processing is fundamental for anyone looking to leverage the power of LLMs in their work or business. By grasping the basics of tokenization and its impact on computational requirements and costs, users can make informed decisions to balance performance with budget.
Retrieval Augmented Generation (RAG) is a paradigm-shifting methodology within natural language processing that bridges the divide between information retrieval and language synthesis. By enabling AI systems to draw from an external corpus of data in real-time, RAG models promise a leap towards a more informed and contextually aware generation of text.
RAG fuses in-depth data retrieval with creative language synthesis in AI. It's like having an incredibly knowledgeable friend who can not only recall factual information but also weave it into a story seamlessly, in real-time.
To understand RAG, let's break it down:
Retrieval: Before generating any new text, the RAG model retrieves information from a large dataset or database. This could be anything from a simple database of facts to an extensive library of books and articles.
Augmented: The retrieved information is then fed into a generative model to "augment" its knowledge. This means the generative model doesn't have to rely solely on what it has been trained on; it can access external data for a more informative output.
Generation: Finally, the model generates text using both its pre-trained knowledge and the newly retrieved information, leading to more accurate, detailed, and relevant responses.
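The three steps above can be wired together in a few lines. Here the retriever is a trivial keyword matcher and the generator is stubbed out, but the shape — retrieve, augment the prompt, then generate — is the essence of RAG:

```python
documents = [
    "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "Photosynthesis converts sunlight, water, and CO2 into glucose.",
]

def retrieve(query, docs):
    # 1. Retrieval: pick the document sharing the most words with the query.
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, context):
    # 2. Augmentation: prepend the retrieved facts to the user's question.
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

def generate(prompt):
    # 3. Generation: stand-in for a call to an LLM API (hypothetical).
    return f"<LLM response to {len(prompt)}-char prompt>"

query = "When was the Eiffel Tower completed?"
context = retrieve(query, documents)
print(generate(build_prompt(query, context)))
```

In practice the keyword matcher is replaced by an embedding-based retriever and the stub by a real model call, but the data flow is unchanged.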
A RAG model typically involves two major components:
Document Retriever: This is a neural network or an algorithm designed to sift through the database and retrieve the most relevant documents based on the query it receives.
Sequence-to-Sequence Model: After retrieval, a Seq2Seq model, often a transformer-based model like BERT or GPT, takes the retrieved documents and the initial query to generate a coherent and relevant piece of text.
Let's imagine we want to build a RAG model that, when given a prompt about a historical figure or event, can generate a detailed and accurate paragraph.
First, you need a database from which the model can retrieve information. For historical facts, this could be a curated dataset like Wikipedia articles, historical texts, or a database of historical records.
Before you can retrieve information, you need to index your data source to make it searchable. You can use software like Elasticsearch for efficient indexing and searching of text documents.
You then need a retrieval model that can take a query and find the most relevant documents in your database. This could be a simple TF-IDF (Term Frequency-Inverse Document Frequency) retriever or a more sophisticated neural network-based approach like a Dense Retriever that maps text to embeddings.
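As a rough sketch of how TF-IDF scoring ranks documents, here is a stdlib-only toy retriever. The corpus and query are made up for illustration; a real system would use Elasticsearch or a dense retriever instead.

```python
# A toy TF-IDF retriever built from scratch to illustrate the scoring idea;
# real systems would use Elasticsearch or an embedding-based dense retriever.
import math
from collections import Counter

docs = [
    "napoleon was crowned emperor of the french in 1804",
    "the french revolution began in 1789",
    "napoleon was exiled to elba and later to saint helena",
]

def tf_idf_scores(query: str, docs: list[str]) -> list[float]:
    tokenized = [d.split() for d in docs]
    n = len(docs)
    # idf: terms that appear in fewer documents carry more weight
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        # sum of (term frequency * inverse document frequency) over query terms
        score = sum((tf[t] / len(doc)) * idf.get(t, 0.0) for t in query.split())
        scores.append(score)
    return scores

query = "napoleon exiled"
scores = tf_idf_scores(query, docs)
best = docs[max(range(len(docs)), key=scores.__getitem__)]
print(best)  # the exile document scores highest
```

Note how "exiled" (rare in the corpus) contributes far more to the score than "napoleon" (common), which is exactly the intuition behind inverse document frequency.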
The retrieved documents are then fed into a generative AI model, such as GPT-4o (encoder-only models like BERT are better suited to the retrieval step than to generation). This model is responsible for synthesizing the information from the documents with the original query to generate coherent text.
If you're building a RAG system yourself, you'd typically fine-tune the generative model on a task-specific dataset. You'd need to:
Provide pairs of queries and the correct responses.
Allow the model to retrieve documents during training and learn which documents help it generate the best responses.
After initial training, you can refine your model through further iterations, improving the retriever or the generator based on the quality of outputs and user feedback.
Building such a RAG system would be a significant engineering effort, requiring expertise in machine learning, NLP, and software engineering.
RAG significantly enhances the relevance and factual accuracy of text generated by AI systems. This is due to its ability to access current databases, allowing the AI to provide information that is not only accurate but also reflects the latest updates.
Moreover, RAG reduces the need to retrain language models. By leveraging external databases for knowledge, a model does not have to encode every fact in its parameters or be retrained whenever information changes.
RAG also offers the capability to tailor responses more specifically, as the source of the retrieved data can be customized to suit the particular information requirement. This functionality signifies a leap forward in making AI interactions more precise and valuable for users seeking information.
The applications of RAG are vast and varied. Here are a few examples:
Customer Support: RAG can pull up customer data or FAQs to provide personalized and accurate support.
Content Creation: Journalists and writers can use RAG to automatically gather information on a topic and generate a draft article.
Educational Tools: RAG can be used to create tutoring systems that provide students with detailed explanations and up-to-date knowledge.
Despite its advantages, RAG also comes with its set of challenges:
Quality of Data: The retrieved information is only as good as the database it comes from. Inaccurate or biased data sources can lead to flawed outputs.
Latency: Retrieval from large databases can be time-consuming, leading to slower response times.
Complexity: Combining retrieval and generation systems requires sophisticated machinery and expertise, making it complex to implement.
Retrieval Augmented Generation is a significant step forward in the NLP field. By allowing machines to access a vast array of information and create something meaningful from it, RAG opens up a world of possibilities for AI applications.
Whether you're a developer looking to build smarter AI systems, a business aiming to improve customer experience, or just an AI enthusiast, understanding RAG is crucial for advancing in the dynamic field of artificial intelligence.
Parameters are the individual elements of a Large Language Model that are learned from the training data. Think of them as the synapses in a human brain—tiny connections that store learned information.
Each parameter in an LLM holds a tiny piece of information about the language patterns the model has seen during training. They are the fundamental elements that determine the behavior of the model when it generates text.
For example, imagine teaching a child what a cat is by showing them pictures of different cats. Each picture tweaks the child's understanding and definition of a cat. In LLMs, each training example tweaks the parameters to better understand and generate language.
Parameters are crucial because they allow the model to perform tasks such as translating text, writing articles, and even generating source code. When you ask an AI a question, the parameters work together to sift through the learned patterns and generate a response that makes sense based on the training it received.
For instance, if you ask an AI to write a poem, the parameters will determine how to structure the poem, what words to use, and how to create rhyme or rhythm, all based on the data it was trained on.
When we say "Large" in LLM, we're not kidding. The size of a language model is directly related to the number of parameters it has.
Take GPT-4, for example, with its reported 1.76 trillion parameters (OpenAI has not officially confirmed the figure). That's like 1.76 trillion different dials the model can tweak to get language just right. Each parameter holds a piece of information that can contribute to understanding a sentence's structure, the meaning of a word, or even the tone of a text.
Earlier models had significantly fewer parameters. GPT-1, for instance, had only 117 million parameters. With each new generation, the number of parameters has grown exponentially, leading to more sophisticated and nuanced language generation.
Training an LLM involves a process called "backpropagation" where the model makes predictions, checks how far off it is, and adjusts the parameters accordingly.
Let's say we're training an LLM to recognize the sentiment of a sentence. We show it the sentence "I love sunny days!" tagged as positive sentiment. The LLM predicts positive but isn't very confident. During backpropagation, it adjusts the parameters to increase the confidence for future similar sentences.
This process is repeated millions of times with millions of examples, gradually fine-tuning the parameters so that the model's predictions become more accurate over time.
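The predict-check-adjust cycle can be illustrated with a single parameter. This toy sketch uses one-feature logistic regression as a stand-in for a full model; the numbers and the sentiment setup are purely illustrative.

```python
# Toy illustration of one "predict, measure error, adjust parameters" cycle
# (the essence of training via backpropagation) on a single parameter.
# Real LLMs repeat this over billions of parameters and millions of examples.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# One training example: feature x (e.g., a count of positive words), label = positive.
x = 2.0
w = 0.1  # a single model parameter, initially near zero

def loss(w: float) -> float:
    p = sigmoid(w * x)   # model's confidence the sentiment is positive
    return -math.log(p)  # cross-entropy loss for a positive label

before = loss(w)
grad = (sigmoid(w * x) - 1.0) * x  # dLoss/dw for logistic regression, label 1
w -= 0.5 * grad                    # gradient-descent update (learning rate 0.5)
after = loss(w)
print(f"loss before: {before:.3f}, after: {after:.3f}")  # loss decreases
```

After the update the model is more confident on this example, which is exactly what "adjusting the parameters to increase the confidence" means in practice.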
The number of parameters is one of the key factors influencing an AI model's performance. However, more parameters can mean a model requires more computational power and data to train effectively, which can lead to increased costs and longer training times.
With great power comes great responsibility—and greater chances of making mistakes. More parameters can sometimes mean that the model starts seeing patterns where there aren't any, a phenomenon known as "overfitting" where the model performs well on training data but poorly on new, unseen data.
The future of LLMs might not just be about adding more parameters, but also about making better use of them. Innovations in how parameters are structured and how they learn are ongoing.
AI researchers are exploring ways to make LLMs more parameter-efficient, meaning they can achieve the same or better performance with fewer parameters. Techniques like "parameter sharing" and "sparse activation" are part of this cutting-edge research.
Parameters in LLMs are the core elements that allow these models to understand and generate human-like text. While the sheer number of parameters can be overwhelming, it's their intricate training and fine-tuning that empower AI to interact with us in increasingly complex ways.
As AI continues to evolve, the focus is shifting from simply ramping up parameters to refining how they're used, ensuring that the future of AI is not just smarter but also more efficient and accessible.
A prompt, in the simplest terms, is the initial input or instruction given to an AI model to elicit a response or generate content. It's the human touchpoint for machine intelligence, a cue that sets the AI's gears in motion.
Prompts are more than mere commands; they are the seeds from which vast trees of potential conversations and content grow. Think of them as the opening line of a story, the question in a quiz, or the problem statement in a mathematical conundrum – the prompt is the genesis of the AI's creative or analytical output.
For example, when you ask GPT-4o "What's the best way to learn a new language?" you've given it a prompt. The AI then processes this and generates advice based on its training data.
Prompt engineering is a discipline in itself, evolving as an art and science within AI communities. Crafting effective prompts is akin to programming without code; it's about phrasing and framing your request to the AI in a way that maximizes the quality and precision of its output.
Good prompt engineering can involve:
Being specific: Clearly defining what you want the AI to do.
Setting the tone: Informing the AI of the style or mood of the content you expect.
Contextualizing: Providing background information to guide the AI's responses.
Example: Instead of saying, "Tell me about France," a well-engineered prompt would be, "Write a short travel guide for first-time visitors to France, highlighting top attractions, cultural etiquette, and local cuisine."
Generative AI, which includes everything from text to image generation models, relies heavily on prompts to determine the direction of content creation. Prompts for generative AI act as a blueprint from which the model can conjure up entirely new pieces of content – whether that's an article, a poem, a piece of art, or a musical composition.
Prompts tell the AI not just what to create, but can also suggest how to create it, influencing creativity, tone, structure, and detail. As generative AI grows more sophisticated, the potential for complex and nuanced prompts increases, allowing for more customized and high-fidelity outputs.
Example: Prompting an AI with "Create a poem in the style of Edgar Allan Poe about the sea" instructs the model to adopt a specific literary voice and thematic focus.
Crafting the perfect prompt isn't always straightforward. One of the challenges lies in the AI's interpretation of the prompt. Ambiguity can lead to unexpected or unwanted results, while overly restrictive prompts may stifle the AI's creative capabilities.
Moreover, ethical considerations arise when prompts are designed to elicit biased or harmful content. The AI's response is contingent upon its training data, and if that data includes prejudiced or false information, the output may reflect those biases. Responsible prompt engineering thus also involves an awareness of potential harm and the implementation of safeguards against it.
Example: To avoid bias in AI-generated news summaries, prompts should be engineered to require neutrality and fact-checking.
Prompts are the simple commands or questions we use to kickstart a conversation with AI, guiding it to understand and generate the responses or content we seek. They're like the steering wheel for the AI's capabilities, crucial for navigating the vast landscape of information and creativity the AI models offer.
As we continue to interact with and shape AI technology, mastering the use of prompts becomes our way of ensuring that the conversation flows in the right direction. Simply put, the better we become at asking, the better AI gets at answering.
So, the next time you interact with a language model, remember that the quality of the output is often a direct reflection of your input - your prompt is the key.
Prompt Engineering is the art and science of crafting inputs (prompts) that guide AI to produce the desired outputs. It's about understanding how to communicate with an AI in a way that leverages its capabilities to the fullest. Think of it as giving directions to a supremely intelligent genie without any misunderstandings.
Generative AI models, like OpenAI's GPT series, are revolutionizing industries from content creation to coding. But their utility hinges on the quality of the prompts they receive. A well-engineered prompt can yield rich, accurate, and nuanced responses, while a poor one can lead to irrelevant or even nonsensical answers.
AI models are literal. If you ask for an article, you'll get an article. If you ask for a poem about dogs in space, you’ll get exactly that. The specificity of your request can significantly alter the output.
Example:
Vague Prompt: Write about health.
Engineered Prompt: Write a comprehensive guide on adopting a Mediterranean diet for improving heart health, tailored for beginners.
Providing context helps the AI understand the nuance of the request. This could include tone, purpose, or background information.
Example:
Without Context: Explain quantum computing.
With Context: Explain quantum computing in simple terms for a blog aimed at high school students interested in physics.
Closed prompts lead to specific answers, while open prompts allow for more creativity. Depending on your goal, you may need one over the other.
Example:
Closed Prompt: What is the capital of France?
Open Prompt: Describe a day in the life of a Parisian.
Prompt engineering is not a "get it right the first time" kind of task. It involves iterating prompts based on the responses received. Tweaking, refining, and even overhauling prompts based on output can lead to more accurate and relevant results.
A significant part of prompt engineering is experimentation. By testing different prompts and studying the outputs, engineers learn the nuances of the AI's language understanding and generation capabilities.
Keywords are the bread and butter of prompt engineering. Identifying the right keywords can steer the AI in the desired direction.
Example:
Without Keyword Emphasis: Write about the internet.
With Keyword Emphasis: Write an article focused on the evolution of internet privacy policies.
These prompts mimic a human thought process, providing a step-by-step explanation that leads to an answer or conclusion. This can be especially useful for complex problem-solving.
Example:
Chain of Thought Prompt: To calculate the gravitational force on an apple on Earth, first, we determine the mass of the apple and the distance from the center of the Earth...
In zero-shot learning, the AI is given a task without previous examples. In few-shot learning, it’s provided with a few examples to guide the response. Both techniques can be leveraged in prompt engineering for better results.
Example:
Zero-Shot Prompt: What are five innovative ways to use drones in agriculture?
Few-Shot Prompt: Here are two ways to use drones in agriculture: 1) Crop monitoring, 2) Automated planting. List three more innovative ways.
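As a rough illustration, the two prompt styles can be assembled as plain strings. The task and examples below mirror the ones above, and the actual call to a model is omitted.

```python
# Sketch of building zero-shot vs. few-shot prompts as plain strings;
# the task and examples are illustrative, and the model call is omitted.

task = "List three more innovative ways to use drones in agriculture."

# Zero-shot: no examples, the model must rely on pre-trained knowledge alone.
zero_shot = task

# Few-shot: a couple of worked examples guide the format and direction.
examples = [
    "1) Crop monitoring",
    "2) Automated planting",
]
few_shot = (
    "Here are two ways to use drones in agriculture:\n"
    + "\n".join(examples)
    + "\n" + task
)

print(few_shot)
```

In practice the few-shot variant tends to produce answers that match the numbering and style of the examples, which is the main reason to pay the extra prompt length.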
Bias and Sensitivity: Prompt engineers must be mindful of inherent biases and ethical considerations. This includes avoiding prompts that could lead to harmful outputs or perpetuate stereotypes.
Realistic Expectations: LLMs and Generative AI are powerful but not omnipotent. Understanding their limitations is crucial in setting realistic expectations for what prompt engineering can achieve.
Data Privacy and Security: As prompts often contain information that may be sensitive, engineers must consider data privacy and security in their designs.
Prompt engineering is more than a technical skill—it’s a new form of linguistic artistry. As we continue to integrate AI into our daily lives, becoming adept at communicating with these systems will become as essential as coding is today.
Whether you’re a writer, a developer, or just an AI enthusiast, mastering the craft of prompt engineering will place you at the forefront of this exciting conversational frontier. So go ahead, start crafting those prompts, and unlock the full potential of your AI companions.
Large Language Models (LLMs) are advanced AI algorithms trained to understand, generate, and sometimes translate human language. They are called “large” for a good reason: they consist of millions or even billions of parameters, which are the fundamental data points the model uses to make predictions and decisions.
Imagine teaching a child language by reading every book you can find. That’s essentially what LLMs go through. They are fed vast amounts of text data and use statistical methods to find patterns and learn from context. Through a process known as machine learning, these models become adept at predicting the next word in a sentence, answering questions, summarizing texts, and more.
Data, Data, and More Data: LLMs are the heavyweight champions of the data world. They are trained on diverse datasets comprising encyclopedias, books, articles, and websites to learn a wide range of language patterns and concepts.
Supervised and Unsupervised Learning: Some LLMs learn through supervised learning, meaning they learn from datasets that have been labeled or corrected by humans. Others use unsupervised learning, meaning they infer patterns and rules from raw data without human annotation.
Fine-Tuning: After the initial training, LLMs can be fine-tuned for specific tasks, like legal document analysis or medical diagnosis, by training them further on specialized data.
Writing Assistance: Grammarly or the autocomplete in your email are powered by LLMs. They predict what you’re trying to say and help you say it better.
Translation Services: Services like Google Translate use LLMs to convert text from one language to another, learning from vast amounts of bilingual text to improve their accuracy.
Neural Networks: The core technology behind LLMs is artificial neural networks, particularly a type called Transformer models. These mimic some aspects of human brain function and are particularly good at handling sequential data like text.
Training Hardware: Training LLMs requires significant computational power, often involving hundreds of GPUs or specialized TPUs that work in tandem for weeks or months.
Continuous Learning: LLMs don't have to stop learning after their initial training. They can be updated with new data through further training or fine-tuning, improving their performance over time.
The GPT series by OpenAI has been a trailblazer in the field of LLMs. Each version of the Generative Pre-trained Transformer has been more powerful than the last, with GPT-4o as a staggering leap forward. Estimated to have hundreds of billions of parameters (OpenAI has not disclosed the exact figure), this model is not just about size; it's about the nuanced understanding and generation of human-like text. GPT-4o can craft essays that are often indistinguishable from those written by humans, compose complex poetry, and even generate functional computer code across several languages, a testament to its versatility.
GPT-4o's influence extends across industries. For instance, it can simulate conversations, create educational content, and even assist programmers by converting natural language descriptions into code snippets. Its advanced capabilities are being integrated into various software applications and tools, enhancing productivity and sparking creative new approaches to problem-solving.
BERT stands for Bidirectional Encoder Representations from Transformers. It's a complicated name, but really, it's just Google's method for making search engines smarter. Unlike earlier models, BERT examines the context of a word in both directions (left and right of the word) within a sentence, leading to a far more nuanced interpretation of the query. This ability means that BERT can grasp the full intent behind your searches, so the results you get are closer to what you actually need.
Since its integration into Google's search engine, BERT has significantly improved the relevance of results for millions of queries every day. For users, this often translates to finding answers more quickly and accurately, sometimes in subtle ways that may go unnoticed but are nonetheless powerful. Beyond search, BERT is also revolutionizing natural language processing tasks such as language translation, question answering, and text summarization.
In summary, both the GPT series and BERT are not just steps but giant leaps forward in our ability to interface with machines in a more natural, intuitive way. They are redefining what's possible in the realm of AI and continuing to pave the way for smarter, more responsive technology.
Bias in AI: Since LLMs learn from existing data, they can perpetuate and amplify biases present in that data. It’s an ongoing challenge to ensure that LLMs are fair and unbiased.
Privacy: Training LLMs on personal data raises privacy concerns. Ensuring data used is anonymized and secure is paramount.
Environmental Impact: The energy consumption of training and running LLMs is significant. Researchers are working on more efficient models to mitigate this.
Evolving Intelligence: LLMs are getting more sophisticated, with future models expected to handle more complex tasks and exhibit greater understanding of human language.
Interdisciplinary Integration: LLMs are beginning to intersect with other fields, such as bioinformatics and climatology, providing unique insights and accelerating research.
Democratization of AI: As LLMs become more user-friendly, their use is expanding beyond tech companies to schools, small businesses, and individual creators.
Large Language Models are transforming how we interact with machines, making them more human-like than ever. They're a blend of colossal data, computing power, and intelligent algorithms, pushing the boundaries of what machines can understand and accomplish. As they evolve, LLMs will continue to shape our digital landscape in unpredictable and exciting ways. Just remember, the next time you type out a sentence and your phone suggests the end of it, there’s a little bit of LLM magic at work.
Generative AI has been making waves across various sectors, from art to technology, leaving many people scratching their heads and wondering: WTF is Generative AI? In this guide, we'll unpack the buzzword and provide you with a clear understanding of what Generative AI is, how it works, and why it's becoming increasingly important in our digital world.
At its core, Generative AI refers to the subset of artificial intelligence where the systems are designed to generate new content. It’s like giving an artist a canvas, but the artist is an algorithm that can create images, compose music, write text, generate programming source code, and much more.
Generative AI systems are typically powered by machine learning models that have been trained on vast datasets. They learn patterns, structures, and features from this data and use this understanding to generate new, original creations that are often indistinguishable from content created by humans.
Generative AI works using advanced machine learning models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
These models involve two key components:
Generative Models: These are the AI algorithms that create the new data. For example, a generative model might create new images of animals it has never seen before by learning from a dataset of animal pictures.
Discriminative Models: In the case of GANs, the discriminative model evaluates the data generated by the generative model. This is like an art critic who tells the artist if their work is believable or not.
The two models work together in a sort of AI tug-of-war, with the generative model trying to produce better and better outputs and the discriminative model trying to get better at telling the difference between generated and real data.
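The tug-of-war above can be sketched numerically. In this toy example (everything here is illustrative, not how real GANs are built) a one-parameter generator tries to imitate "real data" centered at 3.0, while a logistic-regression discriminator tries to tell real from fake; the two alternate updates just as GANs do.

```python
# A minimal numeric "tug-of-war": a one-parameter generator vs. a
# logistic-regression discriminator. Real GANs use deep networks and
# stochastic data, but the alternating adversarial updates are the same idea.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

real = 3.0       # the "real data" the generator should learn to imitate
g = 0.0          # generator's single parameter: the value it outputs
a, b = 0.0, 0.0  # discriminator D(x) = sigmoid(a*x + b)
lr = 0.1

for _ in range(400):
    # Discriminator step: score real high, fake low (cross-entropy gradients).
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * g + b)
    a -= lr * ((d_real - 1.0) * real + d_fake * g)
    b -= lr * ((d_real - 1.0) + d_fake)
    # Generator step: move g so the discriminator scores it as "real"
    # (non-saturating generator loss, -log D(g)).
    d_fake = sigmoid(a * g + b)
    g -= lr * (-(1.0 - d_fake) * a)

print(f"generator output after training: {g:.2f}")  # drifts toward 3.0
```

The generator's output drifts toward the real data precisely because the discriminator keeps re-learning the boundary between real and fake, which is the adversarial dynamic described above.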
Generative AI has a plethora of applications, here are a few:
Art: Apps like DeepArt and platforms like DALL-E generate original visuals and art based on user prompts.
Music: AI like OpenAI's Jukebox can generate music, complete with lyrics and melody, in various styles and genres.
Text: Tools like ChatGPT can write articles, poetry, and even code based on text prompts. Bito also falls in this category as an AI Coding Assistant.
Design: Generative AI can suggest design layouts for everything from websites to interior decorating.
Deepfakes: This controversial use involves generating realistic video and audio recordings that can mimic real people.
Efficiency: Generative AI can produce content much faster than humans.
Creativity: It has the potential to create novel combinations that might not occur to human creators.
Personalization: AI can tailor content to individual tastes and preferences.
Ethics: Generative AI raises questions about authenticity and the ownership of AI-generated content.
Quality Control: Ensuring consistent quality of AI-generated content can be challenging.
Misuse: There’s a risk of its use in creating misleading information or deepfakes.
The future of Generative AI is both exciting and uncertain. It could revolutionize how we create and consume content. For instance, imagine personalized movies generated in real-time to match your mood, or educational content adapted perfectly to each student's learning style.
As technology advances, so too will the capabilities of Generative AI. It's not just about the ‘WTF’ factor; it's about recognizing the potential and preparing for the transformation it will bring about.
Generative AI is at the frontier of innovation, standing at the crossroads of creativity and computation. It is transforming the conventional processes of creation across various fields and presenting us with a future where the line between human and machine-made is increasingly blurred. While it brings with it a host of benefits, we must tread carefully to navigate the ethical considerations and harness its power for the greater good.
As with any transformative technology, the question isn’t just 'WTF is Generative AI?' but also 'How do we responsibly integrate it into our society?' That is the real challenge and opportunity ahead.
🤯 Sick of typing out long prompts every time? 😩 Bito's got your back! Now, create custom prompt templates for all your frequently used prompts and save yourself some stress.
With "Create Prompt Template," you can create and save custom prompt templates for use in your IDE. By defining a custom template with a template name and prompt, Bito can execute the prompt as is on the selected code. With this feature, you can save time and streamline your workflow by quickly executing frequently used prompts without inputting them manually each time.
The custom prompt templates feature and standard prompt templates are located below the chatbox.
Here is a quick overview of the Custom Prompt Templates in Bito
Open Bito Plugin in your IDE
Below the chatbox, click on "New Template".
Enter the "Template Name" and "Prompt" for your custom template. You can use {{%code%}} as a macro to insert the selected code in your prompt. If this macro is not used, Bito will insert the selected code at the end of your prompt. Next, select the "Output Format". You currently have two options:
Display in Bito panel (Default)
Output in diff view
Click on "Create Template" to save your new custom template. All custom templates will appear below the chatbox alongside standard templates. You can create up to four custom templates.
You can edit or remove your custom templates anytime by clicking the three dots over the template you want to change. Note that you cannot edit or remove the standard templates provided by Bito.
Select the code on which you want to execute the prompt.
Run the Custom Template by clicking it in the Bito Templates panel or from the IDE context menu.
Bito starts generating output
Here’s an example of how to execute prompts using your custom templates.
Let’s say you want to create a custom template to add a comment describing the logic behind the code. Here's how you can do it:
Below the chatbox, click on "New template".
Enter a name for your custom template, e.g. "Add Comment"
In the "Prompt" field, enter the following: "Please add a comment describing the logic behind the code. " and then click on "Create Template" to save your new custom template
Now, select the code you want to comment on and click the "Add Comment" template.
Bito adds the selected code at the end of the prompt and executes it via Bito AI.
Bring your team together
In Bito, team members collaborate by joining a workspace. In most cases, every organization would create one Workspace. Anyone can install Bito, create a workspace for their team, and invite their coworkers to the Workspace.
You can use Bito in single-player mode for all use cases. However, it works best when your coworkers join the Workspace to collaborate. There are three ways to invite your coworkers.
Option 1 - Allow your work e-mail domain for the Workspace. This setting is turned on by default, and all users with the same e-mail domain as yours will automatically see the Workspace under "Pending Invitations" when signing up in Bito. You can manage this setting after you create the Workspace through the "Settings" page in your Bito account.
Option 2 - Invite your coworkers via e-mail when you create your Workspace or later from your workspace setting.
Option 3 - Share a web link specific to your Workspace via the channel of your choice: e-mail, Slack, or Teams. The link is automatically created and shown when creating a workspace or on the workspace settings page.
If you are the Owner or Admin of the Workspace, you can take the following actions:
Deactivate any user to remove them from the given Workspace. Once the user is deactivated, they can't access the workspace. They can request to join the Workspace, which requires approval from the admin or owner.
Activate a previously deactivated user; any admin or owner can do this.
A Bito user can check "Remember Me" to stay logged in automatically. The Admin/Owner can force a user to re-authenticate if needed for security.
Bito has primarily three user types - Owners, Admins, and Users - as defined on the following page: Managing User Access Levels. Admins/Owners can change a user's access level.
The following Loom demonstrates managing the workspace and its members.
Learn How to Create, Join, or Change Workspace
A workspace is a dedicated environment or space where teams can collaborate and use Bito services. After logging into your Bito account, you can either create a new workspace or join an existing one you've been invited to.
The link to create a new workspace will appear at the bottom of the sign-up flow screen. Click on "Create Workspace" to get started.
Now, enter the name of the workspace. You can also choose to make this workspace discoverable by users whose email domain matches yours. Finally, click the "Next" button to proceed with creating the new workspace.
Once you complete the Workspace setup, Bito will be ready to use.
If your email domain is allowed for the Workspace, or your coworker invited you, you will see the Workspace listed during the sign-up flow under the "Workspaces Available to Join" list.
Simply click the "Join" button next to the workspace you want to join. Joining your company or team Workspace takes less than a minute.
Alternatively, you can join the Workspace through the Workspace link shared by your coworker.
Follow the below steps to switch to a different workspace:
First log out of your Bito account.
Then, log back in and choose the workspace you want from the available list.
In the IDE extension, place your mouse cursor over the workspace icon. The workspace name will show up as a tooltip.
Learn How to Pay and Manage Your Payment Methods
Credit and Debit Cards (Visa, Mastercard, American Express, Diners, Discover, JCB, and China Union Pay)
Google Pay, Apple Pay, Alipay, Cash App Pay
Bank Accounts in the US and many other countries
Payment methods for Bito are managed securely by Stripe. You can add or delete payment methods if you want.
Click on the "Edit payment methods" button.
On this page, you will see your currently active plan as well as the existing payment methods attached to your account.
Click on the "Add payment method" button.
A form will open through which you can add any of our supported payment methods mentioned above.
Fill in the form and press the "Add" button to add a new payment method.
Click on the "Edit payment methods" button.
On this page, you will see your currently active plan as well as the existing payment methods attached to your account.
Click on the three dots button in front of the payment method you want to delete.
Now click "Delete" from the popup menu.
A warning popup box will open, asking you to confirm whether you really want to delete the payment method.
Simply click the "Delete payment method" button on this warning popup to remove this payment method from your account.
As you can see in the screenshot below, the "Visa" payment method has been removed successfully.
Learn About Subscription Plans, Payment Methods, and Refunds.
Communicate in Your Preferred Language
Bito users come from all over the world, so Bito makes it super easy to set the AI output language. Bito will automatically generate text output in the language set in your user profile, regardless of the language of your prompt input.
Supported Languages:
Bito offers 20+ languages for you to choose from. Here is the list of currently supported languages:
English (Default Language)
Bulgarian (български)
Chinese (Simplified) (简体中文)
Chinese (Traditional) (繁體中文)
Czech (čeština)
French (français)
German (Deutsch)
Hungarian (magyar)
Italian (italiano)
Japanese (日本語)
Korean (한국어)
Polish (polski)
Portuguese (português)
Russian (русский)
Spanish (español)
Turkish (Türkçe)
Vietnamese (Tiếng Việt)
Dutch (Nederlands)
Hebrew (עִברִית)
Arabic (عربي)
Malay (Melayu)
Hindi (हिंदी)
Using the Language Support Feature
Once you have selected your preferred language, Bito will communicate with you in your selected language. Take full advantage of this feature by:
Asking questions or giving commands to Bito in your selected language
Receiving responses and outputs from Bito in the language you've selected
Note: All responses from Bito will appear in the selected language, regardless of the input language
Enjoy the convenience of conversing with Bito in your native language and take your coding experience to a new level!
An alternative to standard email and OTP authentication
Follow these steps to create a Bito Access Key:
Click the Create new key button.
Enter a name for your Bito Access Key to make it easily identifiable.
Click Create Bito Access Key to generate your key.
Copy the key immediately, as it will not be displayed again after you close the popup.
To delete an existing Bito Access Key, follow these steps:
Click the trash icon next to the Bito Access Key you want to delete.
A confirmation popup will appear asking if you are sure you want to delete the key. Click Yes to proceed.
Guide to Billing and Paid Plans
Bito offers two plans:
Free Plan
10X Developer Plan
The 10X Developer Plan includes 600 Advanced AI Requests per month; additional requests (or overages) are charged at US $0.03 per request. For example, if you used 650 Advanced AI Requests in a month, 600 are already included in your 10X Developer Plan, and the additional 50 at US $0.03 per request come to US $1.50, for a total of US $16.50.
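The overage arithmetic above can be sketched as a small calculation. This is a hypothetical helper for illustration; the names and constants mirror the documentation, not any Bito API:

```python
# Illustrative overage math for the 10X Developer Plan
# (hypothetical helper, not Bito's actual billing code).

PLAN_PRICE = 15.00        # US$ per user per month
INCLUDED_REQUESTS = 600   # Advanced AI Requests included each month
OVERAGE_RATE = 0.03       # US$ per additional request

def monthly_charge(requests_used: int) -> float:
    """Total monthly charge for one user, including any overage."""
    extra = max(0, requests_used - INCLUDED_REQUESTS)
    return round(PLAN_PRICE + extra * OVERAGE_RATE, 2)

print(monthly_charge(650))  # 600 included + 50 extra at $0.03 -> 16.5
```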
A submitter is any unique user who creates a pull request (PR) in repositories where the AI Code Review Agent is configured. Submitters can include both workspace members and external contributors.
Workspaces are billed based on the greater of:
The total number of active users in the workspace.
The total number of unique PR submitters (including external contributors).
For example, if a workspace has 4 active users, but only 2 unique submitters (1 active user and 1 external contributor), the workspace will be billed for 4 users. If the number of submitters exceeds the active users, billing adjusts to reflect the higher number of submitters.
Bito automatically checks each week to determine if the number of submitters has surpassed the active users. If it has, billing seats for the workspace are updated to match the total submitters.
Once the number of billing seats increases (based on the submitters), it remains at that level and does not decrease in the future.
If additional active users join the workspace and their number surpasses the submitters, billing adjusts to reflect the increased number of active users.
When billing seats increase mid-month due to additional submitters or active users, the charges for those seats are prorated. For instance, if a new seat is added halfway through the month, the charge will be 50% of the monthly rate ($7.50 instead of $15).
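The seat rules described above (bill for the greater of active users and unique submitters, never decrease a raised seat count, and prorate mid-month additions) can be sketched as follows. This is an illustrative model, not Bito's actual billing code:

```python
# Sketch of the seat-count rules described above
# (illustrative only, not Bito's billing implementation).

def billed_seats(active_users: int, unique_submitters: int,
                 current_seats: int = 0) -> int:
    """Seats are the greater of active users and unique PR submitters,
    and never decrease once raised (the ratchet described above)."""
    return max(active_users, unique_submitters, current_seats)

def prorated_seat_charge(monthly_rate: float,
                         fraction_of_month_left: float) -> float:
    """A seat added mid-month is charged for the remaining fraction."""
    return monthly_rate * fraction_of_month_left

# 4 active users, 2 unique submitters -> billed for 4 seats
seats = billed_seats(active_users=4, unique_submitters=2)
# A seat added halfway through the month costs $7.50 instead of $15
charge = prorated_seat_charge(15.00, 0.5)
```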
Bito uses Stripe to handle all payment processing and securely store your credit card/payment information. Bito itself does not store your credit card/payment information.
Bito bills at the Workspace level. All users within a given Workspace will be billed on the same plan. You cannot have some users on the 10X Developer Plan and some users on the Free Plan, within the same workspace.
Within each Workspace, Bito bills at the seat (sometimes referred to as “user”) level from the first of the month to the last day of the month. So, if you have 12 users in your Workspace (let’s call it the “MyCompany” workspace), when an Admin signs up for the 10X Developer Plan, the “MyCompany” workspace will be billed for all 12 users. Bito’s 10X Developer Plan costs $15 per user per month. So, you will pay $180 per month for 12 users, and that will be charged on the 1st of the month for the next month. To give an example, on September 1, you would be charged $180 for the month of September. Any overages you had in terms of accessing Advanced AI models for the month of August, would also be charged on September 1.
In your first month, you will be billed for the current month on a prorated basis. For example, if you signed up in the middle of March, you would be billed $7.50 per seat (half of the $15 full-month fee).
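The first-month proration above can be sketched by the days remaining in the month. A minimal sketch, assuming proration by calendar day (the exact proration granularity Bito uses isn't specified here):

```python
from datetime import date
import calendar

# Illustrative first-month proration, assuming day-level granularity
# (hypothetical helper, not Bito's actual billing code).

SEAT_PRICE = 15.00  # US$ per user per month

def first_month_bill(signup: date, users: int) -> float:
    """Prorate the first month by the days remaining, counting signup day."""
    days_in_month = calendar.monthrange(signup.year, signup.month)[1]
    days_left = days_in_month - signup.day + 1
    return SEAT_PRICE * users * days_left / days_in_month

# 12 users signing up on March 1 pay the full $180 for March
full = first_month_bill(date(2025, 3, 1), 12)
# A single seat added mid-March costs roughly half of $15
half = first_month_bill(date(2025, 3, 16), 1)
```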
Manage your Bito workspace, members, and personal settings
Chatbots: If you've ever chatted with one and noticed that it sounds almost like a real person, that's because it is powered by several state-of-the-art Large Language Models.
You can always switch this feature off later by visiting the page.
We use Stripe as our trusted payment handler to ensure seamless and secure transactions. We offer a variety of convenient payment methods to cater to your preferences.
Go to the page.
Go to the page.
Bito allows setting this language when creating an account, as described in .
You can also set or change this setting anytime by going to in Bito Cloud. Here is a quick video walkthrough.
Bito Access Key allows for an alternate authentication mechanism in contrast to the standard Email & OTP based authentication. Access Keys can be created via the and utilized within the . This guide outlines the process of creating or deleting an Access Key.
Log in to your account at:
Navigate to Settings > Advanced settings by .
Log in to your account at:
Navigate to Settings > Advanced settings by .
Read more details on page or watch the video below to learn how Billing and Paid Plans work in Bito.
In summary, the Free Plan is available at no cost and provides a powerful set of capabilities for individual and hobbyist developers. It includes up to using and , subject to our . Developers can work with 50+ programming languages, communicate in 20+ spoken languages (including English, Chinese, and Spanish), work with , and access the for AI-powered assistance.
The 10X Developer Plan is $15 per user per month (billed monthly starting on the 1st of the month) and includes all the features of the Free Plan, and also provides access to such as OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, and other best-in-class AI models. It also provides much longer chat context, files, and discussions (up to 110 single-spaced pages), and .
In addition to the above details, Bito offers a simplified pricing model for its , designed to provide flexibility and fairness in billing based on actual usage. The AI Code Review Agent is billed at a flat rate of $15 per pull request (PR) submitter per month.
All billing and plans functionality is available at by logging in with your email. Additionally, from Bito's IDE plug-in, click the hamburger menu icon in the top-right corner (denoted by three horizontal lines) and select Account Settings to be redirected to Bito's website.
From there you can go to the page to access the billing functionality.
When you , you . Primary Owners, Owners, and Admins can change the billing plan for a workspace. You can see your workspace by going to or page. You can see your Role by going to page. You can change your workspace by logging out; when you log in, you will choose which workspace you want to be a part of. It’s similar to Slack, where you can access different workspaces.
For any additional questions, please review the documentation we have. In addition, please feel free to contact Bito at with any questions.
Guide to billing and paid plans
Learn how to pay and manage your payment methods
Upgrade or downgrade your subscription anytime!
Learn how payments work when you invite a coworker to the workspace
Manage your spending to avoid unexpected expenses
Access your payment records
Manage your billing address and other details
How Stripe protects your sensitive information
We currently do not give refunds
We currently do not offer discounts
Learn how to create, join, or change workspace
Invite coworkers and manage their workspace membership
Personalize Bito to speak your language
Learn about different access levels and permissions
An alternative to standard email and OTP authentication
Understanding User Roles in Bito Workspaces
A Bito Workspace represents your organization. It is the highest level of organization in Bito.
In a Bito Workspace, different user types play distinct roles in managing and collaborating within the workspace. Here is an overview of the three user types: Owner, Admin, and User. Understanding these roles will help you effectively manage your workspace and optimize team collaboration.
Owner: The Owner holds the highest level of authority within the workspace
Admin: Admins have a significant role in managing the workspace alongside the Owner
User: Users have access to the workspace with limited administrative privileges
Here's a table summarizing the roles of the different user types in a Bito Workspace:
| Permission | Owner | Admin | User |
| --- | --- | --- | --- |
| Make or remove another Owner | Yes | No | No |
| Promote another user to Admin, or remove an Admin | Yes | Yes | No |
| Manage subscriptions and billing | Yes | Yes | No |
| Manage overage limits | Yes | Yes | No |
| Add member by email | Yes | Yes | No |
| Access and share the join-workspace link | Yes | Yes | Yes |
| Deactivate member | Yes | Yes | No |
| Edit workspace settings (name, discovery) | Yes | Yes | No |
| Approve member (when joining from the "Invite Workspace" web link) | Yes | Yes | No |
| Force reauthentication | Yes | Yes | No |
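The role permissions above can be expressed as a simple lookup. This is a hypothetical helper for illustration; the action names are invented here and are not part of Bito:

```python
# Permission lookup mirroring the role summary above
# (hypothetical helper, not part of Bito's API).

PERMISSIONS = {
    "make_or_remove_owner":      {"Owner"},
    "promote_or_remove_admin":   {"Owner", "Admin"},
    "manage_subs_and_billing":   {"Owner", "Admin"},
    "manage_overage_limits":     {"Owner", "Admin"},
    "add_member_by_email":       {"Owner", "Admin"},
    "share_join_workspace_link": {"Owner", "Admin", "User"},
    "deactivate_member":         {"Owner", "Admin"},
    "edit_workspace_settings":   {"Owner", "Admin"},
    "approve_member":            {"Owner", "Admin"},
    "force_reauthentication":    {"Owner", "Admin"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return role in PERMISSIONS.get(action, set())

# Every role can share the join link; only Owners manage other Owners
assert can("User", "share_join_workspace_link")
assert not can("Admin", "make_or_remove_owner")
```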
Upgrade or Downgrade Your Subscription Anytime!
Follow the steps below to upgrade from the Free Plan to the 10X Developer Plan:
In the Current plan section, click the "Change plan" button.
On this page, you will see your current plan (i.e., Free Plan) as well as the 10X Developer Plan.
Click the "Upgrade" button under the 10X Developer Plan. This action will redirect you to the secure Stripe Checkout page.
Select your preferred payment method, fill in the form, and click on the “Subscribe” button.
After completing the transaction, you will be redirected to Bito where a confirmation message will be shown.
As you can see in the screenshot below, your workspace plan has been successfully upgraded from the Free Plan to the 10X Developer Plan.
That's it! You can now start using the features of 10X Developer Plan.
You can cancel your paid plan at any time and move back to the Free Plan. When you cancel, you retain the use of your paid plan until the end of the billing cycle, since you have already paid for it. At this time, we are unable to offer refunds. Also, any additional usage beyond the allocated limit of your paid plan will be charged in the next billing cycle.
Follow the steps below to downgrade from the 10X Developer Plan to the Free Plan:
In the Current plan section, click the "Cancel plan" button.
A popup will appear. It has two steps.
In the first step, you have to select a reason for canceling your plan. It will help us improve Bito. After selecting a reason, click “Continue to cancel”.
The second step provides you with some information about what to expect after your plan is canceled. If all looks good press the “I want to cancel” button to cancel your subscription.
To renew your Workspace plan, follow these steps:
In the “Current plan” section, click the "Renew plan" button. You will be redirected to a secure page powered by Stripe.
If you had previously cancelled your subscription plan, you will see the details of your cancelled plan on this page, along with a “Renew plan” button.
Click on the “Renew plan” button to proceed with the renewal process.
On this page, you will find all the details of your previously cancelled plan before you renew it.
Click on the “Renew plan” button to complete the renewal of your subscription or press the “Go back” button to return to the previous screen.
Go to the page.
On the Stripe Checkout page, you can see the price you have to pay, as well as the form where you can enter your payment details. The price you will see for the 10X Developer Plan will depend on the number of days and time left in the current month as well as the total number of members in your workspace. Here's a list of we accept.
Go to the page.
Go to the page.