Install AI Architect (self-hosted)
Deploy AI Architect in your own infrastructure for complete data control and enhanced security
This guide walks you through installing Bito's AI Architect as a self-hosted service in your own infrastructure. Self-hosting gives you complete control over where your code knowledge graph resides and how AI Architect accesses your repositories.
Why choose self-hosted deployment? Organizations with strict data governance requirements, air-gapped environments, or specific compliance needs benefit from running AI Architect within their own infrastructure. Your codebase analysis and knowledge graph stay entirely within your control, while still providing the same powerful context-aware capabilities to your AI coding tools.
What you'll accomplish: By the end of this guide, you'll have AI Architect running on your infrastructure, connected to your Git repositories, and ready to integrate with AI coding tools like Claude Code, Cursor, Windsurf, and GitHub Copilot through the Model Context Protocol (MCP).
Deployment options
AI Architect can be deployed in three different configurations depending on your team size, infrastructure, and security requirements:
a. Personal use (with your LLM keys)
Set up AI Architect on your local machine for individual development work. You'll provide your own LLM API keys for indexing, giving you complete control over the AI models used and associated costs.
Best for: Individual developers who want codebase understanding on their personal machine.
b. Team / shared access (with your LLM keys)
Deploy AI Architect on a shared server within your infrastructure, allowing multiple team members to connect their AI coding tools to the same MCP server. Each team member can configure AI Architect with their preferred AI coding agent while sharing the same indexed codebase knowledge graph.
Best for: Development teams that want to share codebase intelligence across the team while managing their own LLM costs.
c. Enterprise deployment (requires Bito Enterprise Plan)
Deploy AI Architect on your infrastructure (local machine or shared server) with indexing managed by Bito. Instead of providing your own LLM keys, Bito handles the repository indexing process, simplifying setup and cost management.
Best for: Organizations that prefer managed indexing without handling individual LLM API keys and costs.
Prerequisites
a. Required accounts and tokens
Bito API Key (aka Bito Access Key)
You'll need a Bito account and a Bito Access Key to authenticate AI Architect. Sign up for a Bito account at https://alpha.bito.ai, then create an access key from Settings -> Advanced Settings.
Git Access Token
A personal access token from your chosen Git provider is required. You'll use this token to allow AI Architect to read and index your repositories.
GitHub Personal Access Token (Classic): To use GitHub repositories with AI Architect, ensure you have a CLASSIC personal access token with repo access. We do not support fine-grained tokens currently.
GitLab Personal Access Token: To use GitLab repositories with AI Architect, a token with API access is required.
Bitbucket Access Token: To use Bitbucket repositories with AI Architect, you need an API Token or an HTTP Access Token, depending on your Bitbucket setup.
Bitbucket Cloud (API Token): You must provide both your token and email address.
Bitbucket Self-Hosted (HTTP Access Token): You must provide both your token and username.
LLM API keys
Bito's AI Architect uses Large Language Models (LLMs) to build a knowledge graph of your codebase.
We suggest you provide API keys for both Anthropic and Grok LLMs, as that combination gives the best coverage at the lowest indexing cost.
Bito will use Claude Haiku and Grok Code Fast together to index your codebase. This costs approximately USD $0.20 to $0.40 per MB of indexable code (we do not index binaries, TARs, ZIPs, images, etc.). If you provide only an Anthropic key without Grok, your indexing costs will be significantly higher: approximately USD $1.00 to $1.50 per MB of indexable code.
b. System requirements
AI Architect supports the following operating systems:
macOS
Unix-based systems
Windows (via WSL2)
Docker Desktop / Docker Service (required)
Docker Compose is required to run AI Architect.
The easiest and recommended way to get Docker Compose is to install Docker Desktop, which includes Docker Compose along with its prerequisites, Docker Engine and the Docker CLI.
Configuration for Windows (WSL2):
If you're using Windows with WSL2, you need to enable Docker integration with your WSL distribution:
Open Docker Desktop
Go to Settings > Resources > WSL Integration
Enable integration for your WSL distribution (e.g., Ubuntu)
Click Apply
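Once integration is enabled, you can run a quick sanity check from inside your WSL distribution to confirm that Docker and Compose are reachable (the same commands work on macOS and Linux; version output will vary):

```shell
# Run inside your WSL distribution (e.g., Ubuntu) after enabling integration.
docker --version          # Docker Engine / CLI version
docker compose version    # Docker Compose v2 plugin version
docker info > /dev/null && echo "Docker daemon is reachable"
```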
Installation guide
Download AI Architect
Download the latest AI Architect package from our GitHub repository.
Extract the downloaded AI Architect package
Open your terminal:
Linux/macOS: Use your standard terminal application
Windows (WSL2): Launch the Ubuntu application from the Start menu
Navigate to the folder where you downloaded the file. You can either work directly in your Downloads folder or move the file to any preferred location first, then navigate there in the terminal.
Linux/macOS:
cd /path/to/your/folder
Windows (WSL2):
cd /mnt/c/Users/YourUsername/path/to/folder
Create a directory for AI Architect and extract the downloaded package into it:
Navigate to the extracted folder:
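The steps above look roughly like the following; the archive and directory names below are placeholders, so substitute the actual file name of the package you downloaded:

```shell
# Archive and directory names are placeholders; use your actual download.
mkdir ai-architect
tar -xzf bito-ai-architect.tar.gz -C ai-architect
cd ai-architect
```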
Run setup
The setup script will guide you through configuring AI Architect with your Git provider and LLM credentials. The process is interactive and will prompt you for the necessary information step by step.
To begin setup, run:
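The setup script's exact name can differ between releases; as a sketch (treat the script name as an assumption and check the README inside your extracted package):

```shell
# Script name is an assumption; run the setup script shipped in your package.
./setup.sh
```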
You'll need to provide the following details when prompted:
Bito API Key (required) - Enter your Bito Access Key and press Enter.
Git provider (required):
You'll be prompted to choose your Git provider:
GitLab
GitHub
Bitbucket
Enter the number corresponding to your Git provider and press Enter.
Is your Git provider self-hosted or cloud-based?
Type y for enterprise/self-hosted instances (like https://github.company.com) and enter your custom domain URL.
Type n for standard cloud providers (github.com, gitlab.com, bitbucket.org).
Press Enter to continue.
Git Access Token (required) - Enter the personal access token for your Git provider and press Enter.
Configure LLM API keys (required) - Choose which AI model provider(s) to configure:
Anthropic
Grok
OpenAI
Enter the number corresponding to your AI model provider, then provide your API key when prompted.
We suggest you provide API keys for both Anthropic and Grok LLMs, as that combination gives the best coverage at the lowest indexing cost.
After adding a provider, you'll be asked: "Do you want to configure another provider?"
Type y to add additional providers (recommended for better coverage and fallback options).
Type n when you're done adding LLM providers.
Press Enter to continue.
Generate a secure MCP access token? - You'll be asked if you want Bito to create a secure token to prevent unauthorized access to your MCP server:
Type y to generate a secure access token (recommended).
Type n to skip token generation.
Press Enter to continue.
Add repositories
Once your Git account is connected successfully, Bito automatically detects your repositories and populates the .bitoarch-config.yaml file with an initial list. Review this file to confirm which repositories you want to index; feel free to remove any that should be excluded or add others as needed. Once the list looks correct, save the file and continue with the steps below.
Below is an example of how the .bitoarch-config.yaml file is structured:
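The exact schema can vary between versions; a representative sketch (repository names, URLs, and field names below are illustrative placeholders):

```yaml
# Illustrative only; field names may differ in your AI Architect version.
git_provider: github
repositories:
  - name: payments-service
    url: https://github.com/your-org/payments-service
    branch: main
  - name: web-frontend
    url: https://github.com/your-org/web-frontend
    branch: main
```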
After updating the .bitoarch-config.yaml file, you have two options to proceed with adding your repositories for indexing:
Auto Configure (recommended)
Automatically saves the repositories and starts indexing
If needed, edit the repo list before selecting this option
Manual Setup
You manually update the configuration file and then start the indexing yourself. Complete details of the manual process are provided below.
Once you select an option, your Bito MCP URL and Bito MCP Access Token will be displayed. Make sure to store them in a safe place; you'll need them later when configuring the MCP server in your AI coding agent (e.g., Claude Code, Cursor, Windsurf, GitHub Copilot (VS Code), etc.).
To manually apply the configuration, run this command:
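The exact CLI syntax is release-dependent; as a sketch (the command name below is an assumption, so consult the Available commands reference for the exact form):

```shell
# Command name is an assumption; see "Available commands" for exact syntax.
./bitoarch apply-config
```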
Start indexing
Once your repositories are configured, AI Architect needs to analyze and index them to build the knowledge graph. This process scans your codebase structure, dependencies, and relationships to enable context-aware AI assistance.
Start the indexing process by running:
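As with the configuration step, the exact subcommand may differ by release; a sketch (command name assumed, see the Available commands reference):

```shell
# Subcommand is an assumption; see "Available commands" for exact syntax.
./bitoarch index
```

Indexing time depends on the total size of your repositories; large codebases can take a while on the first run.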
Once the indexing is complete, you can configure AI Architect MCP server in any coding or chat agent that supports MCP.
Update repository list and re-index
Edit .bitoarch-config.yaml file to add/remove repositories.
To apply the changes, run this command:
Start the re-indexing process using this command:
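Taken together, the edit/apply/re-index cycle looks roughly like this (the subcommand names are assumptions, so check the Available commands reference for the exact syntax):

```shell
# Subcommand names are assumptions; see "Available commands" for exact syntax.
vi .bitoarch-config.yaml      # add or remove repositories
./bitoarch apply-config       # apply the updated repository list
./bitoarch index              # re-index with the new list
```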
Setting up AI Architect MCP in coding agents
Configure the MCP server in supported AI coding tools such as Claude Code, Cursor, Windsurf, GitHub Copilot, and more.
Select your AI coding tool from the options below and follow the step-by-step installation guide to seamlessly set up AI Architect.
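Most MCP-capable agents accept a JSON entry along these lines (the file location and exact field names vary by tool, and the URL and token below are placeholders for the values displayed at the end of setup):

```json
{
  "mcpServers": {
    "bito-ai-architect": {
      "url": "https://your-server.example.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-bito-mcp-access-token>"
      }
    }
  }
}
```

Check your specific tool's MCP documentation for where this configuration file lives and whether it expects a URL-based (remote) or command-based (local) server entry.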
Configuring AI Architect for Bito AI Code Review Agent
Now that you have AI Architect set up, you can take your code quality to the next level by integrating it with Bito's AI Code Review Agent. This powerful combination delivers significantly more accurate and context-aware code reviews by leveraging the deep codebase knowledge graph that AI Architect has built.
Why integrate AI Architect with AI Code Review Agent?
When the AI Code Review Agent has access to AI Architect's knowledge graph, it gains a comprehensive understanding of your entire codebase architecture, including microservices, modules, APIs, dependencies, and design patterns.
This enables the AI Code Review Agent to:
Provide system-aware code reviews - Understand how changes in one service or module impact other parts of your system
Catch architectural inconsistencies - Identify when new code doesn't align with your established patterns and conventions
Detect cross-repository issues - Spot problems that span multiple repositories or services
Deliver more accurate suggestions - Generate fixes that are grounded in your actual codebase structure and usage patterns
Reduce false positives - Better understand context to avoid flagging valid code as problematic
Getting started with AI Architect-powered code reviews
Log in to Bito Cloud
Open the AI Architect Settings dashboard.
In the Server URL field, enter your Bito MCP URL
In the Auth token field, enter your Bito MCP Access Token
Need help getting started? Contact our team at [email protected] to request a trial. We'll help you configure the integration and get your team up and running quickly.
Upgrading AI Architect
Upgrade your AI Architect installation to the latest version while preserving your data and configuration. The upgrade process:
Automatically detects your current version
Downloads and extracts the new version
Migrates your configuration and data
Seamlessly transitions to the new version
Preserves all indexed repositories and settings
Upgrade instructions
Option 1: Upgrade from within your installation (Recommended)
If you're running version 1.1.0 or higher, navigate to your current installation directory and run:
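The upgrade script's name may vary by release; as a sketch (script name assumed, check the README in your installation):

```shell
# Run from inside your current installation directory (version 1.1.0+).
# Script name is an assumption; use the upgrade script shipped with AI Architect.
./upgrade.sh
```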
Option 2: Upgrade from external location
If you need to run the upgrade from outside your installation directory (useful for version 1.0.0), use the --old-path parameter:
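The --old-path parameter points the upgrade at your existing installation; as a sketch (the script name is an assumption, but --old-path is the parameter described above):

```shell
# Run from the new package's directory; --old-path points at the 1.0.0 install.
# Script name is an assumption; the --old-path parameter is as documented here.
./upgrade.sh --old-path /path/to/your/current/installation
```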
Upgrade parameters
The upgrade script supports the following parameters:
Available commands
For complete reference of AI Architect CLI commands, refer to Available commands.