# Key features

AI Architect is a full engineering workflow platform built around a knowledge graph of your codebase, business context, tribal knowledge, docs, tickets, and observability data. It operates across three solution areas — **technical design & scoping**, **grounded coding**, and **code review** — surfacing system-level intelligence at every stage of development.

This page covers the core features that power those workflows.

## Knowledge graph

The knowledge graph is the context engine at the core of AI Architect. It ingests your codebase, Git history, issue tracker data (Jira, Linear), documentation (Confluence), observability signals, and Slack conversations into a unified, continuously updated graph.

Rather than treating your codebase as searchable text, the knowledge graph models the relationships between services, APIs, dependencies, past decisions, and recurring incident patterns — capturing how your system actually fits together.

This shared context powers every capability in AI Architect. When an agent runs a feasibility analysis, generates a technical design, or reviews a pull request, it draws from the same knowledge graph — so output is grounded in your real architecture, not generic patterns.

The knowledge graph also enables cross-repo reasoning: it understands blast radius, tracks instability histories, and connects a Jira incident from six months ago to the service a developer is about to change today.

<a href="knowledge-graph" class="button primary">Learn more</a>

## Supported integrations

AI Architect connects to the tools your team already uses across planning, coding, review, and communication.

* [**Jira**](https://docs.bito.ai/ai-architect/integrating-ai-architect-with-your-tools/integrating-with-jira) is the primary surface for technical design and scoping. AI Architect listens for new or updated Epics and Stories, posts implementation plans as ticket comments, and can be triggered on demand via `@bito` in any comment. Plans include feasibility assessments, story breakdowns, effort estimates, and risk flags — all grounded in your knowledge graph.
* [**Slack**](https://docs.bito.ai/ai-architect/integrating-ai-architect-with-your-tools/integrating-with-slack) brings AI Architect into team discussions. Mention `@Bito` in any channel thread and the assistant reads the full conversation context, resolves referenced Jira tickets and Confluence pages, and responds with context-aware answers, task breakdowns, or implementation plans — directly in the thread.
* [**Coding agents**](https://docs.bito.ai/ai-architect/integrating-ai-architect-with-your-tools/integrating-with-coding-agents) connect to AI Architect via MCP (Model Context Protocol). A one-command installer automatically configures all supported tools detected on your system. Supported agents include Claude Code, Cursor, Windsurf, GitHub Copilot (VS Code), Junie, and JetBrains AI Assistant. Once connected, Agent Skills are available inside each tool, giving developers access to the full knowledge graph while they build.
* [**Chat agents**](https://docs.bito.ai/ai-architect/integrating-ai-architect-with-your-tools/integrating-with-chat-agents) — including Claude.ai (Web & Desktop) and ChatGPT (Web & Desktop) — can also be connected to AI Architect via MCP for codebase-aware conversational assistance outside of a dedicated coding environment.
* [**Bito's AI Code Review Agent**](https://docs.bito.ai/ai-architect/integrating-ai-architect-with-your-tools/integrating-with-bitos-ai-code-review-agent) integrates with AI Architect to bring knowledge graph context into every pull request review across GitHub, GitLab, and Bitbucket. Reviews go beyond the diff — they include cross-repo impact analysis, architectural consistency checks, and blast radius detection, catching issues before they reach production.
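
For orientation, MCP clients such as Claude Code and Cursor typically register servers through a JSON configuration file. The entry below is an illustrative sketch only — the server name, package name, and environment variable are assumptions, not Bito's documented values. The one-command installer mentioned above is the supported way to configure these tools; don't hand-edit config based on this sketch.

```json
{
  "mcpServers": {
    "ai-architect": {
      "command": "npx",
      "args": ["-y", "example-ai-architect-mcp"],
      "env": { "BITO_API_KEY": "<your-key>" }
    }
  }
}
```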

## Auto triage

When auto-analysis is enabled for Jira, AI Architect evaluates every new or updated Epic and Story before generating an implementation plan. It reads the full ticket — title, description, comments, attachments, and any linked Confluence pages — and assigns a complexity score from 1 to 10. Implementation plans are only generated for tickets that meet or exceed the complexity threshold (default: 7). Tickets below the threshold receive a short note explaining why a plan was skipped, along with a prompt to request one manually if needed.
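
The threshold decision described above can be modeled as a simple check. This is an illustrative sketch only — the actual scoring is performed by AI Architect's analysis, and the function name and shape here are assumptions, not Bito's implementation:

```python
DEFAULT_THRESHOLD = 7  # default complexity threshold described above

def should_generate_plan(complexity_score: int,
                         threshold: int = DEFAULT_THRESHOLD) -> bool:
    """Return True when a ticket's complexity score meets or exceeds the threshold."""
    if not 1 <= complexity_score <= 10:
        raise ValueError("complexity score is on a 1-10 scale")
    return complexity_score >= threshold
```

Under this model, a ticket scored 7 gets an implementation plan by default, while a ticket scored 4 gets the short skip note instead.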

This keeps your ticket history clean and ensures that AI Architect's output is directed at the work that actually benefits from it — complex, ambiguous, or cross-cutting changes — rather than straightforward tasks that engineers can act on immediately.

<a href="../integrating-ai-architect-with-your-tools/integrating-with-jira#auto-triage" class="button primary">Learn more</a>

## Flexible trigger

AI Architect can be invoked in two ways: automatically when a ticket is created or updated, or on demand by any team member. On-demand triggering is done by commenting `@bito`, `/bito`, or `#bito` directly on the Jira ticket, or by adding a `bito`, `bito-analyse`, or `bito-analyze` label.

The same comment syntax also works for follow-up requests — you can ask AI Architect to revise its analysis, focus on a specific aspect, or regenerate the plan after requirements change.

This gives teams full control over when AI Architect runs. You can start with on-demand mode to evaluate output quality on specific tickets, then graduate to automatic analysis once you're confident in the results.

<a href="../integrating-ai-architect-with-your-tools/integrating-with-jira#how-to-trigger-ai-architect-in-jira" class="button primary">Learn more</a>

## Iterative refinement

Requirements rarely stay fixed. AI Architect supports iterative refinement by generating new versions of any artifact — technical designs, scope breakdowns, feasibility assessments — as the ticket evolves. As engineers update descriptions, add comments, or link new context, they can re-trigger AI Architect to produce a revised plan that reflects the current state of the work.

This means technical planning stays in sync with changing requirements without manual rework. Each iteration builds on the ticket's full history, so AI Architect understands what has changed and why.

<a href="../integrating-ai-architect-with-your-tools/integrating-with-jira#how-to-trigger-ai-architect-in-jira" class="button primary">Learn more</a>

## Custom templates

By default, AI Architect formats its output using Bito's standard structure. Custom templates let you override that with your team's own format — whether that's a specific TDD layout, an internal RFC structure, or a planning format tied to your sprint process. Output then matches what your engineers expect to see and what your workflow tooling can consume.

Custom templates are configured per workspace. To get started, contact the Bito team at <support@bito.ai>.

## Custom prompts

AI Architect supports free-form prompting directly inside the Jira ticket, coding agents, or Slack. Engineers can mention Bito followed by a natural-language question or instruction, such as:

#### In Jira:

* `@bito analyze technical feasibility`
* `@bito focus on database migration risk`
* `@bito write this as a spike document`
* `@bito break this into frontend and backend workstreams`

#### In Slack:

* `@Bito PROJ-456 is causing errors in production — here are the logs. Can you identify the root cause?`
* `@Bito help us break this feature into smaller tasks`
* `@Bito explain the difference between these two approaches`
* `@Bito what are the action items from this thread?`
* `@Bito review the code changes in this thread and suggest improvements`

#### In coding agents:

Open a chat or conversation in your AI tool and try a test query to confirm AI Architect is working:

* `"What repositories are available in my organization?"`
* `"Show me all Python repositories"`
* `"List the available tools"`
* `"What are the dependencies of [repo-name]?"`
* `"Find all microservices using Redis"`
* `"Show me repository clusters in our organization"`
* `"Plan a new feature for [component]"`
  * *(uses **bito-feature-plan** skill)*
* `"Write a PRD for [feature]"`
  * *(uses **bito-prd** skill)*
* `"Help me triage this production issue"`
  * *(uses **bito-production-triage** skill)*

Custom prompts work alongside the knowledge graph context AI Architect already has. You're directing the output, not replacing the grounding.

## Web research

AI Architect can pull external technical context alongside your internal knowledge graph when generating plans. When web research is enabled, it incorporates relevant industry patterns, library documentation, and external best practices into its analysis — useful for greenfield work, third-party integrations, or areas where your codebase doesn't yet have established patterns.

Web research is applied selectively. It supplements internal context rather than replacing it, so the output remains grounded in your actual system.

## Slack agent

The Bito AI Assistant brings AI Architect directly into Slack. You can ask system-level questions, trigger implementation plans, iterate on technical designs, and triage production issues — all without leaving the channel where the discussion is already happening. Mention `@Bito` in any thread, and the assistant reads the full conversation context, including any referenced Jira tickets or Confluence pages, before responding.

Common use cases include summarizing long planning threads, generating task breakdowns from a discussion, and pulling context from a specific ticket mid-conversation. The Slack agent is available in both public and private channels and supports file attachments including code files, configs, and logs.

<a href="integrating-ai-architect-with-your-tools/integrating-with-slack" class="button primary">Learn more</a>

## Agent skills

Agent Skills are structured instruction files that define how AI Architect approaches specific engineering tasks. Each skill is purpose-built for a different type of work: feasibility analysis, epic planning, spike investigations, production triage, PRD or TRD generation, pre-commit reviews, and more. Skills can be triggered in Jira and Slack by commenting with natural language (`@bito is this feasible?`, `@bito turn this epic into tasks`), or invoked directly inside your coding agent via MCP.

Skills have full access to your knowledge graph — codebase, Jira history, Confluence docs, and observability data — so their output is always grounded in how your system actually works, not a generic template. In coding agents, skills are installed automatically and discoverable via `/` in the chat interface.

<a href="agent-skills" class="button primary">Learn more</a>

## Detailed agent specs

After a technical plan is approved, AI Architect can transform it into self-contained workstream agent specs — structured documents that give a coding agent everything it needs to implement a single workstream without further clarification. Each spec includes file paths, relevant patterns from your codebase, verification gates, and a dependency contract describing what other workstreams it relies on or produces.

Agent specs are designed to be passed directly into Cursor, Claude Code, Codex, or any MCP-compatible coding agent. They eliminate the back-and-forth between planning and implementation by encoding architectural decisions into the spec itself.
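
As a purely hypothetical illustration of the fields listed above — none of these key names, paths, or values are Bito's documented schema — a workstream spec might carry information like:

```yaml
# Illustrative workstream agent spec (field names and paths are assumptions)
workstream: payment-retry-logic
files:
  - services/payments/retry.py
  - services/payments/tests/test_retry.py
patterns:
  - "Follow the existing backoff helper in services/common/backoff.py"
verification_gates:
  - "Unit tests pass for services/payments"
  - "No new cross-service imports introduced"
dependency_contract:
  consumes:
    - "billing-events: emits payment.failed events"
  produces:
    - "payment.retried event for downstream consumers"
```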

<a href="agent-skills" class="button primary">Learn more</a>

## Automated implementation

Once a workstream plan is approved, AI Architect can execute it directly. It creates a new branch from your default branch, implements the code changes across the relevant repositories, and follows the patterns and conventions already present in your codebase. Implementation is guided by the agent spec, with verification gates checked at each step.

This capability is available via the `bito-agent-spec-executor` skill in your connected coding agent. It is designed for well-scoped workstreams with clear acceptance criteria and works best when paired with the detailed agent spec output.

<a href="agent-skills" class="button primary">Learn more</a>

## Automated pull request creation

After implementation is complete, AI Architect opens a pull request per workstream, linked back to the originating Jira ticket. PRs include a summary of the changes made, the workstream spec they were generated from, and any verification results from the implementation run. This closes the loop between planning and review, giving engineers and reviewers immediate context on what was built and why.

Automated PRs work across GitHub, GitLab, and Bitbucket. They follow your existing branch naming and PR conventions.

<a href="agent-skills" class="button primary">Learn more</a>

## Single Sign-On (SSO)

SSO integration lets your team authenticate with AI Architect through your organization's existing identity provider instead of managing shared access tokens. When someone joins or leaves, access is granted or revoked through the same system that controls their email, Slack, and other business tools — no separate credential rotation required.

AI Architect's SSO implementation supports a broad range of identity providers including Google Workspace, Okta, Azure Entra ID, Microsoft AD FS, Auth0, Keycloak, and others, as well as any custom SAML 2.0 or OIDC configuration.

For self-hosted deployments, SSO runs entirely on-premises — no authentication traffic leaves your environment except for identity provider federation (if Enterprise IdP is configured) and Bito API calls for SSO configurations.

Session duration, refresh token TTL, and concurrent session limits are all configurable per workspace.
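
As a hypothetical sketch only — these key names and values are assumptions, not Bito's documented configuration — workspace session settings of this kind typically look like:

```yaml
# Hypothetical per-workspace session settings (names are assumptions)
session:
  duration: 8h              # how long an SSO session stays valid
  refresh_token_ttl: 30d    # how long refresh tokens can renew a session
  max_concurrent_sessions: 3
```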

<a href="single-sign-on-sso-integration" class="button primary">Learn more</a><a href="../installation/install-ai-architect-self-hosted#sso-authentication" class="button primary">SSO for self-hosted deployments</a>
