Frequently Asked Questions

Everything you need to know about Attune AI

General

What is Attune AI?

Attune AI is a production-ready framework for AI-powered developer workflows with cost optimization and multi-agent orchestration. It includes 13 agent templates, dynamic team composition, persistent agent state, 10+ integrated workflows, and a tier-based LLM routing system that saves 34-86% on API costs.

What is multi-agent orchestration?

Multi-agent orchestration lets you compose teams of specialized AI agents that collaborate on complex tasks. Attune AI supports parallel, sequential, two-phase, and delegation strategies with quality gates to ensure results meet your standards. The MetaOrchestrator analyzes tasks and automatically selects optimal agent teams.

What is Agent State Persistence?

Agent State Persistence stores execution history, checkpoints, and accumulated metrics for each agent across sessions. This enables recovery from interruptions, performance tracking over time, and pattern learning. State is stored locally in JSON files under .attune/agents/state/.
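The real on-disk schema and class names are internal to Attune AI, but the pattern of per-agent JSON state files with checkpoints can be sketched roughly like this (the `AgentStateStore` class, field names, and layout below are illustrative assumptions, not the framework's actual API):

```python
# Illustrative sketch of per-agent JSON state files, in the spirit of the
# .attune/agents/state/ directory described above. The real schema and class
# names are internal to Attune AI; everything here is a simplifying assumption.
import json
import tempfile
from pathlib import Path

class AgentStateStore:
    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, agent_id: str) -> Path:
        return self.root / f"{agent_id}.json"

    def load(self, agent_id: str) -> dict:
        path = self._path(agent_id)
        if not path.exists():
            # Fresh agent: empty history, checkpoints, and metrics.
            return {"history": [], "checkpoints": [], "metrics": {}}
        return json.loads(path.read_text())

    def checkpoint(self, agent_id: str, data: dict) -> None:
        # Append a checkpoint and write the whole state back to disk.
        state = self.load(agent_id)
        state["checkpoints"].append(data)
        self._path(agent_id).write_text(json.dumps(state, indent=2))

# State survives across store instances, mimicking cross-session recovery.
root = Path(tempfile.mkdtemp()) / "agents" / "state"
store = AgentStateStore(root)
store.checkpoint("security-auditor", {"step": "scan", "files_done": 42})
restored = AgentStateStore(root).load("security-auditor")
print(restored["checkpoints"][0]["step"])  # scan
```

Because state lives in plain JSON, a new process (or a resumed session) can reload it and pick up where the agent left off.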

What is Dynamic Team Composition?

Dynamic Team Composition allows you to build agent teams at runtime from 13 pre-built templates or custom configurations. Teams can execute with different strategies (parallel, sequential, two-phase, delegation) and enforce quality gates. You can also compose entire workflows into teams using the WorkflowComposer.
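To make the strategy distinction concrete, here is a toy sketch of parallel versus sequential team execution with a quality gate. The agents, functions, and gate below are illustrative stand-ins, not Attune AI's actual interfaces:

```python
# Concept sketch of parallel vs. sequential team execution strategies.
# The agent function, team format, and quality gate are invented for
# illustration; they are not Attune AI's real API.
import asyncio

async def agent(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for real LLM work
    return f"{name}: ok"

async def run_parallel(agents):
    # All agents start at once; total time ~= the slowest agent.
    return await asyncio.gather(*(agent(n, d) for n, d in agents))

async def run_sequential(agents):
    # Agents run one after another; in a real team each could see prior results.
    results = []
    for n, d in agents:
        results.append(await agent(n, d))
    return results

def quality_gate(results) -> bool:
    # Toy gate: every agent must report success before results are accepted.
    return all(r.endswith("ok") for r in results)

team = [("code-reviewer", 0.01), ("test-coverage-analyzer", 0.01)]
parallel = asyncio.run(run_parallel(team))
assert quality_gate(parallel)
print(parallel)
```

Two-phase and delegation strategies layer on the same idea: a first phase (or a lead agent) decides what work exists, and subsequent agents execute it.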

Licensing & Pricing

How much does Attune AI cost?

Attune AI is completely free and open source under the Apache License 2.0. Use it in personal projects, startups, or large enterprises at no cost. No license keys, no restrictions, no hidden fees.

What is the Apache 2.0 License?

Apache 2.0 is a permissive open source license approved by the OSI. It allows you to use, modify, distribute, and sell products built with Attune AI. It includes patent protection and is approved by most enterprise legal teams.

Can I use Attune AI for commercial projects?

Yes! Apache 2.0 explicitly permits commercial use. Build and sell products using Attune AI without any licensing fees or restrictions.

Technical

Which LLM providers are supported?

Attune AI is built exclusively for Anthropic Claude with a Claude-native architecture. This enables automatic prompt caching (up to 90% cost savings on cached tokens), extended thinking, and optimized tool use. Agent and team creation features such as dynamic composition integrate with the Anthropic Agent SDK for enhanced capabilities.

What are agent templates?

Agent templates are pre-built agent archetypes with defined roles, capabilities, tier preferences, and quality gates. Attune AI includes 13 templates: security auditor, code reviewer, test coverage analyzer, documentation writer, performance optimizer, architecture analyst, refactoring specialist, dependency checker, bug predictor, release coordinator, integration tester, API designer, and DevOps engineer.

How is the framework tested?

Attune AI ships with a suite of 14,800+ tests spanning unit, integration, and cross-platform compatibility testing. Core modules are rigorously tested, including security, agent state persistence, dynamic team orchestration, workflow coordination, and meta-workflow functionality.

How does prompt caching save money?

Attune AI leverages Anthropic's automatic prompt caching out of the box — no configuration required. Cached input tokens cost just 10% of the standard price, delivering up to 90% cost savings and up to 85% faster responses. The framework's Claude-native architecture is designed around caching: system prompts, tool definitions, and conversation history are automatically cached and reused across requests.
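The arithmetic behind the "up to 90%" figure is straightforward: cached input tokens bill at 10% of the standard input price, so savings scale with the fraction of each request that hits the cache. The price used below is illustrative, not a quote:

```python
# Back-of-envelope illustration of cached-token pricing: cache reads cost
# 10% of the normal input price, so a fully cached prefix saves 90% on
# those tokens. The per-MTok price here is illustrative only.
def input_cost(tokens: int, price_per_mtok: float, cached_fraction: float) -> float:
    cached = tokens * cached_fraction
    fresh = tokens - cached
    # Fresh tokens at full price, cached tokens at 10% of it.
    return (fresh * price_per_mtok + cached * price_per_mtok * 0.10) / 1_000_000

price = 3.00  # $/MTok, an assumed input price for the example
uncached = input_cost(100_000, price, cached_fraction=0.0)
mostly_cached = input_cost(100_000, price, cached_fraction=0.9)
savings = 1 - mostly_cached / uncached
print(f"uncached ${uncached:.2f} vs cached ${mostly_cached:.3f} -> {savings:.0%} saved")
```

With 90% of a 100K-token request cached, the input cost drops by about 81%, which is why long reused system prompts and tool definitions are the biggest wins.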

What is semantic caching?

On top of Anthropic's API-level caching, Attune AI includes an optional HybridCache powered by sentence-transformers. It detects semantically similar prompts (95%+ cosine similarity) and reuses cached responses, avoiding redundant API calls entirely. Hash-only caching achieves ~30% hit rate; with semantic matching enabled, measured hit rates reach ~57% on workflows like security audits. Install with pip install attune-ai[developer] to enable it automatically.

What platforms are supported?

The framework is cross-platform and runs on macOS, Linux, and Windows. It requires Python 3.10+ and works with all major development environments including VS Code, JetBrains IDEs, and terminal-based workflows.

Wizards

What are wizards?

Wizards are guided, multi-step AI workflows that walk you through complex tasks like debugging, security audits, refactoring, and test generation. Each wizard collects context via questions, runs AI analysis, decomposes work into tasks, and previews results before acting. Attune AI ships with 10 built-in wizards: security-audit, code-review, bug-predict, perf-audit, refactor-plan, test-gen, doc-gen, dependency-check, release-prep, and research.

How do I run a wizard?

From Claude Code, type /wizard run debug (or any wizard ID). From Python: from attune.wizards import get_wizard; wizard = get_wizard("debug")(); result = await wizard.run(). See the Getting Started guide for a full walkthrough.

Can I create custom wizards?

Yes! Two approaches: (1) YAML-based — create a .attune/wizards/my-wizard.yaml file with step definitions, no Python required. (2) Python-based — subclass BaseWizard for advanced logic like workflow delegation, conditional steps, and custom result processing. See the Custom Wizard Development guide.
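The step-driven pattern both approaches share can be sketched in a few lines. BaseWizard's real interface lives in Attune AI; the class, step format, and return shape below are invented purely for illustration:

```python
# Toy sketch of the step-driven wizard pattern: each step asks a question,
# answers accumulate, and the wizard returns a result. The class and step
# format are illustrative only, not Attune AI's BaseWizard interface.
class MiniWizard:
    steps = [
        {"id": "collect", "prompt": "Which file should we debug?"},
        {"id": "analyze", "prompt": "Describe the failing behavior."},
    ]

    def __init__(self):
        self.answers: dict[str, str] = {}

    def run(self, answer_fn) -> dict:
        # A real wizard would follow this with AI analysis, task
        # decomposition, and a preview before acting.
        for step in self.steps:
            self.answers[step["id"]] = answer_fn(step["prompt"])
        return {"status": "complete", "answers": self.answers}

result = MiniWizard().run(lambda prompt: f"stub answer to: {prompt}")
print(result["status"])  # complete
```

A YAML wizard declares the equivalent of the `steps` list in a file, while a Python subclass overrides the run logic for conditional steps or workflow delegation.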

Use Cases

What can I build with Attune AI?

You can build AI-powered developer workflows for software development (bug prediction, security scanning, test generation, code review), orchestrate multi-agent teams for complex analysis tasks, and compose workflows into coordinated pipelines. The framework also supports healthcare use cases with clinical decision support.

What happened to the Fair Source License?

As of January 28, 2026, we switched from Fair Source 0.9 to Apache 2.0. We realized the licensing restrictions were limiting adoption without generating revenue. Going fully open source lets us focus on building the best framework and growing a community.

Is the framework production-ready?

Yes! The framework is v3.6.x Production/Stable with comprehensive tests, extensive documentation, persistent agent state, dynamic team composition, and is being used in production software development tools and AI workflows.

Support & Community

Where can I get help?

Get community support via GitHub Discussions. Report bugs via GitHub Issues. Enterprise users can reach out for dedicated support options.

How do I report bugs?

Report bugs via GitHub Issues. Include your environment details, steps to reproduce, and expected vs actual behavior.

Can I contribute to the project?

Yes! We welcome contributions. Check out our GitHub repository for contribution guidelines. The framework is fully open source under Apache 2.0, making it easy to fork, modify, and contribute back.

Still Have Questions?

We're here to help. Reach out to our team or join the community.