Introduction
One objective in. Reviewed, tested, merged code out.
Tack is an open-source orchestrator that runs AI coding agents at scale. You describe what you want built. Tack decomposes it into parallel work streams, assigns each to an isolated agent, enforces quality gates, reviews the output, merges everything, and opens a PR.
You go from being the developer to being the team lead.
$ tack plan "Add pagination to all list endpoints and a PATCH /expenses/:id endpoint"
Created objective fb68742b. Planner agent will decompose.
$ tack approve fb68742b
Plan approved. Execution will begin.
Stream 1: PATCH expenses endpoint ██████████ merged
Stream 2: Pagination for list endpoints ██████████ merged
Objective completed. PR created: github.com/you/project/pull/42
Why Tack
The harness matters more than the model. LangChain improved from 52.8% to 66.5% on Terminal Bench 2.0 by modifying only the harness, not the model. Tack is that harness — open source and configurable.
Everything in Tack that isn't an LLM decision is deterministic infrastructure: the blueprint engine, quality gates, merge queue, scoped rules, file scope enforcement, timeout management. The LLM is the horse. Tack is the equipment.
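That split can be illustrated with a minimal Python sketch. This is not Tack's actual internals; the `run_gate` and `enforce_gates` helpers and the gate names are invented for illustration. The point is that each gate is a plain subprocess whose pass/fail outcome is deterministic for a given worktree state, while only the code the agent wrote varies:

```python
import subprocess
import sys
from dataclasses import dataclass

# Hypothetical sketch of deterministic quality gates wrapped around
# non-deterministic agent output; not Tack's real implementation.

@dataclass
class GateResult:
    name: str
    passed: bool

def run_gate(name: str, cmd: list[str]) -> GateResult:
    # A gate is just a command; exit code 0 means the gate passed.
    proc = subprocess.run(cmd, capture_output=True)
    return GateResult(name, proc.returncode == 0)

def enforce_gates(gates: dict[str, list[str]]) -> list[GateResult]:
    # Run every gate; a stream is only eligible for merging
    # when all of its gates pass.
    return [run_gate(name, cmd) for name, cmd in gates.items()]

# Trivially passing/failing commands stand in for real gates
# such as a test suite or a linter.
results = enforce_gates({
    "ok_gate": [sys.executable, "-c", "pass"],
    "failing_gate": [sys.executable, "-c", "raise SystemExit(1)"],
})
mergeable = all(r.passed for r in results)
```

In a real setup the commands would be the project's own test and lint invocations, and the gate list would live in configuration rather than code.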
What Tack Is Not
- Not an agent framework. Pi and Claude Code do the thinking. Tack orchestrates them.
- Not a sandbox provider. Local worktrees, Daytona, Docker provide isolation. Tack manages their lifecycle.
- Not an IDE. Agents edit code. You review their work.
- Not a CI system. Tack runs quality gates locally in sandboxes. CI is your existing pipeline.
- Not locked to any model. Bring your own runtime, provider, and model.