autopod
Autonomous AI agent orchestration. Containerized. Validated. Human-approved.
You describe a task. autopod spins up an isolated container, lets an AI agent work, validates the output in a real browser, and only bothers you when there's something worth reviewing. Run dozens of agents in parallel — across repos, models, and runtimes — without babysitting a single one.
$ ap run my-app "Add a dark mode toggle to the settings page" --model opus
Session a1b2c3d4 created (profile: my-app · model: claude-opus)
Provisioning container...
Agent running...
# Go grab coffee. Come back to a Teams notification with screenshots.
How It Works
autopod wraps autonomous coding agents in a control plane that provisions isolated containers, runs validation, and manages the full lifecycle from task to merged PR.
Session Lifecycle (15 states)
queued → provisioning → running → validating → validated → approved → merging → complete
Branches off the happy path:
- running → paused (ap pause) or awaiting_input (agent escalated via ask_human)
- validating → failed (retry with feedback) or review_required
- review_required → running (via ap extend-attempts or ap fix-manually)
- merging → merge_pending: a fix session is spawned on CI failure or review comments (up to maxPrFixAttempts, default 3)
- Any non-terminal state → killing → killed
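Invalid transitions are rejected by the daemon. As a rough sketch of what such a guard looks like, here is a transition table in TypeScript; the state names come from the lifecycle above, but the exact edges and the assertTransition helper are illustrative assumptions, not autopod's actual code:
type SessionState =
  | "queued" | "provisioning" | "running" | "validating" | "validated"
  | "approved" | "merging" | "merge_pending" | "complete" | "failed"
  | "review_required" | "paused" | "awaiting_input" | "killing" | "killed";
// Allowed edges, derived loosely from the diagram above (illustrative).
const TRANSITIONS: Record<SessionState, SessionState[]> = {
  queued: ["provisioning", "killing"],
  provisioning: ["running", "failed", "killing"],
  running: ["validating", "paused", "awaiting_input", "failed", "killing"],
  validating: ["validated", "failed", "review_required", "killing"],
  validated: ["approved", "killing"],
  approved: ["merging", "killing"],
  merging: ["complete", "merge_pending", "killing"],
  merge_pending: ["running", "complete", "killing"],
  failed: ["running", "killing"],          // retry with feedback
  review_required: ["running", "killing"], // extend-attempts / fix-manually
  paused: ["running", "killing"],
  awaiting_input: ["running", "killing"],
  killing: ["killed"],
  complete: [], // terminal
  killed: [],   // terminal
};
function assertTransition(from: SessionState, to: SessionState): void {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`invalid transition: ${from} -> ${to}`);
  }
}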
Getting Started
Prerequisites
- Node.js 22+
- pnpm (or use npx pnpm everywhere)
- Docker (for local execution target)
- Azure Entra ID app registration (for auth)
1. Clone & install
git clone https://github.com/esbenwiberg/autopod.git
cd autopod
npx pnpm install
npx pnpm run build
2. Configure environment
cp .env.example .env
# Required — from your Entra ID app registration
ENTRA_CLIENT_ID=<application-client-id>
ENTRA_TENANT_ID=<directory-tenant-id>
# AI provider (direct Anthropic API)
ANTHROPIC_API_KEY=sk-ant-...
# For private repos
GITHUB_PAT=ghp_...
# Optional — Teams notifications
TEAMS_WEBHOOK_URL=https://...
3. Start the daemon
# Recommended — Docker Compose with hot-reload
docker compose up -d
# Or run directly
npx pnpm --filter @autopod/daemon run dev
4. Connect the CLI
ap connect http://localhost:3100
ap login
5. Create your first profile
ap profile create my-app \
--repo owner/my-app \
--template node22-pw \
--build "npm ci && npm run build" \
--start "npm run preview -- --host 0.0.0.0 --port \$PORT" \
--health "/" \
--test "npm test" \
--model opus
6. Run your first session
ap run my-app "Add a contact form to the about page"
# Check status
ap ls
# See what the agent did
ap diff a1b2c3d4
# Approve and merge
ap approve a1b2c3d4
# Or reject with feedback
ap reject a1b2c3d4 "Needs client-side validation"
CLI Reference
Authentication
ap login # Interactive login (Entra ID PKCE)
ap login --device # Device code flow (headless/SSH)
ap logout
ap whoami
Daemon
ap connect <url> # Point CLI at a daemon
ap disconnect
ap daemon start --local
ap daemon stop
Sessions
# Create
ap run <profile> "<task>" # Start a session
ap run <profile> "<task>" --model opus # Override model
ap run <profile> "<task>" --runtime codex # Codex runtime
ap run <profile> "<task>" --runtime copilot # Copilot runtime
ap run <profile> "<task>" --branch feat/x # Custom branch
ap run <profile> "<task>" --no-validate # Skip auto-validation
ap run <profile> "<task>" --ac "criterion" # Acceptance criteria (repeatable)
ap run <profile> "<task>" --base-branch feat/plan # Branch from workspace output
ap run <profile> "<task>" --ac-from specs/ac.md # ACs from file
# Monitor
ap ls # List sessions
ap ls --status running
ap ls --json
ap status <id> # Full session details
ap logs <id> # Stream agent activity
# Interact
ap tell <id> "<message>" # Send message / resume paused session
ap pause <id>
ap nudge <id> "<message>" # Async message (agent picks up mid-task)
# Validate & Preview
ap validate <id> # Trigger validation
ap revalidate <id> # Pull latest + re-validate (no agent rework)
ap interrupt <id> # Stop in-flight validation, get partial result
ap open <id> # Spin up live preview (real browser)
ap report <id> # Open HTML validation report
ap diff <id>
# Validation overrides (dismiss recurring false-positive findings)
ap override <id> <finding-id> --dismiss
ap override <id> <finding-id> --guidance "<note>"
# review_required resolution
ap extend-attempts <id> # Grant more validation attempts
ap fix-manually <id> # Create linked workspace for human edits
# Complete
ap approve <id>
ap approve <id> --squash
ap reject <id> "<feedback>"
ap kill <id>
# Bulk
ap approve --all-validated
ap kill --all-failed
# Stats
ap stats # Aggregate counts, avg duration, total cost
Profiles
ap profile create <name>
ap profile ls
ap profile show <name>
ap profile edit <name> # Open in $EDITOR
ap profile delete <name>
ap profile warm <name> # Pre-bake deps into Docker image
ap profile auth-copilot <name> # Interactive Copilot OAuth setup
# Per-action approval overrides
ap profile action-override list <name>
ap profile action-override set <name> <action> --approval
ap profile action-override remove <name> <action>
Workspace & History
# Interactive workspace pod (no agent)
ap workspace <profile> [description]
ap workspace <profile> -b feat/my-branch
ap workspace <profile> --pim-group "Contributor on prod-rg:60m"
ap attach <id> # Shell in (auto-pushes on exit)
# History analysis workspace
ap history <profile> # Load last 30d of sessions
ap history <profile> --since 7d
ap history <profile> --failures # Only failed/review_required
ap history <profile> --limit 50
Memory
ap memory list # List approved memories
ap memory list --scope profile # Filter: global | profile | session
ap memory show <id>
ap memory approve <id>
ap memory reject <id>
ap memory delete <id>
Profile Configuration
Profiles define everything autopod needs to run, validate, and merge a session for a specific repo. They support inheritance — define a base profile and extend it per-app.
ap profile create my-app \
--repo owner/my-app \
--branch main \
--template node22-pw \
--build "npm ci && npm run build" \
--start "npm run preview -- --host 0.0.0.0 --port \$PORT" \
--test "npm test" \
--health "/" \
--health-timeout 30000 \
--model opus \
--runtime claude \
--pr-provider github \
--max-validation-attempts 3 \
--instructions "Use TypeScript. Prefer Tailwind CSS." \
--extends frontend-base
Additional YAML-only fields:
containerMemoryGb: 4
buildTimeout: 300 # seconds
testTimeout: 600
branchPrefix: "feat" # branch names become feat/<sessionId>
executionTarget: local # "local" (Docker) or "aci" (Azure Container Instances)
workerProfile: "my-app" # for workspace pods: which profile runs the follow-up agent
PR providers
prProvider: github # or "ado"
# Azure DevOps
prProvider: ado
adoPat: <your-ado-pat> # encrypted at rest
Stack Templates
| Template | Stack | Includes |
|---|---|---|
| node22 | Node.js 22 | npm / pnpm / yarn |
| node22-pw | Node.js 22 + Playwright | Chromium for browser validation |
| dotnet9 | .NET 9 SDK | dotnet CLI |
| dotnet10 | .NET 10 + Node.js 22 | Mixed stack |
| python312 | Python 3.12 | pip / poetry |
| custom | Bring your own | Custom Dockerfile |
Smoke Pages & Acceptance Criteria
smokePages:
- path: "/"
assertions:
- selector: ".dark-mode-toggle"
type: exists
- selector: "h1"
type: text_contains
value: "Welcome"
- path: "/about"
assertions:
- selector: ".contact-form"
type: visible # exists | visible | text_contains | count
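For intuition, here is how those assertion types could map onto Playwright checks. This is a sketch under assumptions: the Assertion type mirrors the YAML fields above, and the check function is hypothetical, not autopod's actual validator.
import type { Page } from "playwright";
type Assertion =
  | { selector: string; type: "exists" | "visible" }
  | { selector: string; type: "text_contains"; value: string }
  | { selector: string; type: "count"; value: number };
// Hypothetical evaluator: one smoke-page assertion, one boolean verdict.
async function check(page: Page, a: Assertion): Promise<boolean> {
  const loc = page.locator(a.selector);
  switch (a.type) {
    case "exists":
      return (await loc.count()) > 0;
    case "visible":
      return loc.first().isVisible();
    case "text_contains":
      return ((await loc.first().textContent()) ?? "").includes(a.value);
    case "count":
      return (await loc.count()) === a.value;
  }
}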
ap run my-app "Add dark mode" \
--ac "Settings page has a dark mode toggle" \
--ac "Toggle persists after page refresh" \
--ac-from specs/acceptance.md # one criterion per line
Network Policy
networkPolicy:
enabled: true
mode: restricted # restricted | deny-all | allow-all
allowedHosts:
- "api.stripe.com"
- "*.my-company.com"
replaceDefaults: false # set true to replace built-in default allowlist
allow_package_managers: true # auto-adds npm, pypi, crates.io, nuget, golang, rubygems, debian…
Built-in default allowlist (replaced when replaceDefaults: true): api.anthropic.com, api.openai.com, registry.npmjs.org, pypi.org, github.com, pkgs.dev.azure.com, GitHub Copilot endpoints.
Escalation Settings
escalation:
askHuman: true
askAi:
enabled: true
model: sonnet # also used as the AI task reviewer model
maxCalls: 5
autoPauseAfter: 3 # auto-escalate after N consecutive failures
humanResponseTimeout: 3600000 # 1 hour
Model Providers
| Provider | Auth method | Use case |
|---|---|---|
| anthropic | API key (ANTHROPIC_API_KEY) | Default — direct Anthropic API |
| max | OAuth (access + refresh tokens) | Claude MAX/PRO consumer subscriptions |
| foundry | Managed identity + project config | Azure-hosted Foundry deployments |
| copilot | GitHub token (OAuth / fine-grained PAT) | GitHub Copilot runtime |
modelProvider: max # anthropic | max | foundry | copilot
# Foundry-specific
foundryConfig:
baseUrl: "https://your-foundry.azure.com"
project: "my-project"
Private Registries
privateRegistries:
- type: npm
url: "https://pkgs.dev.azure.com/{org}/_packaging/{feed}/npm/registry/"
scope: "@myorg" # optional scoped packages
- type: nuget
url: "https://pkgs.dev.azure.com/{org}/_packaging/{feed}/nuget/v3/index.json"
registryPat: "<ado-pat>" # encrypted at rest, injected into .npmrc / NuGet.Config
MCP Servers & Skills
mcpServers:
- name: prism
url: "https://prism.internal/mcp"
headers:
Authorization: "Bearer ${PRISM_API_KEY}"
description: "Codebase context powered by Prism."
toolHints:
- "Call get_file_context before modifying any file"
skills:
- name: security-check
description: "OWASP-aware security review"
source:
type: github
repo: myorg/claude-skills
path: security-check.md
ref: main
token: "${GITHUB_TOKEN}"
claudeMdSections:
- heading: "Coding Standards"
priority: 20
content: "Always use TypeScript strict mode. Never use any."
- heading: "Codebase Architecture"
priority: 10
maxTokens: 4000
fetch:
url: "https://prism.internal/api/context"
authorization: "Bearer token"
timeoutMs: 10000
Profile Versioning
Every profile update auto-increments profile.version. Sessions snapshot the full resolved profile at creation — you can always audit exactly which config produced a given session's output.
ap profile show my-app # Shows: version: 7, last updated: ...
ap status a1b2c3d4 # Shows: profile: my-app v7 · branch: autopod/a1b2c3d4
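A sketch of the record shape this implies (illustrative, not autopod's actual schema):
interface SessionRecord {
  id: string;               // e.g. "a1b2c3d4"
  profileName: string;      // "my-app"
  profileVersion: number;   // 7, the version at creation time
  profileSnapshot: unknown; // full resolved profile, frozen for audit
  createdAt: string;
}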
Profile Inheritance
# base.yaml
name: frontend-base
template: node22-pw
build: "npm ci && npm run build"
escalation:
askHuman: true
# my-app.yaml
name: my-app
extends: frontend-base
repo: owner/my-app
model: opus
# All frontend-base fields are inherited; arrays merge by name
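A sketch of those merge semantics, assuming child scalars win and named array entries (for example mcpServers or skills) merge on their name key; this mirrors the comment above rather than autopod's actual resolver:
type Named = { name: string } & Record<string, unknown>;
// Named array entries merge on `name`; child fields win per entry.
function mergeByName(parent: Named[] = [], child: Named[] = []): Named[] {
  const out = new Map(parent.map((e) => [e.name, e]));
  for (const e of child) out.set(e.name, { ...out.get(e.name), ...e });
  return [...out.values()];
}
// Scalars: child overrides parent. Arrays of named objects: merge by name.
function resolveProfile(parent: Record<string, any>, child: Record<string, any>) {
  const merged = { ...parent, ...child };
  for (const key of ["mcpServers", "skills"]) {
    if (Array.isArray(parent[key]) && Array.isArray(child[key])) {
      merged[key] = mergeByName(parent[key], child[key]);
    }
  }
  return merged;
}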
Session Lifecycle
Every session follows a strict 15-state machine. Invalid transitions are rejected by the daemon.
review_required
When maxValidationAttempts is exhausted, the session moves to review_required instead of hard-failing. From here:
ap extend-attempts <id> # Grant N more attempts — agent retries with all accumulated feedback
ap fix-manually <id> # Create a linked workspace pod for human edits, then re-validate
ap reject <id> "<note>" # Restart fresh from scratch
Token & Cost Tracking
Each session tracks inputTokens, outputTokens, and costUsd captured from the runtime's completion events. View with ap status <id> or ap stats for aggregate totals.
Validation Pipeline
Validation is a 7-phase pipeline. Each phase gates the next. Failed validation feeds structured feedback to the agent for retry.
| Phase | What happens | Config |
|---|---|---|
| 1. Build | Runs your build command inside the container | profile.build |
| 2. Test | Runs test suite (skipped if not configured) | profile.testCommand |
| 3. Health check | Polls URL until HTTP 200 | profile.health |
| 4. Smoke pages | Playwright visits pages, checks assertions. Runs on daemon host. | profile.smokePages |
| 5. AC validation | LLM evaluates each acceptance criterion in a real browser | session.acceptanceCriteria |
| 6. AI task review | Separate model reviews diff against original task + prior findings (tiered context) | escalation.askAi.model |
| 7. Overall | Pass only if all required phases pass | — |
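The gating behaviour amounts to a short loop. A sketch, with phase names taken from the table and everything else (the Phase shape, the runner) assumed for illustration:
type PhaseResult = { phase: string; passed: boolean; feedback?: string };
type Phase = { name: string; required: boolean; run: () => Promise<PhaseResult> };
// Each phase gates the next: the first failed required phase stops the
// pipeline, and its structured feedback is what the agent retries against.
async function runPipeline(phases: Phase[]): Promise<PhaseResult[]> {
  const results: PhaseResult[] = [];
  for (const phase of phases) {
    const result = await phase.run();
    results.push(result);
    if (!result.passed && phase.required) break;
  }
  return results;
}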
Agent Self-Validation
During development, agents call the validate_in_browser MCP tool. An LLM generates a Playwright script, it executes on the daemon host, screenshots are captured, and results feed back to the agent — all before the independent reviewer runs.
Interrupt & Per-Finding Overrides
# Stop a running validation immediately
ap interrupt <id> # Returns partial results, session returns to previous state
# Dismiss a recurring false-positive finding (persists across retries)
ap override <id> <finding-id> --dismiss
ap override <id> <finding-id> --guidance "Use our date-fns helper instead"
Overrides are stored in a PendingOverrideRepository and flushed into each validation pass before the AI reviewer runs. They don't disable checks — they annotate findings so the reviewer can make an informed decision.
Escalation System
An MCP server is injected into every agent container at provisioning time. It provides 13+ tools for structured communication between agents and humans.
| Tool | Blocking | Description |
|---|---|---|
| ask_human | ✅ Yes | Pause agent, send question to human, wait for response |
| ask_ai | ✅ Yes | Consult reviewer model (rate-limited, max N per session) |
| report_blocker | Conditional | Report a hard stop; auto-pauses after threshold |
| report_plan | ❌ No | Declare implementation plan + steps before starting |
| report_progress | ❌ No | Report phase transitions (currentPhase / totalPhases) |
| report_task_summary | ❌ No | Capture actual work vs plan — deviation tracking |
| check_messages | ❌ No | Poll for queued nudge/tell messages from operators |
| validate_in_browser | ✅ Yes | LLM generates Playwright script → runs on host → screenshots |
| trigger_revalidation | ❌ No | Workspace pods: re-run validation on linked worker |
| memory_suggest | ❌ No | Propose a memory for human approval (global/profile/session) |
| memory_list | ❌ No | List approved memories available to this session |
| memory_read | ❌ No | Get full content of a specific memory |
| memory_search | ❌ No | Keyword/semantic search across available memories |
| action tools | Optional | One tool per action definition from the profile's action policy |
How Blocking Works
Blocking tools use PendingRequests — a Promise-based map keyed by escalation ID. The agent awaits resolution; the daemon resolves it when a human responds via API, CLI, or desktop.
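A minimal sketch of that pattern (illustrative; the daemon's real PendingRequests will differ in detail, e.g. it also has to enforce humanResponseTimeout):
class PendingRequests {
  private pending = new Map<string, (answer: string) => void>();
  // Tool handler side: the agent's MCP call awaits this promise.
  wait(escalationId: string): Promise<string> {
    return new Promise((resolve) => this.pending.set(escalationId, resolve));
  }
  // Human side: resolving via API, CLI, or desktop unblocks the agent.
  resolve(escalationId: string, answer: string): boolean {
    const resolver = this.pending.get(escalationId);
    if (!resolver) return false;
    this.pending.delete(escalationId);
    resolver(answer);
    return true;
  }
}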
The validate_in_browser tool runs Playwright on the daemon host (not inside the container), so it works regardless of container network policy. Screenshots are returned as base64 PNGs embedded in the tool result.
Memory Stores
Agents lose context between sessions. Memory stores let agents accumulate institutional knowledge — team conventions, known gotchas, recurring patterns — that persists and gets injected into future sessions.
Scopes
| Scope | Available to | Use case |
|---|---|---|
| global | All sessions on this daemon | Company-wide conventions, tooling rules |
| profile | Sessions using the same profile | Repo-specific patterns, known bugs |
| session | This session only | Agent's own mid-task notes |
Workflow
# Agent suggests a memory during a session (via MCP tool)
# → memory.suggestion_created event fires
# → OverviewTab shows a suggestion card in the desktop app
# Human reviews and approves
ap memory approve <id>
# Future sessions automatically get the memory injected into CLAUDE.md
# under a "Team Knowledge" section at provisioning time
REST API
GET /memory # List memories (filter: ?scope=profile&profileName=my-app)
POST /memory # Create memory directly (skips suggest-approve flow)
PATCH /memory/:id # Approve, reject, or update content
DELETE /memory/:id
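For example, listing profile-scoped memories from a script. The endpoint and query parameters are from the table above; the bearer-token header and the AUTOPOD_TOKEN variable are assumptions for illustration:
// List memories for one profile (auth header shape is an assumption).
const res = await fetch(
  "http://localhost:3100/memory?scope=profile&profileName=my-app",
  { headers: { Authorization: `Bearer ${process.env.AUTOPOD_TOKEN}` } },
);
const memories = await res.json();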
Action Control Plane
Agents can read from external systems (GitHub, ADO, Azure) through a gated, audited pipeline. Every response is scanned for prompt injection and PII before reaching the agent.
Agent calls MCP tool (e.g. read_issue)
→ Daemon validates against action policy + resource restrictions
→ Handler executes (GitHub API / ADO REST / Azure / HTTP)
→ Response pipeline:
1. Prompt injection quarantine (7 patterns, compound scoring)
→ <0.5: pass | 0.5–0.8: wrap with warning | >0.8: block
2. PII sanitization (API keys, AWS keys, Azure connections, emails)
3. Field whitelist (only configured fields pass through)
→ Clean result returned to agent + audit record written
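The threshold routing is easiest to see in code. A sketch in which only the 0.5 and 0.8 cut-offs come from the pipeline above; the pattern set and scoring function are placeholders, not autopod's actual detector:
type Verdict = "pass" | "warn" | "block";
// Placeholder compound score: fraction of injection patterns that match.
function quarantine(text: string, patterns: RegExp[]): Verdict {
  if (patterns.length === 0) return "pass";
  const hits = patterns.filter((p) => p.test(text)).length;
  const score = hits / patterns.length;
  if (score > 0.8) return "block"; // >0.8: block the response outright
  if (score >= 0.5) return "warn"; // 0.5–0.8: wrap with a warning
  return "pass";                   // <0.5: pass through unchanged
}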
Built-in Action Groups (8 groups, 22 actions)
| Group | Actions |
|---|---|
| github-issues | read_issue, search_issues, read_issue_comments |
| github-prs | read_pr, read_pr_comments, read_pr_diff |
| github-code | read_file, search_code |
| ado-workitems | read_workitem, search_workitems |
| ado-prs | ado_read_pr, ado_read_pr_threads, ado_read_pr_changes |
| ado-code | ado_read_file, ado_search_code |
| azure-logs | query_logs, read_app_insights, read_container_logs |
| azure-pim | activate_pim_group, deactivate_pim_group, list_pim_activations |
Configuration
actionPolicy:
enabledGroups:
- github-issues
- github-prs
- ado-workitems
- azure-pim
sanitization:
preset: standard # standard | strict | relaxed
quarantine:
enabled: true
threshold: 0.5
actionOverrides:
- action: read_issue
requiresApproval: false
allowedResources:
- "owner/repo1"
Azure PIM
For workspace pods that need elevated Azure access, configure PIM groups on the profile. They are activated at session start and deactivated when the session ends. Agents can only activate groups that were pre-configured — no escalation beyond declared scope.
pimGroups:
- groupId: "00000000-0000-0000-0000-000000000000"
displayName: "Contributor on prod-rg"
duration: 60 # minutes
justification: "autopod workspace session"
ap workspace my-app --pim-group "Contributor on prod-rg:60m"
Workspace Pods
Workspace pods are interactive containers — same image, network, and credentials as agent pods, but you drive. Use them to explore, prototype, or write specs before handing off to an automated agent.
# 1. Spin up a workspace
ap workspace my-app "Plan auth rewrite" -b feat/plan-auth
# 2. Shell in and do your work
ap attach <id>
# ... edit files, write specs, prototype ...
# Exit the shell — branch auto-pushes to origin
# 3. Hand off to an agent branching from your work
ap run my-app "Implement auth rewrite per spec" \
--base-branch feat/plan-auth \
--ac-from specs/acceptance-criteria.md
workerProfile
Set workerProfile on a workspace profile to specify which profile should handle the follow-up agent session. The desktop app can then offer a one-click "Hand off to agent" button.
name: my-app-workspace
extends: my-app
outputMode: workspace
workerProfile: my-app # profile to use for the follow-up worker
History Analysis
The history workspace creates an isolated container pre-loaded with a SQLite database of past sessions — events, validation results, escalations, token costs. Use it with an AI agent to identify patterns: recurring failures, common blockers, expensive prompts.
ap history my-app --since 30d --failures
# Inside the workspace, the agent gets:
# /workspace/history.db — SQLite with all session data
# /workspace/README.md — Analysis guide with example queries
Network Isolation
Every agent container gets its own Docker bridge network with per-container iptables OUTPUT chain rules. Networks are cleaned up in the session's finally block.
| Mode | Behaviour |
|---|---|
| allow-all | Loopback + established/related only; no DROP. For trusted environments. |
| deny-all | DNS (UDP/TCP 53) only. Everything else DROPped. |
| restricted | Per-host allowlist + final DROP. Default when enabled: true. |
allow_package_managers
In restricted mode, set allow_package_managers: true to automatically expand the allowlist with 15 common package manager hosts:
npm · yarn · pypi · crates.io · pkgs.dev.azure.com (NuGet) · proxy.golang.org · rubygems.org · deb.debian.org · archive.ubuntu.com · dl.google.com · storage.googleapis.com · and more
Security Notes
- All hostnames are validated with SAFE_HOST_REGEX before iptables injection to prevent shell injection (see the sketch below)
- MCP server hosts are always allowed regardless of mode — injected automatically
- Private registry hosts are always allowed when privateRegistries is configured
- Live updates — patching networkPolicy via API immediately re-applies rules to all running containers using that profile (no restart)
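A sketch of that host validation; the exact SAFE_HOST_REGEX in autopod may differ, but this one accepts DNS names plus a leading wildcard like *.my-company.com and rejects anything shell-meaningful:
// Illustrative guard run before any hostname reaches an iptables command.
const SAFE_HOST_REGEX =
  /^(\*\.)?[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?)*$/;
function assertSafeHost(host: string): void {
  if (!SAFE_HOST_REGEX.test(host)) {
    throw new Error(`refusing unsafe allowlist host: ${host}`);
  }
}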
Auth Setup
autopod uses Azure Entra ID for operator authentication.
- Go to Azure Portal → Entra ID → App registrations
- Create a new registration
- Set redirect URI to http://localhost (PKCE flow)
- Enable "Allow public client flows" (for device code on headless machines)
- Note the Application (client) ID and Directory (tenant) ID
ENTRA_CLIENT_ID=<application-client-id>
ENTRA_TENANT_ID=<directory-tenant-id>
With NODE_ENV=development, auth is stubbed — all tokens are accepted. Set NODE_ENV=production to enforce Entra ID.
Execution Targets
autopod runs agent containers either on local Docker or Azure Container Instances (ACI). Configure per profile:
executionTarget: local # Docker socket on daemon host (default)
# or
executionTarget: aci # Azure Container Instances
| | Local | ACI |
|---|---|---|
| Setup | Docker socket | Azure subscription + ACR |
| Cost | Host CPU/memory | Pay-per-second |
| Isolation | Docker bridge per session | Per ACI container group |
| Scale | Host limits | Azure quota |
| Cold start | Fast (cached image) | ~30s (pull from ACR) |
ACI Setup
AZURE_SUBSCRIPTION_ID=...
AZURE_RESOURCE_GROUP=...
AZURE_LOCATION=westeurope
ACR_USERNAME=...
ACR_PASSWORD=...
# Push your profile image to ACR first
ap profile warm my-app
Deployment
Docker Compose (local)
docker compose up -d
# Daemon at http://localhost:3100 with hot-reload
Azure (production)
az deployment sub create \
--location westeurope \
--template-file infra/main.bicep \
--parameters infra/parameters/prod.bicepparam
| Resource | Purpose |
|---|---|
| Container Apps Environment | Runs daemon + agent pods |
| Container Registry (ACR) | Stores Docker images |
| Key Vault | API keys, PATs, webhook URLs |
| Log Analytics | Centralized structured logging |
| Managed Identity | No credentials in code, ever |
Health Endpoint
# Basic (load balancer / HEALTHCHECK)
GET /health
→ { "status": "ok", "version": "1.0.0" }
# Full diagnostics (ops / monitoring)
GET /health?detail=full
→ {
"status": "ok", "version": "1.0.0",
"uptime_seconds": 3600,
"docker": { "connected": true, "containers_running": 4 },
"database": { "connected": true, "migrations_applied": 35 },
"queue": { "active_sessions": 2, "queued_sessions": 1, "max_concurrency": 3 }
}
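A minimal liveness probe against this endpoint (sketch; fields as shown above):
const res = await fetch("http://localhost:3100/health");
const health = await res.json();
if (health.status !== "ok") {
  throw new Error(`daemon unhealthy: ${JSON.stringify(health)}`);
}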
Environment Variables
| Variable | Default | Notes |
|---|---|---|
| PORT | 3100 | HTTP bind port |
| DB_PATH | ./autopod.db | SQLite file location |
| LOG_LEVEL | info | pino log level |
| NODE_ENV | — | Set to production to enforce auth |
| ENTRA_CLIENT_ID | — | Azure AD app ID |
| ENTRA_TENANT_ID | — | Azure AD tenant ID |
| MAX_CONCURRENCY | 3 | Session queue concurrency |
| ANTHROPIC_API_KEY | — | Default AI API key |
| TEAMS_WEBHOOK_URL | — | MS Teams notifications |
| ACR_REGISTRY_URL | — | Azure Container Registry |
FAQ
Can I use models other than Claude?
Yes. Set --runtime codex for OpenAI Codex, --runtime copilot for GitHub Copilot. The runtime interface is pluggable — same orchestration, same validation.
Do I need Azure?
For production deployment, yes — autopod is built around Azure Container Apps, ACR, and Key Vault. For local development, Docker Compose is all you need.
What happens if the agent gets stuck?
It can escalate: ask_human pauses and notifies you, ask_ai gets a second opinion, report_blocker declares a hard stop. You can also ap pause and ap nudge without killing work. After autoPauseAfter consecutive failures the session auto-pauses.
Can I review before anything gets merged?
Always. Nothing merges without explicit ap approve. The validated state means autopod thinks it's good — you always have the final say. The container is preserved after validation so you can open a live preview.
What is review_required?
When maxValidationAttempts retries are exhausted, the session enters review_required instead of hard-failing. You can extend the attempt count, create a linked workspace for manual fixes, or reject and restart fresh.
What are workspace pods for?
Workspace pods give you an interactive container (same image as agent pods) without an AI agent. Use them to explore, prototype, or write acceptance criteria manually, then hand off to an automated agent with --base-branch.
What are memory stores for?
Agents lose context between sessions. Memory stores let them suggest persistent knowledge (team conventions, known patterns) that humans approve. Approved memories are injected into future sessions' CLAUDE.md automatically.
Can agents access external data?
Yes — the action control plane gives agents read access to GitHub issues/PRs/code, Azure DevOps work items/PRs/code, Azure Logs, and Azure PIM. All responses are PII-stripped and scanned for prompt injection before reaching the agent.
Do I need an Anthropic API key?
Not necessarily. autopod supports four model providers: Anthropic API key, Claude MAX/PRO OAuth, Azure Foundry, and GitHub Copilot. Set modelProvider on your profile.
What happens if the daemon restarts mid-session?
autopod recovers. On startup it scans for sessions that were provisioning or running at shutdown and re-attaches to their containers. Work in progress is not lost.
Is there a macOS desktop app?
Yes — packages/desktop is a native SwiftUI + AppKit app. Three-column session browser, live terminal (SwiftTerm), diff viewer, validation report with screenshots, and a live preview panel. Build with Xcode and connect it to the same daemon as the CLI.
Can I use Azure DevOps instead of GitHub for PRs?
Yes. Set prProvider: ado on your profile and provide an ADO personal access token. Both dev.azure.com and visualstudio.com URL formats are supported.