Orchestration & Control

42 articles

Action-Selector Pattern

Untrusted input can hijack an agent's reasoning once tool feedback re-enters the context window, leading to arbitrary, harmful actions.

emerging
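As a rough illustration of the idea, here is a minimal Python sketch: the model only chooses from a fixed allowlist of actions, and tool output is returned to the caller rather than fed back into the context. The `call_llm` stub and the action names are placeholders, not part of any particular framework.

```python
# Hypothetical sketch: the LLM only picks an action name from a fixed allowlist.
# Tool output is returned to the caller and never re-enters the model's context,
# so injected instructions in that output cannot steer later decisions.

ALLOWED_ACTIONS = {
    "refund_order": lambda order_id: f"refund issued for {order_id}",
    "order_status": lambda order_id: f"order {order_id} is in transit",
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; assumed to return one allowed action name."""
    return "order_status"

def handle_request(user_message: str, order_id: str) -> str:
    prompt = (
        "Choose exactly one action name from "
        f"{sorted(ALLOWED_ACTIONS)} for this request:\n{user_message}"
    )
    choice = call_llm(prompt).strip()
    if choice not in ALLOWED_ACTIONS:          # reject anything off the allowlist
        return "no matching action"
    return ALLOWED_ACTIONS[choice](order_id)   # result goes to the user, not the LLM

print(handle_request("Where is my package?", "A-123"))
```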

Agent-Driven Research

Traditional research methods often lack the ability to adapt search strategies based on emerging results, limiting efficiency and potential discoveries.

established

Agent Modes by Model Personality

Different AI models have fundamentally different personalities and working styles. Treating all models the same—expecting them to work identically—leads to suboptimal outcomes. Users expect a consistent…

emerging

Autonomous Workflow Agent Architecture

Complex, long-running engineering workflows traditionally require extensive human oversight and intervention. Tasks like model training pipelines, infrastructure configuration, and multi-step deployments…

established

Burn the Boats

In fast-moving AI development, holding onto features or workflows that are "working fine" prevents teams from fully embracing new paradigms. The comfort of existing functionality becomes an anchor that…

emerging

Continuous Autonomous Task Loop Pattern

Traditional development workflows require constant human intervention for task management:

- **Manual Task Selection**: Developers spend time deciding what to work on next from todo lists
- **Context…**

established

Custom Sandboxed Background Agent

Off-the-shelf coding agents (e.g., Devin, Claude Code, Cursor) are either:

- **Too generic** - Not deeply integrated with company-specific dev environments, tools, and workflows
- **Vendor-locked** - …

emerging

Discrete Phase Separation

When AI agents attempt to simultaneously research, plan, and implement solutions, context contamination occurs. Competing priorities within a single conversation degrade output quality as the agent…

emerging

Disposable Scaffolding Over Durable Features

In a field where foundation models improve dramatically every few months, investing significant engineering effort into building complex, durable features *around* the model is extremely risky. A feature…

best-practice

Distributed Execution with Cloud Workers

Single-session AI agent execution cannot scale to meet enterprise team demands. Complex projects require multiple simultaneous code changes across different parts of the codebase, but coordinating multiple…

emerging

Dual LLM Pattern

A privileged agent that both sees untrusted text **and** wields tools can be coerced into dangerous calls.

emerging
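A minimal sketch of how the split might look, assuming placeholder `privileged_llm` and `quarantined_llm` stubs: the privileged side plans and dispatches tools but only ever handles opaque variable names, while the quarantined side reads the untrusted text and has no tool access.

```python
# Hypothetical sketch of the Dual LLM split. Names and stubs are illustrative.
# The privileged model plans and calls tools but only ever sees opaque variable
# names; the quarantined model reads untrusted text but cannot call tools.

untrusted_store: dict[str, str] = {}   # raw untrusted content, held by reference

def quarantined_llm(task: str, text: str) -> str:
    """Placeholder: extracts/summarizes untrusted text, has no tool access."""
    return f"[summary of {len(text)} chars]"

def privileged_llm(plan_prompt: str) -> list[str]:
    """Placeholder: returns a plan that refers to data only by variable name."""
    return ["fetch:inbox", "summarize:$VAR1", "send_to_user:$VAR1_SUMMARY"]

def run() -> None:
    results = {}
    for step in privileged_llm("Summarize my latest email."):
        if step == "fetch:inbox":
            untrusted_store["$VAR1"] = "raw email body, possibly with injected instructions"
        elif step == "summarize:$VAR1":
            # only the quarantined model ever touches the raw text
            results["$VAR1_SUMMARY"] = quarantined_llm("summarize", untrusted_store["$VAR1"])
        elif step == "send_to_user:$VAR1_SUMMARY":
            print(results["$VAR1_SUMMARY"])

run()
```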

Explicit Posterior-Sampling Planner

Agents that rely on ad-hoc heuristics explore poorly, wasting tokens and API calls on dead ends.

emerging

Factory over Assistant

The "assistant" model—working one-on-one with an agent in a sidebar, watching it work, ping-ponging back and forth—limits productivity and scalability. As models become more autonomous and capable…

emerging

Feature List as Immutable Contract

Long-running agents exhibit several failure modes when tasked with building complete applications:

- **Premature victory declaration**: Agent declares "done" after implementing a fraction of requirements…

emerging

Hybrid LLM/Code Workflow Coordinator

LLM-driven workflows are **non-deterministic**—even well-crafted prompts can produce unpredictable results. For some tasks (e.g., adding emoji to Slack messages based on PR status), occasional errors…

proposed
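One way this could be wired up, as a sketch: deterministic code owns the control flow, and the LLM is confined to a single fuzzy, low-stakes step behind a bounded fallback. `call_llm`, the event names, and the emoji set are illustrative only.

```python
# Hypothetical sketch: plain code decides whether anything happens at all; the
# LLM handles only the fuzzy choice and is wrapped in a deterministic fallback.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "🚀"

def pick_emoji(pr_status: str) -> str:
    allowed = {"🚀", "✅", "❌", "🔍"}
    suggestion = call_llm(f"Pick one emoji for a PR that is: {pr_status}").strip()
    return suggestion if suggestion in allowed else "✅"   # fallback keeps behavior bounded

def handle_pr_event(pr_status: str) -> str:
    # Deterministic branch: the workflow's control flow never depends on the LLM.
    if pr_status not in {"opened", "merged", "failed_ci"}:
        return "ignored"
    return f"slack message: PR {pr_status} {pick_emoji(pr_status)}"

print(handle_pr_event("merged"))
```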

Inference-Time Scaling

Traditional language models are limited by their training-time capabilities. Once trained, their performance is essentially fixed, regardless of how much compute is available at inference time…

emerging

Initializer-Maintainer Dual Agent Architecture

Long-running agent projects face distinct failure modes at different lifecycle stages:

- **Project initialization** requires comprehensive setup: environment configuration, feature specification…

emerging

Inversion of Control

Traditional "prompt-as-puppeteer" workflows force humans to spell out every step, limiting scale and creativity.

validated-in-production

Iterative Multi-Agent Brainstorming

For complex problems or creative ideation, a single AI agent instance might get stuck in a local optimum or fail to explore a diverse range of solutions. Generating a breadth of ideas can be challenging…

experimental-but-awesome

Lane-Based Execution Queueing

Traditional agent systems serialize all operations through a single execution queue, creating bottlenecks that limit throughput. Concurrent execution is desirable but risky:

- **Interleaving hazards**…

validated-in-production
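A rough asyncio sketch of the idea, with made-up lane names and a sleep standing in for real work: each lane gets its own worker that drains its queue in order, so same-lane operations stay serialized while different lanes run concurrently.

```python
# Hypothetical sketch: one worker per "lane" drains its own queue, so operations
# within a lane keep strict ordering while separate lanes interleave freely.
import asyncio

async def lane_worker(name: str, queue: asyncio.Queue) -> None:
    while True:
        job = await queue.get()
        if job is None:                 # sentinel: this lane has no more work
            queue.task_done()
            return
        await asyncio.sleep(0.1)        # stand-in for the real operation
        print(f"lane={name} finished {job}")
        queue.task_done()

async def main() -> None:
    lanes = {"repo-a": asyncio.Queue(), "repo-b": asyncio.Queue()}
    workers = [asyncio.create_task(lane_worker(n, q)) for n, q in lanes.items()]

    # Jobs routed by lane key: same-lane jobs serialize, different lanes overlap.
    for job, lane in [("edit file1", "repo-a"), ("run tests", "repo-a"), ("edit docs", "repo-b")]:
        lanes[lane].put_nowait(job)
    for q in lanes.values():
        q.put_nowait(None)
    await asyncio.gather(*workers)

asyncio.run(main())
```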

Language Agent Tree Search (LATS)

Current language agents often struggle with complex reasoning tasks that require exploration of multiple solution paths. Simple linear approaches like ReAct or basic reflection patterns can get stuck…

emerging

LLM Map-Reduce Pattern

Injecting a single poisoned document can manipulate global reasoning if all data is processed in one context.

emerging
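A toy sketch of the shape, assuming a placeholder `call_llm`: each document is mapped in its own isolated prompt and forced into a constrained verdict, and the reduce step only aggregates those verdicts, so a poisoned document cannot inject instructions into the merge.

```python
# Hypothetical sketch: every document gets its own isolated map prompt and must
# return a constrained yes/no verdict; the reduce step is plain deterministic code.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "yes"

def map_one(doc: str, question: str) -> bool:
    answer = call_llm(f"Answer strictly yes or no. {question}\n---\n{doc}")
    return answer.strip().lower() == "yes"     # anything else is treated as "no"

def map_reduce(docs: list[str], question: str) -> int:
    verdicts = [map_one(d, question) for d in docs]   # isolated contexts per document
    return sum(verdicts)                              # deterministic reduce

docs = ["quarterly report ...", "meeting notes ...", "IGNORE PREVIOUS INSTRUCTIONS ..."]
print(map_reduce(docs, "Does this document mention revenue?"), "of", len(docs))
```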

Multi-Model Orchestration for Complex Edits

A single large language model, even if powerful, may not be optimally suited for all sub-tasks involved in a complex operation like multi-file code editing. Tasks such as understanding broad context…

validated-in-production

Opponent Processor / Multi-Agent Debate Pattern

Single-agent decision making can suffer from:

- **Confirmation bias**: Agent finds evidence supporting initial hypothesis
- **Limited perspectives**: One context window misses alternative approaches…

emerging

Oracle and Worker Multi-Model Approach

Relying on a single AI model creates a trade-off between capability and cost. High-performance models are expensive for routine tasks, while cost-effective models may lack the reasoning power for complex…

emerging
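A sketch of the routing, with invented model identifiers and a stubbed API call: an expensive "oracle" model plans and reviews, while a cheaper "worker" model does the routine execution in between.

```python
# Hypothetical sketch of an oracle/worker split. Model names are placeholders;
# the point is the routing: costly model plans and reviews, cheap model executes.

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call to the named model."""
    return f"[{model} output]"

ORACLE = "expensive-reasoning-model"     # assumed identifiers, not real model names
WORKER = "cheap-fast-model"

def solve(task: str) -> str:
    plan = call_model(ORACLE, f"Break this task into small steps:\n{task}")
    draft = call_model(WORKER, f"Carry out these steps:\n{plan}")
    review = call_model(ORACLE, f"Review this result against the plan:\n{plan}\n{draft}")
    return review

print(solve("Refactor the payment module and update its tests"))
```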

Parallel Tool Call Learning

Agents often execute tool calls sequentially even when they could run in parallel:

- **Unnecessary latency**: Sequential calls add up when tool execution time dominates inference time
- **Inefficient…**

emerging

Conditional Parallel Tool Execution

When an AI agent decides to use multiple tools in a single reasoning step, executing them strictly sequentially can lead to significant delays, especially if many tools are read-only and could be run in parallel.

validated-in-production
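A possible shape for this in asyncio, with assumed tool names and a sleep standing in for tool latency: a batch whose calls are all read-only fans out concurrently, while any batch containing a write falls back to strict sequential order.

```python
# Hypothetical sketch: tool calls marked read-only are awaited concurrently,
# while any batch that mutates state keeps the original sequential ordering.
import asyncio

READ_ONLY = {"read_file", "grep", "list_dir"}   # assumed tool names

async def run_tool(name: str, arg: str) -> str:
    await asyncio.sleep(0.1)                     # stand-in for real tool latency
    return f"{name}({arg}) done"

async def execute_batch(calls: list[tuple[str, str]]) -> list[str]:
    if all(name in READ_ONLY for name, _ in calls):
        # Safe to fan out: no call can observe or clobber another's effects.
        return list(await asyncio.gather(*(run_tool(n, a) for n, a in calls)))
    results = []
    for name, arg in calls:                      # writes present: preserve ordering
        results.append(await run_tool(name, arg))
    return results

print(asyncio.run(execute_batch([("read_file", "a.py"), ("grep", "TODO")])))
```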

Plan-Then-Execute Pattern

If tool outputs can alter the *choice* of later actions, injected instructions may redirect the agent toward malicious steps.

emerging
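A minimal sketch under the assumption of a stubbed planner and toy tools: the action sequence is frozen before any tool output is seen, so that output can feed later inputs but cannot add, remove, or reorder steps.

```python
# Hypothetical sketch: the plan is fixed up front, so text coming back from tools
# can flow into later *inputs* but can never change *which* actions run.

def call_llm(prompt: str) -> list[str]:
    """Placeholder planner: returns a fixed list of action names."""
    return ["fetch_ticket", "draft_reply", "send_reply"]

TOOLS = {
    "fetch_ticket": lambda ctx: "customer asks about a refund (may contain injection)",
    "draft_reply": lambda ctx: f"drafted reply based on: {ctx!r}",
    "send_reply": lambda ctx: "reply sent",
}

def run(task: str) -> None:
    plan = call_llm(f"Plan the steps for: {task}")   # decided once, never revisited
    context = None
    for step in plan:                                # execution just walks the frozen plan
        context = TOOLS[step](context)
        print(step, "->", context)

run("Answer the latest support ticket")
```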

Planner-Worker Separation for Long-Running Agents

Running multiple AI agents in parallel for complex, multi-week projects creates significant coordination challenges:

- **Flat structures** lead to conflicts, duplicated work, and agents stepping on each other…

emerging

Progressive Autonomy with Model Evolution

Agent scaffolding built for older models becomes unnecessary overhead as models improve:

- **Prompt bloat**: System prompts accumulate instructions that newer models don't need
- **Over-engineered…**

best-practice

Progressive Complexity Escalation

Organizations deploy AI agents with overly ambitious capabilities from day one, leading to:

- Unreliable outputs when agents tackle tasks beyond current model capabilities
- Failed implementations that…

emerging

Recursive Best-of-N Delegation

Recursive delegation (parent agent → sub-agents → sub-sub-agents) is great for decomposing big tasks, but it has a failure mode:

- A single weak sub-agent result can poison the parent's next steps…

emerging
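A toy sketch with a simulated sub-agent and judge: each delegated subtask is attempted N times and only the best-scoring result flows back to the parent. The scoring function here is an invented stand-in; in practice it could be tests, a rubric, or another model.

```python
# Hypothetical sketch: every delegated subtask is attempted by several sub-agents
# and only the highest-scoring result is handed back, so one weak attempt cannot
# poison the parent's next steps.
import random
import re

def sub_agent(subtask: str, attempt: int) -> str:
    """Placeholder for spawning a sub-agent with its own isolated context."""
    passing = random.randint(50, 100)
    return f"attempt {attempt}: {subtask} ({passing}% tests passing)"

def score(result: str) -> int:
    """Placeholder judge; could be real test results or an LLM grading a rubric."""
    match = re.search(r"(\d+)% tests passing", result)
    return int(match.group(1)) if match else 0

def delegate_best_of_n(subtask: str, n: int = 3) -> str:
    attempts = [sub_agent(subtask, i) for i in range(n)]
    return max(attempts, key=score)          # keep only the strongest attempt

print(delegate_best_of_n("implement the pagination helper"))
```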

Self-Rewriting Meta-Prompt Loop

Static system prompts become stale or overly brittle as an agent encounters new tasks and edge-cases. Manually editing them is slow and error-prone.

emerging

Specification-Driven Agent Development

Hand-crafted prompts or loose user stories leave room for ambiguity; agents can wander, over-interpret, or produce code that conflicts with stakeholder intent.

proposed

Stop Hook Auto-Continue Pattern

Agents complete their turn and return control to the user even when the task isn't truly done. Common scenarios:

- Code compiles but tests fail
- Changes made but quality checks haven't passed
- Feature…

emerging
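A generic sketch of the loop, not tied to any specific agent runtime: a deterministic gate intercepts the agent's attempt to stop and re-prompts it to continue until the check passes or a turn budget runs out. The agent step and the gate are simulated stubs.

```python
# Hypothetical sketch of a stop hook: when the agent tries to end its turn, a
# deterministic gate decides whether to accept that or push it back to work.

def agent_step(instruction: str) -> str:
    """Placeholder for one agent turn."""
    return f"worked on: {instruction}"

_gate_calls = {"n": 0}

def quality_gate_passes() -> bool:
    """Simulated gate; in practice run the real test/lint command and check its exit code."""
    _gate_calls["n"] += 1
    return _gate_calls["n"] >= 3             # pretend the first two checks fail

def run_with_stop_hook(task: str, max_turns: int = 5) -> None:
    instruction = task
    for turn in range(max_turns):
        print(agent_step(instruction))
        if quality_gate_passes():            # only yield control once the gate is green
            print(f"done after {turn + 1} turn(s)")
            return
        instruction = "Checks are still failing; keep going."
    print(f"gave up after {max_turns} turns")

run_with_stop_hook("Fix the failing unit tests")
```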

Sub-Agent Spawning

Large multi-file tasks blow out the main agent's context window and reasoning budget. You need a way to delegate work to specialized agents with isolated contexts and tools.

validated-in-production

Subject Hygiene for Task Delegation

When delegating work to subagents via the Task tool, empty or generic task subjects make conversations:

- **Untraceable**: Cannot identify what a subagent was working on
- **Unreferencable**: Cannot…

emerging

Swarm Migration Pattern

Large-scale code migrations are time-consuming when done sequentially:

- **Framework upgrades** (e.g., testing library A → testing library B)
- **Lint rule rollouts** across hundreds of files
- **API…**

validated-in-production

Three-Stage Perception Architecture

Complex AI agents often struggle with unstructured inputs and need a systematic way to process information before taking action. Without a clear separation of concerns, agents can become monolithic and…

proposed

Tool Capability Compartmentalization

Model Context Protocol (MCP) encourages "mix-and-match" tools—often combining private-data readers, web fetchers, and writers in a single callable unit. This amplifies the lethality of prompt-injection…

emerging

Tool Selection Guide

AI agents often struggle to select the optimal tool for a given task, leading to inefficient workflows. Common anti-patterns include:

- Using `Write` when `Edit` would be more appropriate
- Launching…

emerging

Tree-of-Thought Reasoning

Linear chain-of-thought reasoning can get stuck on complex problems, missing alternative approaches or failing to backtrack.

established
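A beam-style toy sketch with placeholder propose/evaluate functions: several candidate thoughts are expanded per state, scored, and only the best few survive to the next depth, so weak branches are abandoned instead of being followed linearly.

```python
# Hypothetical beam-style sketch of tree-of-thought: expand several candidate
# next steps per state, score them, keep the best few, and repeat.

def propose(state: str, k: int = 3) -> list[str]:
    """Placeholder for an LLM proposing k candidate next thoughts."""
    return [f"{state} -> option{i}" for i in range(k)]

def evaluate(state: str) -> float:
    """Placeholder for an LLM (or heuristic) scoring a partial solution."""
    return float(len(state) % 7)             # arbitrary stand-in score

def tree_of_thought(problem: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam]         # weak branches fall away here
    return frontier[0]

print(tree_of_thought("24 from the numbers 4 4 6 8"))
```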