
Advanced Grammar: Conditional Logic, Nesting, and Learning

Patrick Roebuck
10 min read
ai · conditional-logic · learning · workflows · advanced

The patterns that make agent teams truly intelligent.


Part 4 of the Grammar of AI Collaboration series.

In the previous posts, we covered the six core composition patterns. Today we unveil the advanced grammar: patterns 7-10, which enable conditional execution, recursive composition, and systems that learn from experience.

These aren't incremental improvements. They're the difference between a phrasebook and fluency.

Pattern 7: Conditional Execution

The problem: Traditional workflows are linear. Every step runs regardless of context. But real decisions have branches:

  • IF confidence is low → get expert review
  • IF security finds critical issues → halt release
  • IF code complexity is high → add more reviewers

The solution: Conditional strategy with rich predicate evaluation.

JSON Predicate Syntax

We support MongoDB-style query operators:

# Comparison operators
{"confidence": {"$lt": 0.8}}        # Less than
{"score": {"$gte": 90}}             # Greater than or equal
{"status": {"$eq": "approved"}}     # Equals
{"tier": {"$in": ["CAPABLE", "PREMIUM"]}}  # In list

# Logical operators
{"$and": [
    {"confidence": {"$gt": 0.5}},
    {"errors": {"$eq": 0}}
]}

{"$or": [
    {"status": {"$eq": "approved"}},
    {"override": {"$eq": True}}
]}

{"$not": {"status": {"$eq": "failed"}}}

# Nested paths
{"result.metrics.coverage": {"$gt": 80}}

# Regex matching
{"filename": {"$regex": r"test_.*\.py"}}

# Existence check
{"optional_field": {"$exists": True}}

Implementation

from dataclasses import dataclass

# ConditionType, AgentTemplate, ExecutionStrategy, StrategyResult, Agent,
# get_strategy, and ConditionEvaluator are assumed from the orchestration library.

@dataclass
class Condition:
    """Evaluable condition for branching."""
    predicate: dict | str  # JSON predicate or natural language
    condition_type: ConditionType = ConditionType.JSON_PREDICATE
    description: str = ""

@dataclass
class Branch:
    """Execution branch with agents."""
    agents: list[AgentTemplate]
    strategy: str = "sequential"
    label: str = ""

class ConditionalStrategy(ExecutionStrategy):
    """IF-THEN-ELSE execution pattern."""

    def __init__(
        self,
        condition: Condition,
        then_branch: Branch,
        else_branch: Branch | None = None
    ):
        self.condition = condition
        self.then_branch = then_branch
        self.else_branch = else_branch
        self.evaluator = ConditionEvaluator()

    async def execute(
        self,
        agents: list[Agent],
        context: dict
    ) -> StrategyResult:
        # Evaluate condition against context
        condition_met = self.evaluator.evaluate(self.condition, context)

        # Select branch
        if condition_met:
            branch = self.then_branch
            branch_label = "then"
        elif self.else_branch:
            branch = self.else_branch
            branch_label = "else"
        else:
            # No else branch, condition not met
            return StrategyResult(
                success=True,
                outputs=[],
                aggregated_output={"branch_taken": None},
                total_duration=0
            )

        # Execute selected branch
        strategy = get_strategy(branch.strategy)
        result = await strategy.execute(branch.agents, context)

        return StrategyResult(
            success=result.success,
            outputs=result.outputs,
            aggregated_output={
                **result.aggregated_output,
                "_conditional": {
                    "condition": self.condition.description,
                    "condition_met": condition_met,
                    "branch_taken": branch_label
                }
            },
            total_duration=result.total_duration
        )

Real-World Example: Smart Code Review

# Define conditional review workflow
conditional = ConditionalStrategy(
    condition=Condition(
        predicate={"confidence": {"$lt": 0.8}},
        description="Low confidence in automated analysis"
    ),
    then_branch=Branch(
        agents=[spawn("expert_reviewer", tier="PREMIUM")],
        strategy="sequential",
        label="Expert Review Required"
    ),
    else_branch=Branch(
        agents=[spawn("quick_validator", tier="CHEAP")],
        strategy="sequential",
        label="Auto-Approve"
    )
)

# Execute with high confidence → auto-approve
result = await conditional.execute([], {"confidence": 0.95})
# → Uses CHEAP quick_validator
# → Output: {"_conditional": {"branch_taken": "else"}}

# Execute with low confidence → expert review
result = await conditional.execute([], {"confidence": 0.65})
# → Uses PREMIUM expert_reviewer
# → Output: {"_conditional": {"branch_taken": "then"}}

Natural Language Conditions

For complex conditions that are hard to express as JSON:

# Natural language condition
condition = Condition(
    predicate="The security audit found critical vulnerabilities that could expose user data",
    condition_type=ConditionType.NATURAL_LANGUAGE,
    description="Critical security issues detected"
)

# Evaluator uses LLM to assess condition against context
# Falls back to keyword matching if LLM unavailable
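
The LLM path depends on whichever model client the deployment wires in, but the keyword fallback is easy to picture. A heuristic sketch (the 0.5 threshold and stopword list are arbitrary choices of this example, not the library's):

def keyword_fallback(predicate_text: str, context: dict) -> bool:
    """Crude fallback: what fraction of the condition's salient words
    appear anywhere in the stringified context?"""
    stopwords = {"the", "a", "an", "that", "could", "to", "and", "of", "is"}
    words = {w for w in predicate_text.lower().split() if w not in stopwords}
    haystack = str(context).lower()
    hits = sum(1 for w in words if w in haystack)
    return bool(words) and hits / len(words) >= 0.5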

Pattern 8: Multi-Conditional (Switch/Case)

For multiple branches:

class MultiConditionalStrategy(ExecutionStrategy):
    """SWITCH-CASE style multiple conditions."""

    def __init__(
        self,
        conditions: list[tuple[Condition, Branch]],
        default_branch: Branch | None = None
    ):
        self.conditions = conditions  # Evaluated in order
        self.default_branch = default_branch

# Example: Route by severity
multi = MultiConditionalStrategy(
    conditions=[
        (Condition({"severity": "critical"}), Branch([spawn("emergency_team")])),
        (Condition({"severity": "high"}), Branch([spawn("urgent_team")])),
        (Condition({"severity": "medium"}), Branch([spawn("standard_team")])),
    ],
    default_branch=Branch([spawn("low_priority_team")])
)
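
The class above shows only the constructor. Its execute is presumably a first-match-wins loop over the condition list; a sketch of the method body, reusing the helpers from Pattern 7:

    async def execute(self, agents: list, context: dict) -> StrategyResult:
        evaluator = ConditionEvaluator()
        # Conditions are evaluated in declaration order; first match wins
        for condition, branch in self.conditions:
            if evaluator.evaluate(condition, context):
                strategy = get_strategy(branch.strategy)
                return await strategy.execute(branch.agents, context)
        # No condition matched: fall through to the default branch, if any
        if self.default_branch:
            strategy = get_strategy(self.default_branch.strategy)
            return await strategy.execute(self.default_branch.agents, context)
        return StrategyResult(success=True, outputs=[],
                              aggregated_output={"branch_taken": None},
                              total_duration=0)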

Pattern 9: Nested Workflows (Sentences within Sentences)

The problem: Complex tasks need hierarchical decomposition. A "release preparation" workflow might contain "security deep dive" which itself contains "threat modeling" and "vulnerability scan".

The solution: Recursive workflow composition with safety guards.

Workflow Definitions

from dataclasses import dataclass, field
from typing import Any

@dataclass
class WorkflowDefinition:
    """Reusable workflow that can be nested."""
    id: str
    agents: list[AgentTemplate]
    strategy: str
    description: str = ""
    quality_gates: dict[str, Any] = field(default_factory=dict)

# Register workflows
WORKFLOW_REGISTRY: dict[str, WorkflowDefinition] = {}

def register_workflow(workflow: WorkflowDefinition):
    WORKFLOW_REGISTRY[workflow.id] = workflow

# Example workflows
register_workflow(WorkflowDefinition(
    id="security-deep-dive",
    agents=[
        spawn("vuln_scanner"),
        spawn("threat_modeler"),
        spawn("compliance_checker")
    ],
    strategy="parallel",
    description="Comprehensive security analysis"
))

register_workflow(WorkflowDefinition(
    id="release-prep",
    agents=[
        WorkflowReference("security-deep-dive"),  # Nested!
        spawn("test_coverage_analyzer"),
        spawn("documentation_checker")
    ],
    strategy="parallel"
))

Workflow References

Workflows can be referenced by ID or defined inline:

@dataclass
class WorkflowReference:
    """Reference to nested workflow."""
    workflow_id: str = ""           # Reference by ID
    inline: "InlineWorkflow | None" = None  # Or define inline
    context_mapping: dict[str, str] = field(default_factory=dict)
    result_key: str = "nested_result"

# Reference by ID
ref = WorkflowReference(workflow_id="security-deep-dive")

# Or inline definition
ref = WorkflowReference(inline=InlineWorkflow(
    agents=[spawn("quick_scan"), spawn("quick_review")],
    strategy="sequential"
))

Depth Limits and Cycle Detection

Nesting needs safety guards:

class NestingContext:
    """Track nesting depth and detect cycles."""
    CONTEXT_KEY = "_nesting"
    DEFAULT_MAX_DEPTH = 3

    def __init__(self, max_depth: int = DEFAULT_MAX_DEPTH):
        self.current_depth: int = 0
        self.max_depth: int = max_depth
        self.workflow_stack: list[str] = []

    def can_nest(self, workflow_id: str = "") -> bool:
        """Check if nesting is allowed."""
        # Depth limit
        if self.current_depth >= self.max_depth:
            return False

        # Cycle detection
        if workflow_id and workflow_id in self.workflow_stack:
            return False  # Would create cycle!

        return True

    def enter(self, workflow_id: str) -> "NestingContext":
        """Create child context for nested workflow."""
        child = NestingContext(max_depth=self.max_depth)
        child.current_depth = self.current_depth + 1
        child.workflow_stack = self.workflow_stack + [workflow_id]
        return child
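
Putting the guards to work: a sketch of how a runner might resolve registered workflows while honoring the depth and cycle limits. This is an assumed shape for the orchestrate internals, simplified to by-ID references, with nested workflows run before their sibling agents:

async def run_workflow(workflow_id: str, context: dict,
                       nesting: NestingContext | None = None) -> StrategyResult:
    nesting = nesting or NestingContext()
    if not nesting.can_nest(workflow_id):
        raise RuntimeError(
            f"Refusing '{workflow_id}': depth limit reached or cycle detected "
            f"(stack: {nesting.workflow_stack})"
        )

    workflow = WORKFLOW_REGISTRY[workflow_id]
    child = nesting.enter(workflow_id)

    agents = []
    for item in workflow.agents:
        if isinstance(item, WorkflowReference):
            # Recurse with the child context so depth/cycle checks apply
            nested = await run_workflow(item.workflow_id, context, child)
            context[item.result_key] = nested.aggregated_output
        else:
            agents.append(item)

    strategy = get_strategy(workflow.strategy)
    return await strategy.execute(agents, context)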

Real-World Example: Hierarchical Release Check

# Level 3: Atomic checks
threat_modeling = WorkflowDefinition(
    id="threat-modeling",
    agents=[spawn("threat_analyst"), spawn("attack_surface_mapper")],
    strategy="parallel"
)

# Level 2: Security deep dive (contains threat modeling)
security_deep_dive = WorkflowDefinition(
    id="security-deep-dive",
    agents=[
        spawn("vuln_scanner"),
        WorkflowReference("threat-modeling"),  # Nested level 3
        spawn("compliance_checker")
    ],
    strategy="parallel"
)

# Level 1: Release prep (contains security deep dive)
release_prep = WorkflowDefinition(
    id="release-prep",
    agents=[
        WorkflowReference("security-deep-dive"),  # Nested level 2
        spawn("test_analyzer"),
        spawn("docs_checker"),
        spawn("performance_validator")
    ],
    strategy="parallel"
)

# Execute - unfolds into full tree
result = await orchestrate("release-prep")

# Execution tree:
# release-prep (depth 0)
# ├─ security-deep-dive (depth 1)
# │  ├─ vuln_scanner
# │  ├─ threat-modeling (depth 2)
# │  │  ├─ threat_analyst
# │  │  └─ attack_surface_mapper
# │  └─ compliance_checker
# ├─ test_analyzer
# ├─ docs_checker
# └─ performance_validator

Pattern 10: Learning Grammar

The problem: Static patterns don't improve. You manually tune compositions based on gut feeling.

The solution: Track execution outcomes, recommend patterns based on historical success.

Execution Records

Every execution creates a record:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ExecutionRecord:
    """Record of a pattern execution."""
    pattern: str              # "sequential", "parallel", etc.
    success: bool             # Did it achieve quality gates?
    duration_seconds: float   # How long?
    cost: float = 0.0         # API costs
    confidence: float = 0.0   # Quality score
    context_features: dict[str, Any] = field(default_factory=dict)
    timestamp: str = field(default_factory=lambda: datetime.now().isoformat())

Pattern Statistics

Aggregated metrics per pattern:

@dataclass
class PatternStats:
    """Aggregated statistics for a pattern."""
    pattern: str
    total_executions: int = 0
    success_count: int = 0
    total_duration: float = 0.0
    total_cost: float = 0.0
    total_confidence: float = 0.0

    @property
    def success_rate(self) -> float:
        if self.total_executions == 0:
            return 0.0
        return self.success_count / self.total_executions

    @property
    def avg_duration(self) -> float:
        if self.total_executions == 0:
            return 0.0
        return self.total_duration / self.total_executions

    def update(self, record: ExecutionRecord):
        """Update stats with new execution."""
        self.total_executions += 1
        if record.success:
            self.success_count += 1
        self.total_duration += record.duration_seconds
        self.total_cost += record.cost
        self.total_confidence += record.confidence

Context Similarity Matching

Find patterns that worked in similar contexts:

@dataclass
class ContextSignature:
    """Normalized context for similarity matching."""
    task_type: str = ""
    agent_count: int = 0
    has_conditions: bool = False
    priority: str = "normal"

    @classmethod
    def from_context(cls, context: dict) -> "ContextSignature":
        return cls(
            task_type=context.get("task_type", ""),
            # Honor an explicit agent_count key; fall back to counting agents
            agent_count=context.get("agent_count", len(context.get("agents", []))),
            has_conditions="_conditional" in context,
            priority=context.get("priority", "normal")
        )

    def similarity(self, other: "ContextSignature") -> float:
        """Calculate similarity score (0-1)."""
        scores = []

        # Task type similarity
        if self.task_type == other.task_type:
            scores.append(1.0)
        elif self.task_type[:4] == other.task_type[:4]:  # Prefix match
            scores.append(0.7)
        else:
            scores.append(0.0)

        # Agent count similarity
        max_count = max(self.agent_count, other.agent_count, 1)
        diff = abs(self.agent_count - other.agent_count)
        scores.append(1.0 - (diff / max_count))

        # Boolean features
        scores.append(1.0 if self.has_conditions == other.has_conditions else 0.0)
        scores.append(1.0 if self.priority == other.priority else 0.5)

        return sum(scores) / len(scores)
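
Both the recommender and the learner below lean on a LearningStore with three methods: add_record, find_similar_records, and get_all_stats. Persistence is out of scope here; a minimal in-memory sketch that satisfies the interface:

class LearningStore:
    """In-memory store; a real implementation would persist to storage_path."""

    def __init__(self, storage_path: str | None = None):
        self.storage_path = storage_path
        self.records: list[ExecutionRecord] = []
        self.stats: dict[str, PatternStats] = {}

    def add_record(self, record: ExecutionRecord):
        self.records.append(record)
        stats = self.stats.setdefault(
            record.pattern, PatternStats(pattern=record.pattern))
        stats.update(record)

    def find_similar_records(
        self, signature: ContextSignature, limit: int = 20
    ) -> list[tuple[ExecutionRecord, float]]:
        scored = [
            (r, signature.similarity(ContextSignature.from_context(r.context_features)))
            for r in self.records
        ]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return scored[:limit]

    def get_all_stats(self) -> list[PatternStats]:
        # Sorted best-first so callers can slice off the top patterns
        return sorted(self.stats.values(),
                      key=lambda s: s.success_rate, reverse=True)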

Hybrid Recommendation Engine

Combines similarity matching with statistical fallback:

from collections import defaultdict

class PatternRecommender:
    """Recommend patterns based on historical success."""

    def __init__(self, store: LearningStore):
        self.store = store

    def recommend(
        self,
        context: dict,
        top_k: int = 3
    ) -> list[PatternRecommendation]:
        """Get pattern recommendations for context."""
        signature = ContextSignature.from_context(context)

        # Strategy 1: Find similar contexts
        similar_records = self.store.find_similar_records(signature, limit=20)

        if similar_records:
            # Aggregate by pattern
            pattern_scores = defaultdict(list)
            for record, similarity in similar_records:
                if record.success:
                    pattern_scores[record.pattern].append(
                        similarity * record.confidence
                    )

            recommendations = []
            for pattern, scores in pattern_scores.items():
                avg_score = sum(scores) / len(scores)
                recommendations.append(PatternRecommendation(
                    pattern=pattern,
                    confidence=avg_score,
                    reason="Similar contexts had success with this pattern",
                    expected_success_rate=avg_score
                ))

            recommendations.sort(key=lambda r: r.confidence, reverse=True)
            if recommendations:
                return recommendations[:top_k]

        # Strategy 2: Statistical fallback (stats come back sorted best-first)
        all_stats = self.store.get_all_stats()
        if all_stats:
            return [
                PatternRecommendation(
                    pattern=stats.pattern,
                    confidence=stats.success_rate,
                    reason="High overall success rate",
                    expected_success_rate=stats.success_rate
                )
                for stats in all_stats[:top_k]
            ]

        # Strategy 3: Default recommendations
        return [
            PatternRecommendation(
                pattern="parallel",
                confidence=0.5,
                reason="Default recommendation for unknown contexts",
                expected_success_rate=0.5
            )
        ]
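
PatternRecommendation itself is just a record; its fields can be read straight off the usage above (a sketch):

@dataclass
class PatternRecommendation:
    pattern: str
    confidence: float
    reason: str = ""
    expected_success_rate: float = 0.0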

The Pattern Learner Interface

class PatternLearner:
    """Main interface for pattern learning."""

    def __init__(self, storage_path: str | None = None):
        self.store = LearningStore(storage_path)
        self.recommender = PatternRecommender(self.store)

    def record(
        self,
        pattern: str,
        success: bool,
        duration: float,
        cost: float = 0.0,
        confidence: float = 0.0,
        context: dict | None = None
    ):
        """Record an execution outcome."""
        record = ExecutionRecord(
            pattern=pattern,
            success=success,
            duration_seconds=duration,
            cost=cost,
            confidence=confidence,
            context_features=context or {}
        )
        self.store.add_record(record)

    def recommend(
        self,
        context: dict,
        top_k: int = 3
    ) -> list[PatternRecommendation]:
        """Get recommended patterns for context."""
        return self.recommender.recommend(context, top_k)

    def report(self) -> str:
        """Generate learning report."""
        stats = self.store.get_all_stats()
        if not stats:
            return "No learning data collected yet."

        lines = ["# Pattern Learning Report\n"]
        for s in stats:
            lines.append(f"## {s.pattern}")
            lines.append(f"- Executions: {s.total_executions}")
            lines.append(f"- Success rate: {s.success_rate:.1%}")
            lines.append(f"- Avg duration: {s.avg_duration:.2f}s")
            lines.append("")

        return "\n".join(lines)

Real-World Example: Learning in Action

learner = PatternLearner()

# Record executions over time
learner.record(
    pattern="parallel",
    success=True,
    duration=5.2,
    confidence=0.92,
    context={"task_type": "security_scan", "agent_count": 4}
)

learner.record(
    pattern="sequential",
    success=False,
    duration=12.5,
    confidence=0.45,
    context={"task_type": "security_scan", "agent_count": 4}
)

# ... many more executions ...

# Later, for a similar task:
recommendations = learner.recommend({
    "task_type": "security_scan",
    "agent_count": 3
})

# Output:
# [
#   PatternRecommendation(
#     pattern="parallel",
#     confidence=0.89,
#     reason="Similar contexts had success with this pattern",
#     expected_success_rate=0.89
#   ),
#   ...
# ]

# System now knows: "For security scans, parallel pattern works best"

The Complete Picture

With all 10 patterns, the system can express arbitrarily complex orchestration:

orchestrate("prepare mission-critical release")

→ Meta-orchestrator analyzes:
  - Task: release preparation
  - Priority: critical
  - Historical success: parallel with nested security

→ Composes:
  parallel([
    nested("security-deep-dive"),
    conditional(
      if: {coverage: {$lt: 80}},
      then: sequential([coverage_analyzer, test_generator]),
      else: quick_validator
    ),
    teaching(junior_docs, expert_docs)
  ])

→ Executes with learning enabled

→ Records outcome:
  - Pattern: parallel+nested+conditional+teaching
  - Success: True
  - Duration: 45s
  - Context: {task: "release", priority: "critical"}

→ Next similar task benefits from learned composition

Summary: The Grammar Expanded

| Pattern           | Type     | Use Case                   |
|-------------------|----------|----------------------------|
| Sequential        | Core     | Dependencies between steps |
| Parallel          | Core     | Independent validations    |
| Debate            | Core     | Expert consensus           |
| Teaching          | Core     | Cost optimization          |
| Refinement        | Core     | Quality ladder             |
| Adaptive          | Core     | Variable complexity        |
| Conditional       | Advanced | Runtime branching          |
| Multi-Conditional | Advanced | Switch/case routing        |
| Nested            | Advanced | Hierarchical workflows     |
| Learning          | Advanced | Outcome-based improvement  |

What's Possible Now

With the complete grammar:

  • Self-improving systems that learn optimal patterns from experience
  • Conditional workflows that adapt to runtime context
  • Hierarchical composition for enterprise-scale orchestration
  • Intelligent routing based on historical success rates
  • Cost optimization that improves automatically over time

This is no longer a workflow engine. It's a language for AI collaboration.


The advanced grammar patterns are available in Attune AI v4.4.0.

Patrick Roebuck is the creator of Attune AI. Follow @DeepStudyAI for updates.
