
Multi-Agent Coordination

Conflict resolution and monitoring for distributed agent teams.

Overview

The Multi-Agent Coordination module enables:

  • Pattern Conflict Resolution: When multiple agents discover conflicting patterns, resolve which takes precedence
  • Team Monitoring: Track agent performance, collaboration efficiency, and system health
  • Shared Memory: Coordinate agents via shared pattern libraries (see Pattern Library)

Architecture

graph TB
    subgraph "Agent Team"
        A1[Code Reviewer]
        A2[Test Generator]
        A3[Security Analyzer]
    end

    subgraph "Coordination Layer"
        PL[PatternLibrary]
        CR[ConflictResolver]
        AM[AgentMonitor]
    end

    A1 --> PL
    A2 --> PL
    A3 --> PL

    PL --> CR
    CR --> PL

    A1 --> AM
    A2 --> AM
    A3 --> AM

Quick Start

from empathy_os import (
    EmpathyOS,
    PatternLibrary,
    ConflictResolver,
    AgentMonitor,
)

# 1. Create shared infrastructure
library = PatternLibrary()
resolver = ConflictResolver()
monitor = AgentMonitor(pattern_library=library)

# 2. Create agent team with shared library
code_reviewer = EmpathyOS(
    user_id="code_reviewer",
    target_level=4,
    shared_library=library
)

test_generator = EmpathyOS(
    user_id="test_generator",
    target_level=3,
    shared_library=library
)

# 3. Agents discover and share patterns
# (Code reviewer finds a pattern, test generator can use it)

# 4. Monitor team collaboration
stats = monitor.get_team_stats()
print(f"Collaboration efficiency: {stats['collaboration_efficiency']:.0%}")

ConflictResolver

Resolves conflicts between patterns from different agents.

Class Reference

When multiple agents contribute patterns that address the same issue but recommend different approaches, the ConflictResolver determines which pattern should take precedence.

Example

resolver = ConflictResolver()

# Two agents have different recommendations
review_pattern = Pattern(
    id="use_list_comprehension",
    agent_id="code_reviewer",
    pattern_type="performance",
    name="Use list comprehension",
    description="Use list comprehension for better performance",
    confidence=0.85
)

style_pattern = Pattern(
    id="use_explicit_loop",
    agent_id="style_agent",
    pattern_type="style",
    name="Use explicit loop",
    description="Use explicit loop for better readability",
    confidence=0.80
)

resolution = resolver.resolve_patterns(
    patterns=[review_pattern, style_pattern],
    context={"team_priority": "readability", "code_complexity": "high"}
)
print(f"Winner: {resolution.winning_pattern.name}")

__init__(default_strategy=ResolutionStrategy.WEIGHTED_SCORE, team_priorities=None)

Initialize the ConflictResolver.

Parameters:

Name              Type                   Description                                 Default
default_strategy  ResolutionStrategy     Strategy to use when not specified          WEIGHTED_SCORE
team_priorities   TeamPriorities | None  Team-configured priorities for resolution   None

clear_history()

Clear resolution history

get_resolution_stats()

Get statistics about resolution history

resolve_patterns(patterns, context=None, strategy=None)

Resolve conflict between multiple patterns.

Parameters:

Name      Type                       Description                                          Default
patterns  list[Pattern]              List of conflicting patterns (minimum 2)             required
context   dict[str, Any] | None      Current context for resolution decision              None
strategy  ResolutionStrategy | None  Resolution strategy (uses default if not specified)  None

Returns:

Type              Description
ResolutionResult  ResolutionResult with winning pattern and reasoning

Raises:

Type        Description
ValueError  If fewer than 2 patterns provided

Resolution Strategies

Strategy Description Best For
HIGHEST_CONFIDENCE Pick pattern with highest confidence score When accuracy is paramount
MOST_RECENT Pick most recently discovered pattern Fast-changing domains
BEST_CONTEXT_MATCH Pick best match for current context Context-sensitive decisions
TEAM_PRIORITY Use team-configured priorities Enforcing team standards
WEIGHTED_SCORE Combine all factors (default) Balanced decisions
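As an illustration of the default strategy's intent, here is a hedged sketch of a weighted combination. The factor names and weights below are assumptions for illustration only, not the library's actual scoring:

```python
# Illustrative sketch (not ConflictResolver's real algorithm): combine
# several normalized factors so no single one dominates the decision.
def weighted_score(confidence, recency, context_match, team_priority,
                   weights=(0.4, 0.1, 0.2, 0.3)):
    """Each factor is in [0, 1]; weights sum to 1."""
    factors = (confidence, recency, context_match, team_priority)
    return sum(w * f for w, f in zip(weights, factors))

# A higher-confidence pattern with a weak context match...
perf_score = weighted_score(confidence=0.85, recency=0.5,
                            context_match=0.3, team_priority=0.2)
# ...can still lose to a pattern the team prioritizes.
style_score = weighted_score(confidence=0.80, recency=0.5,
                             context_match=0.7, team_priority=0.9)
assert style_score > perf_score
```

This is why WEIGHTED_SCORE suits balanced decisions: it trades off raw confidence against context fit and team priorities rather than letting any one factor decide.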

Example: Resolving Pattern Conflicts

from empathy_os import Pattern, ConflictResolver, ResolutionStrategy

resolver = ConflictResolver()

# Two agents have different recommendations
performance_pattern = Pattern(
    id="use_list_comprehension",
    agent_id="performance_agent",
    pattern_type="performance",
    name="Use list comprehension",
    description="Use list comprehension for better performance",
    confidence=0.85
)

readability_pattern = Pattern(
    id="use_explicit_loop",
    agent_id="style_agent",
    pattern_type="style",
    name="Use explicit loop",
    description="Use explicit loop for better readability",
    confidence=0.80
)

# Resolve based on team priority
resolution = resolver.resolve_patterns(
    patterns=[performance_pattern, readability_pattern],
    context={
        "team_priority": "readability",  # Team values readability
        "code_complexity": "high"         # Complex code needs clarity
    }
)

print(f"Winner: {resolution.winning_pattern.name}")
print(f"Reasoning: {resolution.reasoning}")
# Output: Winner: Use explicit loop
# Reasoning: Selected 'Use explicit loop' based on team priority: readability

Example: Custom Team Priorities

from empathy_os import ConflictResolver, TeamPriorities

# Configure team priorities
priorities = TeamPriorities(
    readability_weight=0.4,
    performance_weight=0.2,
    security_weight=0.3,
    maintainability_weight=0.1,
    type_preferences={
        "security": 1.0,      # Security always wins
        "best_practice": 0.8,
        "performance": 0.7,
        "style": 0.5,
    },
    preferred_tags=["production", "tested"]
)

resolver = ConflictResolver(team_priorities=priorities)

# Now security patterns will be strongly preferred

Resolution Statistics

# After several resolutions
stats = resolver.get_resolution_stats()

print(f"Total resolutions: {stats['total_resolutions']}")
print(f"Most used strategy: {stats['most_used_strategy']}")
print(f"Average confidence: {stats['average_confidence']:.0%}")

AgentMonitor

Tracks agent performance and team collaboration metrics.

Class Reference

Monitors and tracks metrics for multi-agent systems.

Provides insights into:

  • Individual agent performance
  • Pattern discovery and sharing
  • Team collaboration effectiveness
  • System health

Example

monitor = AgentMonitor()

# Record agent activity
monitor.record_interaction("code_reviewer", response_time_ms=150.0)
monitor.record_pattern_discovery("code_reviewer")
monitor.record_pattern_use("test_gen", pattern_agent="code_reviewer", success=True)

# Get individual stats
stats = monitor.get_agent_stats("code_reviewer")
print(f"Interactions: {stats['total_interactions']}")
print(f"Patterns discovered: {stats['patterns_discovered']}")

# Get team stats
team = monitor.get_team_stats()
print(f"Collaboration efficiency: {team['collaboration_efficiency']:.0%}")

__init__(pattern_library=None)

Initialize the AgentMonitor.

Parameters:

Name             Type                   Description                                            Default
pattern_library  PatternLibrary | None  Optional pattern library to track for shared patterns  None

check_health()

Check overall system health.

Returns:

Type            Description
dict[str, Any]  Health status dictionary

get_agent_stats(agent_id)

Get statistics for a specific agent.

Parameters:

Name      Type  Description      Default
agent_id  str   ID of the agent  required

Returns:

Type            Description
dict[str, Any]  Dictionary with agent statistics

get_alerts(limit=100)

Get recent alerts.

Parameters:

Name   Type  Description                         Default
limit  int   Maximum number of alerts to return  100

Returns:

Type                  Description
list[dict[str, Any]]  List of alert dictionaries

get_team_stats()

Get aggregated statistics for the entire agent team.

Returns:

Type            Description
dict[str, Any]  Dictionary with team-wide statistics

get_top_contributors(n=5)

Get the top pattern-contributing agents.

Parameters:

Name  Type  Description                 Default
n     int   Number of agents to return  5

Returns:

Type                  Description
list[dict[str, Any]]  List of agent stats, sorted by patterns discovered

record_interaction(agent_id, response_time_ms=0.0)

Record an agent interaction.

Parameters:

Name              Type   Description                    Default
agent_id          str    ID of the agent                required
response_time_ms  float  Response time in milliseconds  0.0

record_pattern_discovery(agent_id, pattern_id=None)

Record that an agent discovered a new pattern.

Parameters:

Name        Type        Description                                  Default
agent_id    str         ID of the agent that discovered the pattern  required
pattern_id  str | None  Optional pattern ID for tracking             None

record_pattern_use(agent_id, pattern_id=None, pattern_agent=None, success=True)

Record that an agent used a pattern.

Parameters:

Name           Type        Description                                   Default
agent_id       str         ID of the agent using the pattern             required
pattern_id     str | None  ID of the pattern being used                  None
pattern_agent  str | None  ID of the agent that contributed the pattern  None
success        bool        Whether the pattern use was successful        True

reset()

Reset all monitoring data

Recording Agent Activity

from empathy_os import AgentMonitor, PatternLibrary

library = PatternLibrary()
monitor = AgentMonitor(pattern_library=library)

# Record agent interactions
monitor.record_interaction("code_reviewer", response_time_ms=150.0)
monitor.record_interaction("code_reviewer", response_time_ms=200.0)

# Record pattern discovery
monitor.record_pattern_discovery("code_reviewer", pattern_id="pat_001")

# Record cross-agent pattern reuse
monitor.record_pattern_use(
    agent_id="test_generator",
    pattern_id="pat_001",
    pattern_agent="code_reviewer",  # Original discoverer
    success=True
)

Individual Agent Stats

stats = monitor.get_agent_stats("code_reviewer")

print(f"Agent: {stats['agent_id']}")
print(f"Total interactions: {stats['total_interactions']}")
print(f"Avg response time: {stats['avg_response_time_ms']:.0f}ms")
print(f"Patterns discovered: {stats['patterns_discovered']}")
print(f"Success rate: {stats['success_rate']:.0%}")
print(f"Status: {stats['status']}")  # 'active' or 'inactive'

Team-Wide Metrics

team_stats = monitor.get_team_stats()

print(f"Active agents: {team_stats['active_agents']}")
print(f"Total agents: {team_stats['total_agents']}")
print(f"Shared patterns: {team_stats['shared_patterns']}")
print(f"Pattern reuse rate: {team_stats['pattern_reuse_rate']:.0%}")
print(f"Collaboration efficiency: {team_stats['collaboration_efficiency']:.0%}")

Collaboration Efficiency measures how effectively agents learn from each other:

  • 0% = Agents only use their own patterns
  • 100% = All pattern reuse is cross-agent

Top Contributors

# Find agents contributing most patterns
top = monitor.get_top_contributors(n=5)

for agent in top:
    print(f"{agent['agent_id']}: {agent['patterns_discovered']} patterns")

Health Monitoring

health = monitor.check_health()

print(f"Status: {health['status']}")  # 'healthy', 'degraded', or 'unhealthy'
print(f"Issues: {health['issues']}")
print(f"Active agents: {health['active_agents']}")
print(f"Recent alerts: {health['recent_alerts']}")

# Alerts are generated automatically for:
# - Slow response times (>5 seconds)
# - No active agents
# - Low collaboration efficiency
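The alert conditions above can be sketched as simple threshold checks. The 5-second cutoff comes from the list above; the 30% efficiency threshold is an assumption (it matches the warning level used in Best Practices below, but the library's actual threshold may differ):

```python
# Illustrative sketch of the automatic alert conditions; not the
# library's real implementation.
def derive_alerts(avg_response_time_ms, active_agents, collaboration_efficiency):
    alerts = []
    if avg_response_time_ms > 5000:        # slow response times (>5 seconds)
        alerts.append("slow_response")
    if active_agents == 0:                 # no active agents
        alerts.append("no_active_agents")
    if collaboration_efficiency < 0.3:     # low collaboration efficiency (assumed cutoff)
        alerts.append("low_collaboration")
    return alerts

assert derive_alerts(6000.0, 0, 0.1) == [
    "slow_response", "no_active_agents", "low_collaboration"
]
assert derive_alerts(150.0, 3, 0.6) == []  # healthy team, no alerts
```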

Data Classes

ResolutionResult

Result of conflict resolution between patterns

Fields of a resolution result:

result = resolver.resolve_patterns([pattern1, pattern2])

print(result.winning_pattern.name)   # The chosen pattern
print(result.losing_patterns)        # Patterns that lost
print(result.strategy_used)          # Which strategy was used
print(result.confidence)             # Confidence in this resolution
print(result.reasoning)              # Human-readable explanation
print(result.factors)                # Score breakdown

AgentMetrics

Metrics for a single agent

avg_response_time_ms property

Average response time in milliseconds

pattern_contribution_rate property

Rate of pattern discovery per interaction

success_rate property

Pattern usage success rate

Per-agent metrics:

# Accessing raw metrics
metrics = monitor.agents["code_reviewer"]

print(metrics.total_interactions)
print(metrics.patterns_discovered)
print(metrics.avg_response_time_ms)  # Property
print(metrics.success_rate)          # Property
print(metrics.pattern_contribution_rate)  # Property

TeamMetrics

Aggregated metrics for an agent team

collaboration_efficiency property

Measure of how effectively agents collaborate.

Higher values indicate more cross-agent pattern reuse, meaning agents are learning from each other.

pattern_reuse_rate property

Rate at which patterns are reused

Team-wide aggregated metrics:

from empathy_os.monitoring import TeamMetrics

metrics = TeamMetrics(
    active_agents=3,
    total_agents=5,
    shared_patterns=100,
    pattern_reuse_count=50,
    cross_agent_reuses=30
)

print(metrics.pattern_reuse_rate)       # 0.5 (50/100)
print(metrics.collaboration_efficiency)  # 0.6 (30/50)
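The printed values follow directly from the raw counts. As a minimal stand-in, assuming the two properties are simple ratios (the real TeamMetrics implementation may differ):

```python
from dataclasses import dataclass

# Minimal stand-in showing how the two properties could be derived from
# raw counts; field names follow the example above.
@dataclass
class TeamMetricsSketch:
    active_agents: int
    total_agents: int
    shared_patterns: int
    pattern_reuse_count: int
    cross_agent_reuses: int

    @property
    def pattern_reuse_rate(self) -> float:
        # Reuses per shared pattern
        if not self.shared_patterns:
            return 0.0
        return self.pattern_reuse_count / self.shared_patterns

    @property
    def collaboration_efficiency(self) -> float:
        # Fraction of reuses that crossed agent boundaries
        if not self.pattern_reuse_count:
            return 0.0
        return self.cross_agent_reuses / self.pattern_reuse_count

m = TeamMetricsSketch(3, 5, 100, 50, 30)
assert m.pattern_reuse_rate == 0.5          # 50 / 100
assert m.collaboration_efficiency == 0.6    # 30 / 50
```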

Integration with EmpathyOS

EmpathyOS includes built-in support for shared pattern libraries:

from empathy_os import EmpathyOS, PatternLibrary, Pattern

# Create shared library
library = PatternLibrary()

# Create agent with shared library
agent = EmpathyOS(
    user_id="code_reviewer",
    target_level=4,
    shared_library=library  # Enable multi-agent coordination
)

# Check if agent has shared library
if agent.has_shared_library():
    # Contribute a pattern
    pattern = Pattern(
        id="pat_001",
        agent_id="code_reviewer",
        pattern_type="best_practice",
        name="Test Pattern",
        description="A discovered pattern"
    )
    agent.contribute_pattern(pattern)

    # Query patterns from other agents
    matches = agent.query_patterns(
        context={"language": "python"},
        min_confidence=0.7
    )

Best Practices

1. Use Consistent Agent IDs

# Good: Descriptive, consistent naming
code_reviewer = EmpathyOS(user_id="code_reviewer", ...)
test_generator = EmpathyOS(user_id="test_generator", ...)

# Bad: Generic or inconsistent names
agent1 = EmpathyOS(user_id="agent1", ...)

2. Monitor Collaboration Efficiency

# Check regularly
team_stats = monitor.get_team_stats()

if team_stats["collaboration_efficiency"] < 0.3:
    print("Warning: Agents aren't learning from each other")
    # Consider: shared contexts, better pattern tagging

3. Configure Team Priorities Early

# Set expectations before agents start
priorities = TeamPriorities(
    security_weight=0.4,  # Security first
    ...
)
resolver = ConflictResolver(team_priorities=priorities)

4. Track Resolution History

# Learn from past resolutions
stats = resolver.get_resolution_stats()

if stats["most_used_strategy"] == "highest_confidence":
    print("Tip: Consider using team priorities for more nuanced decisions")

See Also