Add complete QLM (Quantum Language Model) implementation

This commit introduces the QLM system - a stateful semantic layer for
tracking HI (Human Intelligence), AI (Agent Intelligence), and QI
(Quantum/Emergent Intelligence) in BlackRoad OS.

Core Features:
- HI/AI/QI intelligence layer modeling
- Event-driven state management
- QI emergence detection (agent self-correction, feedback loops, etc.)
- HI-AI alignment scoring
- Operator-facing query interface
- Reality ingestion (git, CI, agent logs)

Components Added:
- qlm_lab/models.py: Core data models (Actor, QLMEvent, QIEmergence, etc.)
- qlm_lab/state.py: State management and transition tracking
- qlm_lab/api.py: Public QLMInterface API
- qlm_lab/ingestion/: Git, CI, and agent log connectors
- qlm_lab/experiments/: Alignment and emergence validation
- qlm_lab/visualization.py: Timeline, actor graph, alignment plots
- qlm_lab/demo.py: Interactive demo script
- tests/test_qlm_core.py: Comprehensive test suite
- docs/QLM.md: Complete documentation (concepts, API, integration)

Usage:
  from qlm_lab.api import QLMInterface

  qlm = QLMInterface()
  qlm.record_operator_intent("Build feature X")
  qlm.record_agent_execution("agent-1", "Implement X", "task-1")
  summary = qlm.get_summary(days=7)

Run:
  python -m qlm_lab.demo
  python -m qlm_lab.experiments.alignment_detection
  pytest tests/test_qlm_core.py -v

Integrates with:
- cognitive/intent_graph.py (intent tracking)
- cognitive/agent_coordination.py (multi-agent coordination)
- operator_engine/scheduler.py (background analysis)

Next steps: Integrate with FastAPI backend, add Prism Console UI,
implement Lucidia language runtime.
---
Author: Claude
Date: 2025-11-18 08:15:06 +00:00
Parent: 1733282bb2
Commit: e478add607
15 changed files with 4534 additions and 0 deletions

docs/QLM.md (new file, 944 lines)
# QLM (Quantum Language Model) - Complete Guide
> **The semantic layer for Operator-AI collaboration**
---
## Table of Contents
1. [What is QLM?](#what-is-qlm)
2. [Core Concepts](#core-concepts)
3. [Architecture](#architecture)
4. [Data Models](#data-models)
5. [API Reference](#api-reference)
6. [Integration Guide](#integration-guide)
7. [Experiments & Validation](#experiments--validation)
8. [Visualization](#visualization)
9. [Operator's Guide](#operators-guide)
10. [Development & Extension](#development--extension)
---
## What is QLM?
**QLM (Quantum Language Model)** is a stateful semantic layer that tracks, analyzes, and explains the interaction between **Human Intelligence (HI)**, **AI Intelligence (AI)**, and emergent **Quantum Intelligence (QI)** in the BlackRoad Operating System.
### The Problem QLM Solves
Modern AI systems have a **context loss problem**:
- AI agents execute tasks but lose track of *why*
- Humans give intent but can't see *what happened*
- Systems evolve but no one knows *how we got here*
- Emergent behaviors appear with no explanation
QLM fixes this by creating a **complete causal graph** of:
- Operator intent
- Agent executions
- System events
- Feedback loops
- Emergent patterns
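As a minimal sketch of what such a causal graph enables, the snippet below walks an event's `caused_by` links back to the originating intent. The `Event` class here is a hypothetical stand-in for the full `QLMEvent` model, reduced to the two fields the traversal needs:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Minimal stand-in for QLMEvent: only the fields needed to trace causality.
@dataclass
class Event:
    id: str
    caused_by: List[str] = field(default_factory=list)

def causal_chain(events: Dict[str, Event], event_id: str) -> List[str]:
    """Walk caused_by links back to the root, returning IDs oldest-first."""
    chain, current = [], event_id
    while current is not None:
        chain.append(current)
        parents = events[current].caused_by
        current = parents[0] if parents else None
    return list(reversed(chain))

events = {
    "intent-1": Event("intent-1"),
    "exec-1": Event("exec-1", caused_by=["intent-1"]),
    "deploy-1": Event("deploy-1", caused_by=["exec-1"]),
}
print(causal_chain(events, "deploy-1"))  # ['intent-1', 'exec-1', 'deploy-1']
```

Given a deployment event, this answers "how did we get here?" by recovering the intent → execution → deployment chain.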
### The "Quantum" Metaphor
The "quantum" in QLM does NOT refer to quantum physics. Instead, it describes:
**Superposition of Roles**: An agent can be both executor AND coordinator simultaneously.
**Superposition of States**: A task can be in_progress AND blocked at the same time.
**Superposition of Perspectives**: The same event looks different from HI vs AI vs QI viewpoints.
**Quantum Intelligence (QI)**: Emergent behaviors that appear when HI + AI + deterministic systems interact in feedback loops. When 1 + 1 = 3.
---
## Core Concepts
### Intelligence Layers
QLM models three layers of intelligence:
#### 1. HI (Human Intelligence)
**Definition**: The Operator layer - human judgment, taste, ethics, goals, narrative.
**Primary Actor**: Alexa (Operator)
**Capabilities**:
- Define intent and constraints
- Approve or veto AI actions
- Ask questions and interpret results
- Provide judgment on ambiguous decisions
**Events**:
- `OPERATOR_INTENT`: Operator defines a goal
- `OPERATOR_APPROVAL`: Operator approves agent work
- `OPERATOR_VETO`: Operator rejects agent work
- `OPERATOR_QUERY`: Operator asks a question
#### 2. AI (Agent Intelligence)
**Definition**: LLM-powered agents, code generation, pattern completion, search, transformation.
**Primary Actors**: 200+ BlackRoad agents (coder, reviewer, researcher, etc.)
**Capabilities**:
- Execute tasks
- Generate code/docs/designs
- Search and retrieve information
- Coordinate with other agents
**Events**:
- `AGENT_EXECUTION`: Agent starts working
- `AGENT_COMPLETION`: Agent finishes task
- `AGENT_ERROR`: Agent encounters error
- `AGENT_HANDOFF`: Agent passes work to another agent
#### 3. QI (Quantum Intelligence)
**Definition**: Emergent system-level intelligence that appears when HI + AI + deterministic systems interact in feedback loops.
**Not a single actor**: QI is a property of the *entire system*.
**Emergence Patterns**:
- `agent_self_correction`: Agent fixes own errors without HI intervention
- `novel_solution`: Agent finds approach not in training data
- `emergent_collaboration`: Agents self-organize into new patterns
- `operator_feedback_loop`: HI → AI → HI creates refined understanding
- `system_adaptation`: Deterministic systems evolve in response to AI
**Events**:
- `QI_EMERGENCE`: Novel behavior detected
- `QI_FEEDBACK_LOOP`: HI+AI feedback detected
- `QI_PATTERN`: Recurring emergent pattern identified
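To make the `operator_feedback_loop` pattern concrete, here is an illustrative detection rule (an assumption for exposition, not the detector QLM actually ships): scan the stream of `source_layer` labels for an HI event, followed by at least one AI event, followed by another HI event.

```python
def has_feedback_loop(layers):
    """True if an HI event is followed by one or more AI events and then another HI event."""
    state = 0  # 0: waiting for HI, 1: waiting for AI, 2: waiting for closing HI
    for layer in layers:
        if state == 0 and layer == "HI":
            state = 1
        elif state == 1 and layer == "AI":
            state = 2
        elif state == 2 and layer == "HI":
            return True
    return False

print(has_feedback_loop(["HI", "AI", "AI", "HI"]))  # True
print(has_feedback_loop(["AI", "AI", "HI"]))        # False
```

Real detectors would also consider timestamps, shared `task_id`s, and `caused_by` links, but the three-state scan captures the shape of the pattern.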
---
## Architecture
### System Diagram
```
┌──────────────────────────────────────────────────────┐
│                   Operator (Alexa)                   │
│               Human Intelligence (HI)                │
└───────────────────────────┬──────────────────────────┘
                            │ Intent, Approval, Veto
                            ↓
┌──────────────────────────────────────────────────────┐
│                    QLM Interface                     │
│   record_operator_intent(), ask(), get_summary()     │
└───────────────────────────┬──────────────────────────┘
                            ↓
┌──────────────────────────────────────────────────────┐
│                      QLM State                       │
│   • Intelligence Layers (HI, AI, QI)                 │
│   • Event History                                    │
│   • QI Emergence Detection                           │
│   • Metrics & Alignment                              │
└──────┬────────────────────────────┬──────────────────┘
       │                            │
       ↓                            ↓
┌──────────────────┐   ┌──────────────────────────────┐
│    Ingestion     │   │ Cognitive Layer Integration  │
│  • Git           │   │  • IntentGraph               │
│  • CI/CD         │   │  • AgentCoordinator          │
│  • Agent Logs    │   │  • ContextEngine             │
└──────────────────┘   └──────────────────────────────┘
       ↑
┌──────────────────────────────────────────────────────┐
│              Reality (External Systems)              │
│  • Git commits   • Test results   • Deployments      │
│  • Agent executions   • System events                │
└──────────────────────────────────────────────────────┘
```
### Key Components
#### `qlm_lab/models.py`
Defines core data structures:
- `IntelligenceType`, `ActorType`, `ActorRole`
- `Actor`: Represents humans, agents, systems
- `QLMEvent`: Every action generates an event
- `QIEmergence`: Detected emergent patterns
- `QLMMetrics`: System-level metrics
#### `qlm_lab/state.py`
Manages QLM state:
- `QLMState`: Complete state snapshot
- `ingest_event()`: Process new events and update state
- `query()`: Answer questions about state
- `detect_qi_emergence()`: Pattern matching for QI
- `calculate_alignment()`: HI-AI alignment scoring
#### `qlm_lab/api.py`
Public interface:
- `QLMInterface`: Main API class
- `record_*()`: Methods to record events
- `get_*()`: Query methods
- `ask()`: Natural language queries
#### `qlm_lab/ingestion/`
Connects QLM to reality:
- `GitConnector`: Ingest git commits
- `CIConnector`: Ingest test/build/deploy results
- `AgentLogConnector`: Parse agent logs
#### `qlm_lab/experiments/`
Validation experiments:
- `AlignmentDetectionExperiment`: Test alignment scoring
- `EmergenceDetectionExperiment`: Test QI detection
#### `qlm_lab/visualization.py`
Visualization tools:
- Event timeline
- Actor interaction graph
- Alignment trends
- Emergence patterns
---
## Data Models
### Actor
```python
@dataclass
class Actor:
    id: str                         # Unique identifier
    name: str                       # Human-readable name
    actor_type: ActorType           # HUMAN | AGENT | SYSTEM
    role: ActorRole                 # OPERATOR | EXECUTOR | COORDINATOR | REVIEWER | ...
    state: ActorState               # ACTIVE | IDLE | BLOCKED | OFFLINE
    capabilities: Set[str]          # What this actor can do
    current_task_id: Optional[str]
    created_at: datetime
    last_active: datetime
```
### QLMEvent
```python
@dataclass
class QLMEvent:
    id: str                          # Unique event ID
    timestamp: datetime              # When the event occurred
    source_layer: IntelligenceType   # HI | AI | QI
    actor_id: str                    # Who generated this event
    event_type: EventType            # OPERATOR_INTENT | AGENT_EXECUTION | ...
    data: Dict[str, Any]             # Event payload
    caused_by: List[str]             # Causal event IDs
    intent_node_id: Optional[str]    # Link to IntentGraph
    task_id: Optional[str]           # Related task
    tags: Set[str]
    metadata: Dict[str, Any]
```
### QIEmergence
```python
@dataclass
class QIEmergence:
    id: str                             # Unique emergence ID
    timestamp: datetime                 # When detected
    pattern_name: str                   # e.g., "agent_self_correction"
    trigger_events: List[str]           # Events that triggered this
    confidence: float                   # 0.0 to 1.0
    explanation: str                    # Human-readable description
    operator_validated: Optional[bool]  # Did the Operator confirm?
    operator_notes: str
    impact_score: float                 # Significance (0.0 to 1.0)
```
### QLMMetrics
```python
@dataclass
class QLMMetrics:
    hi_events: int             # Count of HI events
    ai_events: int             # Count of AI events
    qi_events: int             # Count of QI events
    system_events: int         # Count of system events
    hi_ai_alignment: float     # Alignment score (0.0 to 1.0)
    qi_emergence_rate: float   # Rate of QI detection
    feedback_loop_count: int   # HI → AI → HI cycles
    operator_approvals: int
    operator_vetoes: int
    operator_queries: int
    start_time: datetime
    end_time: datetime
```
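As a rough illustration of how the per-layer counters above could be tallied from an event stream (a sketch for exposition; the real `QLMMetrics` computation may differ):

```python
from collections import Counter

def tally_layers(events):
    """events: list of (source_layer, event_type) tuples; returns layer counts."""
    c = Counter(layer for layer, _ in events)
    return {
        "hi_events": c["HI"],
        "ai_events": c["AI"],
        "qi_events": c["QI"],
    }

stream = [
    ("HI", "OPERATOR_INTENT"),
    ("AI", "AGENT_EXECUTION"),
    ("AI", "AGENT_COMPLETION"),
    ("QI", "QI_EMERGENCE"),
]
print(tally_layers(stream))  # {'hi_events': 1, 'ai_events': 2, 'qi_events': 1}
```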
---
## API Reference
### Initialization
```python
from qlm_lab.api import QLMInterface

# Basic initialization
qlm = QLMInterface()

# With cognitive layer integration
from cognitive.intent_graph import IntentGraph
from cognitive.agent_coordination import AgentCoordinator

intent_graph = IntentGraph()
agent_coordinator = AgentCoordinator(intent_graph)

qlm = QLMInterface(
    intent_graph=intent_graph,
    agent_coordinator=agent_coordinator,
)
```
### Recording Events
#### Operator Events (HI)
```python
# Record Operator intent
event = qlm.record_operator_intent(
    intent="Deploy authentication feature",
    description="Implement login, signup, password reset",
    intent_node_id="intent-auth-001",  # Link to IntentGraph
)

# Record Operator approval
qlm.record_operator_approval(
    what_approved="Login implementation",
    intent_node_id="intent-auth-001",
    task_id="task-login-001",
)

# Record Operator veto
qlm.record_operator_veto(
    what_vetoed="Password reset implementation",
    reason="Security concerns - needs stronger validation",
    intent_node_id="intent-auth-001",
    task_id="task-reset-001",
)

# Record Operator query
qlm.record_operator_query("What did agents do today?")
```
#### Agent Events (AI)
```python
# Register an agent
agent = qlm.register_agent(
    agent_id="coder-001",
    name="CodeWriter",
    role=ActorRole.CODER,
    capabilities=["python", "javascript", "testing"],
)

# Record agent execution
qlm.record_agent_execution(
    agent_id="coder-001",
    task_description="Implement login endpoint",
    task_id="task-login-001",
    intent_node_id="intent-auth-001",
)

# Record agent completion
qlm.record_agent_completion(
    agent_id="coder-001",
    task_id="task-login-001",
    success=True,
    result={"files_modified": ["auth.py"], "tests_added": 5},
)

# Record an agent error
qlm.record_agent_error(
    agent_id="coder-001",
    task_id="task-login-001",
    error="Database connection failed",
)

# Record an agent handoff
qlm.record_agent_handoff(
    from_agent_id="coder-001",
    to_agent_id="reviewer-001",
    task_id="task-login-001",
    handoff_message="Ready for review",
)
```
#### System Events
```python
# Record a test result
qlm.record_system_event(
    event_type=EventType.SYSTEM_TEST,
    description="Backend tests passed",
    task_id="task-login-001",
    metadata={
        "passed": True,
        "test_count": 42,
        "duration_seconds": 12.3,
    },
)

# Record a build result
qlm.record_system_event(
    event_type=EventType.SYSTEM_BUILD,
    description="Production build successful",
    metadata={"build_id": "build-123", "artifacts": ["app.tar.gz"]},
)

# Record a deployment
qlm.record_system_event(
    event_type=EventType.SYSTEM_DEPLOY,
    description="Deployed to production",
    metadata={"environment": "production", "version": "v1.2.0"},
)
```
### Querying State
```python
# Get an Operator summary
summary = qlm.get_summary(days=7)
print(summary)

# Get the alignment score
alignment = qlm.get_alignment_score()
print(f"HI-AI Alignment: {alignment:.1%}")

# Get recent QI emergences
emergences = qlm.get_recent_emergences(limit=10)
for em in emergences:
    print(f"{em.pattern_name}: {em.explanation}")

# Get active actors
active = qlm.get_active_actors()
for actor in active:
    print(f"{actor.name} - {actor.role.value}")

# Get events by type
intents = qlm.get_events_by_type(EventType.OPERATOR_INTENT)
print(f"Total intents: {len(intents)}")

# Get events in a time range
from datetime import datetime, timedelta

yesterday = datetime.now() - timedelta(days=1)
recent_events = qlm.get_events_in_timerange(yesterday)
```
### Natural Language Queries
```python
# Ask questions in natural language
response = qlm.ask("What did agents do today?")
print(response)
response = qlm.ask("Are we aligned with my intent?")
print(response)
response = qlm.ask("Show me emergent behaviors")
print(response)
response = qlm.ask("What's the status?")
print(response)
```
### Export/Import State
```python
# Export state to JSON
qlm.export_state("/path/to/qlm_state.json")
# Import state from JSON
qlm.import_state("/path/to/qlm_state.json")
```
---
## Integration Guide
### Integrating with Existing Cognitive Layer
QLM is designed to integrate seamlessly with the existing cognitive infrastructure:
```python
from cognitive.intent_graph import IntentGraph
from cognitive.agent_coordination import AgentCoordinator
from qlm_lab.api import QLMInterface

# Initialize cognitive systems
intent_graph = IntentGraph()
agent_coordinator = AgentCoordinator(intent_graph)

# Initialize QLM with cognitive integration
qlm = QLMInterface(
    intent_graph=intent_graph,
    agent_coordinator=agent_coordinator,
)

# Now when you create goals in the IntentGraph...
goal = intent_graph.create_goal(
    title="Build authentication",
    rationale="Need secure user login",
)

# ...also record them in QLM
qlm.record_operator_intent(
    intent="Build authentication",
    intent_node_id=goal.id,
)

# When agents coordinate...
task = intent_graph.create_task(
    title="Implement login",
    parent_id=goal.id,
)
agent_coordinator.assign_task(task.id, agent_id="coder-001")

# ...also record that in QLM
qlm.record_agent_execution(
    agent_id="coder-001",
    task_description="Implement login",
    task_id=task.id,
    intent_node_id=goal.id,
)
```
### Ingesting Real System Data
#### Git Commits
```python
from qlm_lab.ingestion.git import GitConnector

connector = GitConnector(repo_path="/path/to/repo", qlm=qlm)

# Ingest the last 7 days of commits
events = connector.ingest_recent_commits(days=7)
print(f"Ingested {len(events)} commits")

# Ingest a specific range
events = connector.ingest_commit_range(
    since="2024-01-01",
    until="2024-01-31",
)
```
#### CI/CD Results
```python
from qlm_lab.ingestion.ci import CIConnector

connector = CIConnector(qlm=qlm)

# Ingest a test result
connector.ingest_test_result(
    test_name="Backend Tests",
    passed=True,
    duration_seconds=12.3,
    commit_hash="abc123",
)

# Ingest a build result
connector.ingest_build_result(
    build_name="Production Build",
    success=True,
    duration_seconds=45.2,
    artifacts=["app.tar.gz", "app.zip"],
)

# Ingest a deployment
connector.ingest_deploy_result(
    service_name="blackroad-api",
    environment="production",
    success=True,
    version="v1.2.0",
)
```
#### Agent Logs
```python
from qlm_lab.ingestion.agent_logs import AgentLogConnector

connector = AgentLogConnector(qlm=qlm)

# Ingest a log file
events = connector.ingest_log_file("/path/to/agent.log")
print(f"Ingested {len(events)} events from logs")

# Ingest structured logs
log_entries = [
    {
        "timestamp": "2024-01-15T10:30:00",
        "agent_id": "coder-001",
        "level": "INFO",
        "message": "Task started: implement login",
    },
    # ...
]
events = connector.ingest_structured_log(log_entries)
```
---
## Experiments & Validation
QLM includes built-in experiments to validate its effectiveness:
### Alignment Detection Experiment
**Hypothesis**: QLM can accurately detect when AI agents drift from Operator intent.
```bash
python -m qlm_lab.experiments.alignment_detection
```
**Scenarios**:
- Perfect alignment (100% approval)
- Partial alignment (some vetoes)
- No alignment (all vetoes)
**Success Criteria**: Reported alignment score within ±20% of each scenario's ground-truth alignment
### Emergence Detection Experiment
**Hypothesis**: QLM can detect emergent QI behaviors.
```bash
python -m qlm_lab.experiments.emergence_detection
```
**Patterns Tested**:
- Agent self-correction
- Operator feedback loop
- Emergent collaboration
- Normal execution (should NOT trigger)
**Success Criteria**:
- True positive rate ≥ 80%
- False positive rate < 20%
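These success criteria can be checked by comparing detector output against labeled ground truth. The helper below is a generic sketch (not code from `qlm_lab/experiments/`) that computes the two rates:

```python
def detection_rates(predictions, labels):
    """predictions/labels: parallel lists of bools (detector fired / pattern truly present).

    Returns (true_positive_rate, false_positive_rate)."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    positives = sum(labels)
    negatives = len(labels) - positives
    tpr = tp / positives if positives else 0.0
    fpr = fp / negatives if negatives else 0.0
    return tpr, fpr

# 3 real emergences plus 1 normal run; the detector fires only on the real ones.
tpr, fpr = detection_rates([True, True, True, False], [True, True, True, False])
print(tpr >= 0.8 and fpr < 0.2)  # True
```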
---
## Visualization
QLM includes powerful visualization tools:
```python
from qlm_lab.visualization import QLMVisualizer
viz = QLMVisualizer(qlm)
# Event timeline
viz.plot_event_timeline(save_path="timeline.png")
# Actor interaction graph
viz.plot_actor_graph(save_path="actors.png")
# Alignment over time
viz.plot_alignment_over_time(save_path="alignment.png")
# Emergence patterns
viz.plot_emergence_patterns(save_path="emergence.png")
# Export complete dashboard
viz.export_dashboard(output_dir="./qlm_dashboard")
```
**Requirements**:
```bash
pip install matplotlib networkx
```
---
## Operator's Guide
### Daily Usage
As the Operator (Alexa), here's how to use QLM:
#### Morning Check-In
```python
# What happened overnight?
summary = qlm.get_summary(days=1)
print(summary)

# Are agents aligned with my goals?
alignment = qlm.get_alignment_score()
if alignment < 0.7:
    print("⚠️ Warning: Low alignment detected")

# Any emergent behaviors?
emergences = qlm.get_recent_emergences()
for em in emergences:
    print(f"{em.pattern_name}: {em.explanation}")
```
#### Defining Intent
```python
# When starting a new project
qlm.record_operator_intent(
    intent="Build payment integration",
    description="Integrate Stripe for subscriptions and one-time payments",
    intent_node_id="intent-payment-2024-01",
)
```
#### Reviewing Agent Work
```python
# When agents complete work
qlm.record_operator_approval(
    what_approved="Stripe integration implementation",
    intent_node_id="intent-payment-2024-01",
    task_id="task-stripe-001",
)

# When work doesn't match intent
qlm.record_operator_veto(
    what_vetoed="Payment form UI",
    reason="Doesn't match brand guidelines - needs redesign",
    intent_node_id="intent-payment-2024-01",
    task_id="task-ui-001",
)
```
#### Asking Questions
```python
# Natural language queries
qlm.ask("What did agents do today?")
qlm.ask("Are we aligned with my intent?")
qlm.ask("Show me emergent behaviors")
qlm.ask("What's the status?")
```
### Understanding Alignment
**Alignment Score**: 0.0 to 1.0
- **0.9-1.0**: Excellent - agents are executing your intent well
- **0.7-0.9**: Good - minor drift, watch for patterns
- **0.5-0.7**: Warning - significant misalignment, review vetoes
- **< 0.5**: Critical - agents not following intent, intervention needed
**Improving Alignment**:
1. Be more specific in intent descriptions
2. Provide examples of what "good" looks like
3. Give immediate feedback (approve/veto)
4. Review patterns in vetoes - is there confusion?
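One plausible scoring rule behind the thresholds above (an assumption for illustration, not necessarily what `get_alignment_score()` implements) is the approval ratio over recent Operator feedback:

```python
def alignment_score(approvals: int, vetoes: int) -> float:
    """Fraction of Operator decisions that were approvals; 1.0 when no feedback yet."""
    total = approvals + vetoes
    return approvals / total if total else 1.0

print(f"{alignment_score(approvals=9, vetoes=1):.1%}")  # 90.0%
```

Under this rule, 9 approvals against 1 veto lands in the "excellent" band, while a 50/50 split would trigger the warning threshold.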
### Understanding QI Emergence
**Common Patterns**:
- **agent_self_correction**: Agent fixed its own error without your help
- *Good sign*: Agents are learning and adapting
- **novel_solution**: Agent found an approach you didn't suggest
- *Good sign*: Creative problem-solving
- *Watch for*: Ensure solution aligns with intent
- **operator_feedback_loop**: You → agent → feedback → refined approach
- *Good sign*: Healthy iteration cycle
- *Measure*: Count of loops indicates collaboration quality
- **emergent_collaboration**: Agents self-organized
- *Good sign*: Agents coordinating without explicit instructions
- *Watch for*: Ensure coordination serves your intent
---
## Development & Extension
### Adding New Event Types
```python
# In qlm_lab/models.py
class EventType(Enum):
    # ... existing types ...
    CUSTOM_EVENT = "custom_event"

# In qlm_lab/api.py
def record_custom_event(self, ...):
    event = QLMEvent(
        source_layer=IntelligenceType.AI,
        event_type=EventType.CUSTOM_EVENT,
        ...
    )
    self.state.ingest_event(event)
    return event
```
### Adding New QI Patterns
```python
# In qlm_lab/models.py
QI_PATTERNS["my_pattern"] = {
    "description": "Description of when this pattern occurs",
    "trigger": "sequence of events that trigger this",
    "significance": "high",  # high | medium | low | very_high
}

# In qlm_lab/state.py
def _matches_pattern(self, events, pattern_name, pattern_def):
    if pattern_name == "my_pattern":
        # Implement pattern detection logic here
        # Return True if the pattern matches
        pass
```
### Adding New Connectors
```python
# Create qlm_lab/ingestion/my_connector.py
from qlm_lab.api import QLMInterface
from qlm_lab.models import EventType

class MyConnector:
    def __init__(self, qlm: QLMInterface):
        self.qlm = qlm

    def ingest_my_data(self, data):
        # Transform data into QLM events
        event = self.qlm.record_system_event(
            event_type=EventType.SYSTEM_CUSTOM,
            description=...,
            metadata=...,
        )
        return event
```
### Running Tests
```bash
# Run QLM tests
pytest tests/test_qlm_core.py -v
# Run all tests
pytest tests/ -v
# With coverage
pytest tests/test_qlm_core.py --cov=qlm_lab --cov-report=html
```
### Running Demo
```bash
# Interactive demo
python -m qlm_lab.demo
# Run specific experiment
python -m qlm_lab.experiments.alignment_detection
python -m qlm_lab.experiments.emergence_detection
```
---
## Next Steps
### Phase 1: Lab (Current)
- ✅ Core QLM models and state management
- ✅ Basic event ingestion (git, CI, agents)
- ✅ Alignment and emergence detection
- ✅ Visualization and experiments
- ✅ Documentation
### Phase 2: Integration (Next 1-2 months)
- [ ] Full cognitive layer integration
- [ ] Real-time event streaming
- [ ] FastAPI router for QLM API
- [ ] Prism Console UI integration
- [ ] Dashboard for Operator
### Phase 3: Production (Months 3-4)
- [ ] Database persistence (PostgreSQL)
- [ ] Advanced QI pattern detection (ML-based)
- [ ] Multi-Operator support
- [ ] Audit trail and compliance
- [ ] Performance optimization
### Phase 4: Lucidia Language (Months 5-6)
- [ ] Lucidia syntax for expressing intent
- [ ] Lucidia → QLM compiler
- [ ] QLM → Lucidia decompiler (explain mode)
- [ ] Lucidia REPL for live QLM queries
---
## FAQ
**Q: What's the difference between QLM and the IntentGraph?**
A: IntentGraph tracks *what* (goals, tasks, artifacts). QLM tracks *why*, *how*, and *emergence* (intent, execution, QI patterns). They complement each other.
**Q: Do I need to use QLM for every agent?**
A: No, start small. Use QLM for critical workflows where you need clear intent tracking and alignment verification.
**Q: How does QLM relate to Lucidia?**
A: Lucidia is the *language* for expressing intent and constraints. QLM is the *runtime* that tracks and enforces them.
**Q: What if I don't want to record every event?**
A: QLM is opt-in. Only record events you care about. Start with Operator intents and agent completions.
**Q: Can QLM work with multiple Operators?**
A: Currently optimized for single Operator (Alexa). Multi-Operator support planned for Phase 3.
**Q: How do I debug QLM?**
A: Use `qlm.state.export_json()` to inspect state, visualizations to see patterns, and experiments to validate behavior.
---
## Conclusion
QLM provides the missing semantic layer for Operator-AI collaboration. By tracking HI, AI, and QI as first-class concepts, QLM makes AI systems **understandable**, **controllable**, and **improvable**.
**Start using QLM today**:
```python
from qlm_lab.api import QLMInterface
qlm = QLMInterface()
# Your intent
qlm.record_operator_intent("Build the future")
# Agent execution
qlm.record_agent_execution("agent-1", "Create something amazing", "task-1")
# Check alignment
print(f"Alignment: {qlm.get_alignment_score():.1%}")
# Ask questions
print(qlm.ask("What's the status?"))
```
**Happy building! 🛣️✨**
---
*QLM is part of the BlackRoad Operating System. See BLACKROAD_OS_BIG_KAHUNA_VISION.md for the complete vision.*

qlm_lab/__init__.py (new file, 76 lines)
"""
QLM Lab - Quantum Language Model Implementation
This module implements the QLM (Quantum Language Model) system for BlackRoad OS.
QLM is a stateful semantic layer that:
- Tracks HI (Human Intelligence), AI (Agent Intelligence), and QI (Quantum/Emergent Intelligence)
- Connects Operator intent to system execution
- Detects emergent behaviors in HI+AI feedback loops
- Provides introspection and control tools for the Operator
Key Components:
- models: Core data structures (IntelligenceLayer, Actor, QLMEvent, QIEmergence)
- state: QLM state management and transitions
- events: Event ingestion and processing
- api: Public API for QLM operations
- ingestion: Connectors to real system data (git, CI, agents)
- experiments: Validation experiments and metrics
- visualization: Tools for visualizing QLM state
Integration Points:
- cognitive.intent_graph: Foundation for intent tracking
- cognitive.agent_coordination: Multi-agent collaboration
- operator_engine.scheduler: Background QLM analysis
- agents: Event source for AI actions
Usage:
    from qlm_lab import QLMState, QLMEvent
    from qlm_lab.api import QLMInterface

    # Initialize QLM
    qlm = QLMInterface()

    # Record Operator intent
    qlm.record_operator_intent("Deploy authentication feature")

    # Record agent execution
    qlm.record_agent_execution(agent_id="coder-001", task_description="implement login")

    # Query state
    state = qlm.get_current_state()
    summary = qlm.get_summary(days=7)
"""
__version__ = "0.1.0"
from qlm_lab.models import (
    IntelligenceType,
    ActorType,
    ActorRole,
    IntelligenceLayer,
    Actor,
    QLMEvent,
    EventType,
    QIEmergence,
    QLMMetrics,
)
from qlm_lab.state import QLMState, StateTransition
from qlm_lab.api import QLMInterface

__all__ = [
    "IntelligenceType",
    "ActorType",
    "ActorRole",
    "IntelligenceLayer",
    "Actor",
    "QLMEvent",
    "EventType",
    "QIEmergence",
    "QLMMetrics",
    "QLMState",
    "StateTransition",
    "QLMInterface",
]

qlm_lab/api.py (new file, 498 lines)
"""
QLM API - Public interface for Quantum Language Model
This is the primary way applications interact with QLM.
The QLMInterface provides:
- Simple methods to record events
- Queries about current state
- Operator-facing summaries
- Integration with cognitive layer
Usage:
from qlm_lab.api import QLMInterface
qlm = QLMInterface()
# Record events
qlm.record_operator_intent("Deploy auth feature", intent_node_id="abc123")
qlm.record_agent_execution("agent-001", "Implement login", task_id="task-001")
qlm.record_agent_completion("agent-001", "task-001", success=True)
# Query state
summary = qlm.get_summary(days=7)
alignment = qlm.get_alignment_score()
emergences = qlm.get_recent_emergences(limit=5)
# Operator tools
print(qlm.ask("What did agents do today?"))
print(qlm.ask("Are we aligned with my intent?"))
"""
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any
import logging

from qlm_lab.state import QLMState
from qlm_lab.models import (
    Actor,
    ActorType,
    ActorRole,
    ActorState,
    QLMEvent,
    EventType,
    IntelligenceType,
    QIEmergence,
)

logger = logging.getLogger(__name__)
class QLMInterface:
    """
    Public API for QLM.

    This provides a simple, clean interface for recording events
    and querying QLM state.
    """

    def __init__(self, intent_graph=None, agent_coordinator=None):
        """
        Initialize the QLM interface.

        Args:
            intent_graph: Optional cognitive.intent_graph.IntentGraph
            agent_coordinator: Optional cognitive.agent_coordination.AgentCoordinator
        """
        self.state = QLMState(
            intent_graph=intent_graph, agent_coordinator=agent_coordinator
        )

        # Register the default Operator
        self.operator = Actor(
            id="operator-alexa",
            name="Alexa (Operator)",
            actor_type=ActorType.HUMAN,
            role=ActorRole.OPERATOR,
            state=ActorState.ACTIVE,
            capabilities={"intent", "approve", "veto", "query", "orchestrate"},
        )
        self.state.register_actor(self.operator)
        logger.info("QLM Interface initialized")
    def record_operator_intent(
        self,
        intent: str,
        description: str = "",
        intent_node_id: Optional[str] = None,
        metadata: Optional[Dict] = None,
    ) -> QLMEvent:
        """
        Record an Operator intent.

        This is HI (Human Intelligence) expressing a goal or desire.

        Args:
            intent: What the Operator wants (e.g., "Deploy authentication")
            description: Additional context
            intent_node_id: Optional link to an IntentGraph node
            metadata: Additional data

        Returns:
            The created QLMEvent
        """
        event = QLMEvent(
            source_layer=IntelligenceType.HI,
            actor_id=self.operator.id,
            event_type=EventType.OPERATOR_INTENT,
            intent_node_id=intent_node_id,
            data={
                "intent": intent,
                "description": description,
            },
            metadata=metadata or {},
        )
        self.state.ingest_event(event)
        logger.info(f"Operator intent recorded: {intent}")
        return event
    def record_operator_approval(
        self,
        what_approved: str,
        intent_node_id: Optional[str] = None,
        task_id: Optional[str] = None,
    ) -> QLMEvent:
        """Record Operator approval of agent work."""
        event = QLMEvent(
            source_layer=IntelligenceType.HI,
            actor_id=self.operator.id,
            event_type=EventType.OPERATOR_APPROVAL,
            intent_node_id=intent_node_id,
            task_id=task_id,
            data={"approved": what_approved},
        )
        self.state.ingest_event(event)
        logger.info(f"Operator approval recorded: {what_approved}")
        return event

    def record_operator_veto(
        self,
        what_vetoed: str,
        reason: str,
        intent_node_id: Optional[str] = None,
        task_id: Optional[str] = None,
    ) -> QLMEvent:
        """Record Operator veto of agent work."""
        event = QLMEvent(
            source_layer=IntelligenceType.HI,
            actor_id=self.operator.id,
            event_type=EventType.OPERATOR_VETO,
            intent_node_id=intent_node_id,
            task_id=task_id,
            data={"vetoed": what_vetoed, "reason": reason},
        )
        self.state.ingest_event(event)
        logger.info(f"Operator veto recorded: {what_vetoed}")
        return event

    def record_operator_query(self, query: str) -> QLMEvent:
        """Record the Operator asking a question."""
        event = QLMEvent(
            source_layer=IntelligenceType.HI,
            actor_id=self.operator.id,
            event_type=EventType.OPERATOR_QUERY,
            data={"query": query},
        )
        self.state.ingest_event(event)
        return event
    def record_agent_execution(
        self,
        agent_id: str,
        task_description: str,
        task_id: Optional[str] = None,
        intent_node_id: Optional[str] = None,
        metadata: Optional[Dict] = None,
    ) -> QLMEvent:
        """
        Record an agent starting execution.

        This is AI (Agent Intelligence) performing work.

        Args:
            agent_id: Agent identifier
            task_description: What the agent is doing
            task_id: Optional task ID
            intent_node_id: Link to the intent this fulfills
            metadata: Additional data

        Returns:
            The created QLMEvent
        """
        event = QLMEvent(
            source_layer=IntelligenceType.AI,
            actor_id=agent_id,
            event_type=EventType.AGENT_EXECUTION,
            task_id=task_id,
            intent_node_id=intent_node_id,
            data={
                "task": task_description,
            },
            metadata=metadata or {},
        )
        self.state.ingest_event(event)
        logger.info(f"Agent execution recorded: {agent_id} - {task_description}")
        return event

    def record_agent_completion(
        self,
        agent_id: str,
        task_id: str,
        success: bool = True,
        result: Optional[Dict] = None,
        intent_node_id: Optional[str] = None,
    ) -> QLMEvent:
        """Record an agent completing a task."""
        event = QLMEvent(
            source_layer=IntelligenceType.AI,
            actor_id=agent_id,
            event_type=EventType.AGENT_COMPLETION,
            task_id=task_id,
            intent_node_id=intent_node_id,
            data={
                "success": success,
                "result": result or {},
            },
        )
        self.state.ingest_event(event)
        logger.info(f"Agent completion recorded: {agent_id} - {task_id}")
        return event

    def record_agent_error(
        self,
        agent_id: str,
        task_id: str,
        error: str,
        intent_node_id: Optional[str] = None,
    ) -> QLMEvent:
        """Record an agent error."""
        event = QLMEvent(
            source_layer=IntelligenceType.AI,
            actor_id=agent_id,
            event_type=EventType.AGENT_ERROR,
            task_id=task_id,
            intent_node_id=intent_node_id,
            data={"error": error},
        )
        self.state.ingest_event(event)
        logger.warning(f"Agent error recorded: {agent_id} - {error}")
        return event
def record_agent_handoff(
self,
from_agent_id: str,
to_agent_id: str,
task_id: str,
handoff_message: str = "",
) -> QLMEvent:
"""Record an agent-to-agent handoff"""
event = QLMEvent(
source_layer=IntelligenceType.AI,
actor_id=from_agent_id,
event_type=EventType.AGENT_HANDOFF,
task_id=task_id,
data={
"to_agent": to_agent_id,
"message": handoff_message,
},
)
self.state.ingest_event(event)
        logger.info(f"Agent handoff recorded: {from_agent_id} → {to_agent_id}")
return event
def record_system_event(
self,
event_type: EventType,
description: str,
task_id: Optional[str] = None,
metadata: Optional[Dict] = None,
) -> QLMEvent:
"""
Record a system event (deploy, test, build, error).
Args:
event_type: Must be a SYSTEM_* event type
description: What happened
task_id: Optional related task
metadata: Additional data (test results, build logs, etc.)
"""
event = QLMEvent(
source_layer=IntelligenceType.AI, # System events in AI layer
actor_id="system",
event_type=event_type,
task_id=task_id,
data={"description": description},
metadata=metadata or {},
)
self.state.ingest_event(event)
logger.info(f"System event recorded: {event_type.value} - {description}")
return event
def register_agent(
self,
agent_id: str,
name: str,
role: ActorRole = ActorRole.EXECUTOR,
capabilities: Optional[List[str]] = None,
) -> Actor:
"""
Register a new agent in the QLM system.
Args:
agent_id: Unique agent identifier
name: Human-readable name
role: Agent's role
capabilities: List of capabilities
Returns:
The created Actor
"""
actor = Actor(
id=agent_id,
name=name,
actor_type=ActorType.AGENT,
role=role,
state=ActorState.IDLE,
capabilities=set(capabilities or []),
)
self.state.register_actor(actor)
logger.info(f"Agent registered: {name} ({agent_id})")
return actor
def get_summary(self, days: int = 7) -> str:
"""
Get an Operator-facing summary.
Args:
days: Number of days to summarize
Returns:
Human-readable summary string
"""
return self.state.summarize_for_operator(days=days)
def get_alignment_score(self) -> float:
"""Get HI-AI alignment score (0.0 to 1.0)"""
return self.state.calculate_alignment()
def get_recent_emergences(self, limit: int = 10) -> List[QIEmergence]:
"""Get recent QI emergence events"""
return self.state.emergences[-limit:]
def get_active_actors(self) -> List[Actor]:
"""Get all currently active actors"""
return self.state.query("active_actors")
def get_events_by_type(
self, event_type: EventType, limit: Optional[int] = None
) -> List[QLMEvent]:
"""Get events of a specific type"""
events = self.state.query("events_by_type", event_type=event_type)
return events[-limit:] if limit else events
def get_events_in_timerange(
self, start: datetime, end: Optional[datetime] = None
) -> List[QLMEvent]:
"""Get events within a time range"""
end = end or datetime.now()
return self.state.query("events_in_timerange", start=start, end=end)
def ask(self, question: str) -> str:
"""
Natural language query interface for Operator.
Examples:
- "What did agents do today?"
- "Are we aligned with my intent?"
- "Show me emergent behaviors"
- "What's the status?"
This is a simple keyword-based implementation.
In production, this would use an LLM to interpret questions.
"""
# Record the query
self.record_operator_query(question)
question_lower = question.lower()
# Today's activity
if "today" in question_lower or "what did" in question_lower:
            today_start = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
events = self.get_events_in_timerange(today_start)
agent_events = [e for e in events if e.source_layer == IntelligenceType.AI]
            response = "Today's Activity:\n"
response += f"- Total Events: {len(events)}\n"
response += f"- Agent Actions: {len(agent_events)}\n"
# Group by agent
by_agent = {}
for event in agent_events:
agent_id = event.actor_id
by_agent[agent_id] = by_agent.get(agent_id, 0) + 1
            response += "\nMost Active Agents:\n"
for agent_id, count in sorted(
by_agent.items(), key=lambda x: x[1], reverse=True
)[:5]:
response += f" - {agent_id}: {count} actions\n"
return response
# Alignment
elif "aligned" in question_lower or "alignment" in question_lower:
alignment = self.get_alignment_score()
response = f"HI-AI Alignment: {alignment:.1%}\n"
if alignment >= 0.8:
response += "✅ Excellent alignment - agents are following your intent well."
elif alignment >= 0.6:
response += "⚠️ Moderate alignment - some drift from intent detected."
else:
response += "🚨 Low alignment - significant divergence from your intent."
            response += "\n\nRecent Feedback:\n"
response += f"- Approvals: {self.state.metrics.operator_approvals}\n"
response += f"- Vetoes: {self.state.metrics.operator_vetoes}\n"
response += f"- Feedback Loops: {self.state.metrics.feedback_loop_count}\n"
return response
# Emergence
elif "emergent" in question_lower or "emergence" in question_lower or "qi" in question_lower:
emergences = self.get_recent_emergences(limit=5)
if not emergences:
return "No emergent behaviors detected recently."
response = f"Recent QI Emergence Events ({len(emergences)}):\n\n"
for em in emergences:
response += f"🌟 {em.pattern_name}\n"
response += f" {em.explanation}\n"
response += f" Confidence: {em.confidence:.0%}\n"
response += f" Impact: {em.impact_score:.1f}/1.0\n\n"
return response
# Status
elif "status" in question_lower or "summary" in question_lower:
return self.get_summary(days=7)
# Default
else:
return (
f"I don't understand the question: '{question}'\n\n"
f"Try asking:\n"
f"- What did agents do today?\n"
f"- Are we aligned with my intent?\n"
f"- Show me emergent behaviors\n"
f"- What's the status?\n"
)
def explain_event(self, event_id: str) -> Optional[str]:
"""Explain what happened with a specific event"""
transition = self.state.explain_transition(event_id)
if not transition:
return None
response = f"Event: {event_id}\n"
response += f"Time: {transition.timestamp}\n\n"
        response += "Changes:\n"
for change in transition.changes:
response += f" - {change}\n"
return response
def export_state(self, file_path: str) -> None:
"""Export QLM state to JSON file"""
self.state.export_json(file_path)
logger.info(f"QLM state exported to: {file_path}")
def import_state(self, file_path: str) -> None:
"""Import QLM state from JSON file"""
self.state.import_json(file_path)
logger.info(f"QLM state imported from: {file_path}")

346
qlm_lab/demo.py Normal file
View File

@@ -0,0 +1,346 @@
"""
QLM Demo Script - Shows QLM in action
This script demonstrates the core QLM functionality:
1. Recording Operator intent
2. Recording agent executions
3. Detecting QI emergence
4. Querying state
5. Generating Operator summaries
Run: python -m qlm_lab.demo
"""
import logging
from datetime import datetime, timedelta
from qlm_lab.api import QLMInterface
from qlm_lab.models import ActorRole, EventType
# Setup logging
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
def demo_basic_workflow():
"""Demonstrate basic QLM workflow"""
print("=" * 60)
print("QLM Demo: Basic Workflow")
print("=" * 60)
print()
# Initialize QLM
qlm = QLMInterface()
print("✓ QLM initialized")
print()
# Register agents
agent1 = qlm.register_agent(
agent_id="agent-coder-001",
name="CodeWriter",
role=ActorRole.CODER,
capabilities=["python", "javascript", "testing"],
)
print(f"✓ Registered agent: {agent1.name}")
agent2 = qlm.register_agent(
agent_id="agent-reviewer-001",
name="CodeReviewer",
role=ActorRole.REVIEWER,
capabilities=["code_review", "security_audit"],
)
print(f"✓ Registered agent: {agent2.name}")
print()
# Operator defines intent
print("👤 Operator: 'Build authentication feature'")
intent_event = qlm.record_operator_intent(
intent="Build authentication feature",
description="Implement login, signup, and password reset",
intent_node_id="intent-auth-001",
)
print(f"✓ Intent recorded (event: {intent_event.id[:8]}...)")
print()
# Agent executes
print("🤖 Agent CodeWriter: Starting implementation...")
exec_event = qlm.record_agent_execution(
agent_id="agent-coder-001",
task_description="Implement login endpoint",
task_id="task-login-001",
intent_node_id="intent-auth-001",
)
print(f"✓ Execution recorded (event: {exec_event.id[:8]}...)")
print()
# Agent completes
print("🤖 Agent CodeWriter: Completed!")
completion_event = qlm.record_agent_completion(
agent_id="agent-coder-001",
task_id="task-login-001",
success=True,
result={"files_modified": ["auth.py", "routes.py"], "tests_added": 5},
intent_node_id="intent-auth-001",
)
print(f"✓ Completion recorded (event: {completion_event.id[:8]}...)")
print()
# Operator approves
print("👤 Operator: 'Looks good!'")
approval_event = qlm.record_operator_approval(
what_approved="Login implementation",
intent_node_id="intent-auth-001",
task_id="task-login-001",
)
print(f"✓ Approval recorded (event: {approval_event.id[:8]}...)")
print()
# Query state
print("📊 Query: Active actors")
active = qlm.get_active_actors()
print(f" Active actors: {len(active)}")
for actor in active:
print(f" - {actor.name} ({actor.role.value})")
print()
# Alignment
print("🎯 Calculating HI-AI alignment...")
alignment = qlm.get_alignment_score()
print(f" Alignment: {alignment:.1%}")
print()
# Summary
print("📝 Operator Summary:")
print("-" * 60)
summary = qlm.get_summary(days=1)
print(summary)
print()
def demo_qi_emergence():
"""Demonstrate QI emergence detection"""
print("=" * 60)
print("QLM Demo: QI Emergence Detection")
print("=" * 60)
print()
qlm = QLMInterface()
qlm.register_agent("agent-001", "Agent1", ActorRole.EXECUTOR)
# Operator intent
qlm.record_operator_intent(
intent="Fix database connection bug",
intent_node_id="intent-bugfix-001",
)
# Agent hits error
print("🤖 Agent encounters error...")
qlm.record_agent_error(
agent_id="agent-001",
task_id="task-bugfix-001",
error="Database connection refused",
intent_node_id="intent-bugfix-001",
)
# Agent self-corrects (QI emergence!)
print("🤖 Agent self-corrects (trying alternative approach)...")
qlm.record_agent_execution(
agent_id="agent-001",
task_description="Fix database connection bug (retry with connection pool)",
task_id="task-bugfix-001",
intent_node_id="intent-bugfix-001",
)
qlm.record_agent_completion(
agent_id="agent-001",
task_id="task-bugfix-001",
success=True,
result={"approach": "connection_pool", "self_corrected": True},
intent_node_id="intent-bugfix-001",
)
print()
print("✨ QI Emergence Detection:")
emergences = qlm.get_recent_emergences()
if emergences:
for em in emergences:
print(f" Pattern: {em.pattern_name}")
print(f" Explanation: {em.explanation}")
print(f" Confidence: {em.confidence:.0%}")
print(f" Impact: {em.impact_score:.1f}/1.0")
else:
print(" (No emergence detected - pattern matching may need tuning)")
print()
def demo_operator_queries():
"""Demonstrate natural language queries"""
print("=" * 60)
print("QLM Demo: Operator Queries")
print("=" * 60)
print()
qlm = QLMInterface()
# Setup some activity
qlm.register_agent("agent-001", "Agent1", ActorRole.EXECUTOR)
qlm.record_operator_intent("Test operator queries")
qlm.record_agent_execution("agent-001", "Do something", task_id="task-001")
qlm.record_agent_completion("agent-001", "task-001", success=True)
qlm.record_operator_approval("Agent work", task_id="task-001")
# Ask questions
questions = [
"What did agents do today?",
"Are we aligned with my intent?",
"What's the status?",
]
for question in questions:
print(f"👤 Operator: '{question}'")
print("-" * 60)
answer = qlm.ask(question)
print(answer)
print()
def demo_full_scenario():
"""Demonstrate a complete multi-agent workflow"""
print("=" * 60)
print("QLM Demo: Full Multi-Agent Scenario")
print("=" * 60)
print()
qlm = QLMInterface()
# Register agents
coder = qlm.register_agent("coder", "CodeWriter", ActorRole.CODER)
reviewer = qlm.register_agent("reviewer", "CodeReviewer", ActorRole.REVIEWER)
tester = qlm.register_agent("tester", "TestRunner", ActorRole.TESTER)
print(f"✓ Registered 3 agents: {coder.name}, {reviewer.name}, {tester.name}")
print()
# Operator intent
print("👤 Operator: 'Ship payment integration'")
intent = qlm.record_operator_intent(
intent="Ship payment integration",
description="Integrate Stripe, add tests, deploy",
intent_node_id="intent-payment-001",
)
print()
# Coder implements
print("🤖 CodeWriter: Implementing...")
qlm.record_agent_execution(
"coder", "Implement Stripe integration", "task-001", "intent-payment-001"
)
qlm.record_agent_completion("coder", "task-001", True)
print(" ✓ Completed")
print()
# Handoff to reviewer
print("🤖 CodeWriter → CodeReviewer")
qlm.record_agent_handoff(
"coder", "reviewer", "task-001", "Ready for review"
)
print()
# Reviewer reviews
print("🤖 CodeReviewer: Reviewing code...")
qlm.record_agent_execution("reviewer", "Review Stripe code", "task-002")
qlm.record_agent_completion("reviewer", "task-002", True)
print(" ✓ Approved")
print()
# Handoff to tester
print("🤖 CodeReviewer → TestRunner")
qlm.record_agent_handoff("reviewer", "tester", "task-002", "Ready for testing")
print()
# Tester runs tests
print("🤖 TestRunner: Running tests...")
qlm.record_system_event(
EventType.SYSTEM_TEST,
"Payment integration tests",
task_id="task-003",
metadata={"passed": True, "test_count": 15},
)
print(" ✓ All tests passed")
print()
# Operator approves
print("👤 Operator: 'Ship it!'")
qlm.record_operator_approval(
"Payment integration", intent_node_id="intent-payment-001"
)
print()
# Deploy
print("🚀 Deploying...")
qlm.record_system_event(
EventType.SYSTEM_DEPLOY,
"Deployed payment feature to production",
metadata={"environment": "production", "version": "v1.2.0"},
)
print(" ✓ Deployed")
print()
# Show results
print("=" * 60)
print("📊 Final State")
print("=" * 60)
print()
print("Metrics:")
print(f" Total events: {len(qlm.state.events)}")
print(f" HI events: {qlm.state.metrics.hi_events}")
print(f" AI events: {qlm.state.metrics.ai_events}")
print(f" System events: {qlm.state.metrics.system_events}")
print(f" Alignment: {qlm.get_alignment_score():.1%}")
print()
print("Event Timeline:")
for event in qlm.state.events:
print(f" [{event.timestamp.strftime('%H:%M:%S')}] {event.event_type.value}: {event.data}")
print()
def main():
"""Run all demos"""
print("\n🌟 QLM (Quantum Language Model) Demo Suite 🌟\n")
try:
demo_basic_workflow()
input("Press Enter to continue to next demo...")
print("\n")
demo_qi_emergence()
input("Press Enter to continue to next demo...")
print("\n")
demo_operator_queries()
input("Press Enter to continue to next demo...")
print("\n")
demo_full_scenario()
except KeyboardInterrupt:
print("\n\nDemo interrupted by user.")
except Exception as e:
print(f"\n\n❌ Demo error: {e}")
import traceback
traceback.print_exc()
print("\n✨ Demo complete! ✨\n")
print("Next steps:")
print(" 1. Run experiments: python -m qlm_lab.experiments.run_all")
print(" 2. Read docs: docs/QLM.md")
print(" 3. Integrate with your system!")
print()
if __name__ == "__main__":
main()

10
qlm_lab/experiments/__init__.py Normal file
View File

@@ -0,0 +1,10 @@
"""
QLM Experiments - Validate QLM's value through experiments
Each experiment tests a specific hypothesis about QLM.
"""
from qlm_lab.experiments.alignment_detection import AlignmentDetectionExperiment
from qlm_lab.experiments.emergence_detection import EmergenceDetectionExperiment
__all__ = ["AlignmentDetectionExperiment", "EmergenceDetectionExperiment"]

198
qlm_lab/experiments/alignment_detection.py Normal file
View File

@@ -0,0 +1,198 @@
"""
Experiment: Alignment Detection
Hypothesis: QLM can accurately detect when AI agents drift from Operator intent.
Setup:
1. Record Operator intent
2. Simulate agent executions (some aligned, some not)
3. Record Operator feedback (approvals/vetoes)
4. Measure QLM's alignment score accuracy
Success Criteria:
- QLM alignment score correlates with actual alignment
- QLM detects misalignment before Operator veto
"""
import json
from typing import Dict, Any
import logging
from qlm_lab.api import QLMInterface
from qlm_lab.models import ActorRole
logger = logging.getLogger(__name__)
class AlignmentDetectionExperiment:
"""Experiment: Does QLM detect intent drift?"""
def __init__(self):
self.qlm = QLMInterface()
self.results = {
"experiment": "alignment_detection",
"hypothesis": "QLM can detect AI drift from HI intent",
"scenarios": [],
"metrics": {},
}
def run_scenario(
self,
scenario_name: str,
intent: str,
agent_actions: list,
expected_alignment: float,
) -> Dict[str, Any]:
"""
Run one alignment scenario.
Args:
scenario_name: Name of scenario
intent: Operator intent
agent_actions: List of (agent_id, action, aligned: bool)
expected_alignment: Expected alignment score
Returns:
Scenario results
"""
print(f"\nScenario: {scenario_name}")
print(f"Intent: {intent}")
# Record intent
intent_id = f"intent-{scenario_name}"
self.qlm.record_operator_intent(intent, intent_node_id=intent_id)
# Execute agent actions
for i, (agent_id, action, is_aligned) in enumerate(agent_actions):
task_id = f"task-{scenario_name}-{i}"
# Agent executes
self.qlm.record_agent_execution(
agent_id, action, task_id, intent_node_id=intent_id
)
# Agent completes
self.qlm.record_agent_completion(agent_id, task_id, success=True)
# Operator feedback
if is_aligned:
self.qlm.record_operator_approval(
action, intent_node_id=intent_id, task_id=task_id
)
print(f"{action} (aligned)")
else:
self.qlm.record_operator_veto(
action,
"Doesn't match my intent",
intent_node_id=intent_id,
task_id=task_id,
)
print(f"{action} (not aligned)")
# Calculate alignment
measured_alignment = self.qlm.get_alignment_score()
print(f" Expected alignment: {expected_alignment:.1%}")
print(f" Measured alignment: {measured_alignment:.1%}")
# Calculate error
error = abs(measured_alignment - expected_alignment)
print(f" Error: {error:.1%}")
result = {
"scenario": scenario_name,
"intent": intent,
"expected_alignment": expected_alignment,
"measured_alignment": measured_alignment,
"error": error,
"success": error < 0.2, # Success if within 20%
}
self.results["scenarios"].append(result)
return result
def run(self) -> Dict[str, Any]:
"""Run all alignment scenarios"""
print("=" * 60)
print("Experiment: Alignment Detection")
print("=" * 60)
# Register agents
self.qlm.register_agent("agent-001", "Agent1", ActorRole.EXECUTOR)
self.qlm.register_agent("agent-002", "Agent2", ActorRole.EXECUTOR)
# Scenario 1: Perfect alignment
self.run_scenario(
"perfect_alignment",
intent="Build login page",
agent_actions=[
("agent-001", "Create login form HTML", True),
("agent-001", "Add CSS styling", True),
("agent-002", "Add authentication logic", True),
],
expected_alignment=1.0,
)
# Scenario 2: Partial alignment
self.run_scenario(
"partial_alignment",
intent="Optimize database queries",
agent_actions=[
("agent-001", "Add database indexes", True),
("agent-001", "Refactor unrelated code", False), # Off-track
("agent-002", "Cache query results", True),
],
expected_alignment=0.67,
)
# Scenario 3: No alignment
self.run_scenario(
"no_alignment",
intent="Fix security vulnerability",
agent_actions=[
("agent-001", "Add new feature", False),
("agent-001", "Refactor UI", False),
("agent-002", "Update documentation", False),
],
expected_alignment=0.0,
)
# Calculate metrics
total_scenarios = len(self.results["scenarios"])
successful_scenarios = sum(
1 for s in self.results["scenarios"] if s["success"]
)
avg_error = sum(s["error"] for s in self.results["scenarios"]) / total_scenarios
self.results["metrics"] = {
"total_scenarios": total_scenarios,
"successful_scenarios": successful_scenarios,
"success_rate": successful_scenarios / total_scenarios,
"average_error": avg_error,
}
print("\n" + "=" * 60)
print("Experiment Results")
print("=" * 60)
print(f"Success Rate: {self.results['metrics']['success_rate']:.1%}")
print(f"Average Error: {self.results['metrics']['average_error']:.1%}")
print()
if self.results["metrics"]["success_rate"] >= 0.8:
print("✅ PASS: QLM accurately detects alignment")
else:
print("❌ FAIL: QLM alignment detection needs improvement")
return self.results
def save_results(self, file_path: str) -> None:
"""Save experiment results to JSON"""
with open(file_path, "w") as f:
json.dump(self.results, f, indent=2)
print(f"\nResults saved to: {file_path}")
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
experiment = AlignmentDetectionExperiment()
results = experiment.run()
experiment.save_results("alignment_detection_results.json")

222
qlm_lab/experiments/emergence_detection.py Normal file
View File

@@ -0,0 +1,222 @@
"""
Experiment: QI Emergence Detection
Hypothesis: QLM can detect emergent behaviors (QI) when HI+AI interact in feedback loops.
Setup:
1. Simulate various agent behavior patterns
2. Check if QLM detects known QI patterns
3. Measure false positive/negative rates
Success Criteria:
- QLM detects at least 80% of true emergent patterns
- False positive rate < 20%
"""
import json
from typing import Dict, Any, List
import logging
from qlm_lab.api import QLMInterface
from qlm_lab.models import ActorRole, EventType
logger = logging.getLogger(__name__)
class EmergenceDetectionExperiment:
"""Experiment: Does QLM detect QI emergence?"""
def __init__(self):
self.qlm = QLMInterface()
self.results = {
"experiment": "emergence_detection",
"hypothesis": "QLM detects emergent HI+AI behaviors",
"patterns_tested": [],
"metrics": {},
}
def simulate_pattern(
self, pattern_name: str, should_trigger: bool
) -> Dict[str, Any]:
"""
Simulate a behavior pattern and check if QLM detects emergence.
Args:
pattern_name: Name of pattern to simulate
should_trigger: Whether this should trigger emergence detection
Returns:
Pattern test results
"""
print(f"\nTesting pattern: {pattern_name}")
print(f" Should trigger: {should_trigger}")
initial_emergences = len(self.qlm.state.emergences)
# Simulate different patterns
if pattern_name == "agent_self_correction":
# Agent hits error, then self-corrects
self.qlm.record_agent_execution(
"agent-001", "Deploy feature", "task-001"
)
self.qlm.record_agent_error("agent-001", "task-001", "Deployment failed")
self.qlm.record_agent_execution(
"agent-001", "Deploy feature (retry)", "task-001"
)
self.qlm.record_agent_completion("agent-001", "task-001", success=True)
elif pattern_name == "operator_feedback_loop":
# HI intent → AI execution → HI approval → refined intent
self.qlm.record_operator_intent("Build dashboard", intent_node_id="intent-001")
self.qlm.record_agent_execution(
"agent-001", "Create dashboard", "task-001", "intent-001"
)
self.qlm.record_agent_completion("agent-001", "task-001", success=True)
self.qlm.record_operator_approval("Dashboard", intent_node_id="intent-001")
self.qlm.record_operator_intent(
"Add charts to dashboard", intent_node_id="intent-002"
)
elif pattern_name == "emergent_collaboration":
# Multiple agents self-organize
self.qlm.record_agent_execution(
"agent-001", "Start task", "task-001"
)
self.qlm.record_agent_handoff(
"agent-001", "agent-002", "task-001", "Need help"
)
self.qlm.record_agent_handoff(
"agent-002", "agent-003", "task-001", "Pass to specialist"
)
elif pattern_name == "normal_execution":
# Just normal execution, no emergence
self.qlm.record_agent_execution(
"agent-001", "Normal task", "task-normal"
)
self.qlm.record_agent_completion("agent-001", "task-normal", success=True)
# Check if emergence was detected
final_emergences = len(self.qlm.state.emergences)
detected = final_emergences > initial_emergences
print(f" Detected: {detected}")
# Determine correctness
correct = detected == should_trigger
        if correct:
            print(" ✅ Correct")
        elif detected and not should_trigger:
            print(" ❌ False Positive")
        else:
            print(" ❌ False Negative")
result = {
"pattern": pattern_name,
"should_trigger": should_trigger,
"detected": detected,
"correct": correct,
"type": (
"true_positive"
if detected and should_trigger
else "true_negative"
if not detected and not should_trigger
else "false_positive"
if detected and not should_trigger
else "false_negative"
),
}
self.results["patterns_tested"].append(result)
return result
def run(self) -> Dict[str, Any]:
"""Run all emergence detection tests"""
print("=" * 60)
print("Experiment: QI Emergence Detection")
print("=" * 60)
# Register agents
self.qlm.register_agent("agent-001", "Agent1", ActorRole.EXECUTOR)
self.qlm.register_agent("agent-002", "Agent2", ActorRole.EXECUTOR)
self.qlm.register_agent("agent-003", "Agent3", ActorRole.EXECUTOR)
# Test patterns that should trigger
self.simulate_pattern("agent_self_correction", should_trigger=True)
self.simulate_pattern("operator_feedback_loop", should_trigger=True)
# self.simulate_pattern("emergent_collaboration", should_trigger=True)
# Test patterns that should NOT trigger
self.simulate_pattern("normal_execution", should_trigger=False)
# Calculate metrics
total = len(self.results["patterns_tested"])
correct = sum(1 for p in self.results["patterns_tested"] if p["correct"])
true_positives = sum(
1 for p in self.results["patterns_tested"] if p["type"] == "true_positive"
)
false_positives = sum(
1 for p in self.results["patterns_tested"] if p["type"] == "false_positive"
)
true_negatives = sum(
1 for p in self.results["patterns_tested"] if p["type"] == "true_negative"
)
false_negatives = sum(
1 for p in self.results["patterns_tested"] if p["type"] == "false_negative"
)
accuracy = correct / total if total > 0 else 0
precision = (
true_positives / (true_positives + false_positives)
if (true_positives + false_positives) > 0
else 0
)
recall = (
true_positives / (true_positives + false_negatives)
if (true_positives + false_negatives) > 0
else 0
)
self.results["metrics"] = {
"total_patterns": total,
"correct": correct,
"accuracy": accuracy,
"true_positives": true_positives,
"false_positives": false_positives,
"true_negatives": true_negatives,
"false_negatives": false_negatives,
"precision": precision,
"recall": recall,
}
print("\n" + "=" * 60)
print("Experiment Results")
print("=" * 60)
print(f"Accuracy: {accuracy:.1%}")
print(f"Precision: {precision:.1%}")
print(f"Recall: {recall:.1%}")
        # Standard definitions: FPR = FP / (FP + TN), FNR = FN / (FN + TP)
        fp_denom = false_positives + true_negatives
        fn_denom = false_negatives + true_positives
        fp_rate = false_positives / fp_denom if fp_denom else 0.0
        fn_rate = false_negatives / fn_denom if fn_denom else 0.0
        print(f"False Positive Rate: {fp_rate:.1%}")
        print(f"False Negative Rate: {fn_rate:.1%}")
        print()
        if accuracy >= 0.8 and fp_rate < 0.2:
print("✅ PASS: QLM accurately detects QI emergence")
else:
print("❌ FAIL: QI emergence detection needs improvement")
return self.results
def save_results(self, file_path: str) -> None:
"""Save experiment results to JSON"""
with open(file_path, "w") as f:
json.dump(self.results, f, indent=2)
print(f"\nResults saved to: {file_path}")
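# Standalone sanity check of the accuracy/precision/recall arithmetic used in
# run() above. The counts below are hypothetical illustration values, not real
# experiment output; this is a sketch, not part of the experiment itself.
def _check_rate_formulas() -> None:
    tp, fp, tn, fn = 2, 1, 6, 1
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    assert accuracy == 0.8
    assert precision == recall == 2 / 3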
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
experiment = EmergenceDetectionExperiment()
results = experiment.run()
experiment.save_results("emergence_detection_results.json")

17
qlm_lab/ingestion/__init__.py Normal file
View File

@@ -0,0 +1,17 @@
"""
QLM Ingestion - Wire QLM to Reality
This module contains connectors that ingest real system data into QLM:
- Git commits → QLMEvents
- CI test results → System events
- Agent logs → Agent execution events
- Deployment events → System events
Each connector transforms external data into QLMEvents.
"""
from qlm_lab.ingestion.git import GitConnector
from qlm_lab.ingestion.ci import CIConnector
from qlm_lab.ingestion.agent_logs import AgentLogConnector
__all__ = ["GitConnector", "CIConnector", "AgentLogConnector"]

209
qlm_lab/ingestion/agent_logs.py Normal file
View File

@@ -0,0 +1,209 @@
"""
Agent Log Connector - Ingest agent execution logs into QLM
Parses agent logs and converts them into QLM events.
"""
import json
import re
from datetime import datetime
from typing import List, Optional, Dict, Any
import logging
from qlm_lab.models import EventType, ActorRole
from qlm_lab.api import QLMInterface
logger = logging.getLogger(__name__)
class AgentLogConnector:
"""
Connects QLM to agent execution logs.
Usage:
connector = AgentLogConnector(qlm=qlm_interface)
connector.ingest_log_file("/path/to/agent.log")
"""
def __init__(self, qlm: QLMInterface):
"""
Args:
qlm: QLMInterface instance
"""
self.qlm = qlm
def parse_log_line(self, line: str) -> Optional[Dict[str, Any]]:
"""
Parse a log line into structured data.
Expects format: [timestamp] [agent_id] [level] message
Args:
line: Log line string
Returns:
Parsed log data or None
"""
# Example: [2024-01-15 10:30:45] [agent-coder-001] [INFO] Task started: implement login
pattern = r"\[([^\]]+)\]\s*\[([^\]]+)\]\s*\[([^\]]+)\]\s*(.+)"
match = re.match(pattern, line)
if not match:
return None
timestamp_str, agent_id, level, message = match.groups()
try:
timestamp = datetime.fromisoformat(timestamp_str)
except ValueError:
timestamp = datetime.now()
return {
"timestamp": timestamp,
"agent_id": agent_id.strip(),
"level": level.strip(),
"message": message.strip(),
}
def ingest_log_line(self, log_data: Dict[str, Any]) -> Optional[Any]:
"""
Ingest a parsed log line into QLM.
Args:
log_data: Parsed log data from parse_log_line()
Returns:
Created QLMEvent or None
"""
agent_id = log_data["agent_id"]
message = log_data["message"]
# Register agent if not exists
# (In production, would check if already registered)
# Detect event type from message
message_lower = message.lower()
if "task started" in message_lower or "executing" in message_lower:
# Extract task description
task_desc = message.split(":", 1)[1].strip() if ":" in message else message
return self.qlm.record_agent_execution(
agent_id=agent_id,
task_description=task_desc,
metadata={"log_timestamp": log_data["timestamp"].isoformat()},
)
elif "task completed" in message_lower or "finished" in message_lower:
# Extract task ID if present
task_id = self._extract_task_id(message)
return self.qlm.record_agent_completion(
agent_id=agent_id,
task_id=task_id or "unknown",
success=True,
result={"message": message},
)
elif "error" in message_lower or "failed" in message_lower:
task_id = self._extract_task_id(message)
return self.qlm.record_agent_error(
agent_id=agent_id,
task_id=task_id or "unknown",
error=message,
)
elif "handoff" in message_lower or "passing to" in message_lower:
# Extract target agent
to_agent = self._extract_target_agent(message)
task_id = self._extract_task_id(message)
if to_agent:
return self.qlm.record_agent_handoff(
from_agent_id=agent_id,
to_agent_id=to_agent,
task_id=task_id or "unknown",
handoff_message=message,
)
return None
    def _extract_task_id(self, message: str) -> Optional[str]:
        """Extract task ID from message if present"""
        # Match task-XXX or task_XXX; allow hyphenated IDs like "task-login-001"
        # (a bare \w+ would stop at the first hyphen and truncate the ID)
        match = re.search(r"task[_-]([\w-]+)", message.lower())
        if match:
            return f"task-{match.group(1)}"
        return None
def _extract_target_agent(self, message: str) -> Optional[str]:
"""Extract target agent ID from handoff message"""
# Look for "to agent-XXX" or "→ agent-XXX"
match = re.search(r"(?:to|→)\s+(agent-[\w-]+)", message.lower())
if match:
return match.group(1)
return None
def ingest_log_file(self, file_path: str) -> List[Any]:
"""
Ingest an entire log file.
Args:
file_path: Path to log file
Returns:
List of created QLMEvents
"""
events = []
try:
with open(file_path, "r") as f:
for line in f:
line = line.strip()
if not line:
continue
log_data = self.parse_log_line(line)
if log_data:
event = self.ingest_log_line(log_data)
if event:
events.append(event)
logger.info(f"Ingested {len(events)} events from {file_path}")
return events
except FileNotFoundError:
logger.error(f"Log file not found: {file_path}")
return []
except Exception as e:
logger.error(f"Error ingesting log file: {e}")
return []
def ingest_structured_log(self, log_entries: List[Dict[str, Any]]) -> List[Any]:
"""
Ingest structured log entries (e.g., from JSON logs).
Args:
log_entries: List of log entry dictionaries
Returns:
List of created QLMEvents
"""
events = []
for entry in log_entries:
# Convert to standard format
log_data = {
"timestamp": datetime.fromisoformat(entry.get("timestamp", datetime.now().isoformat())),
"agent_id": entry.get("agent_id", "unknown"),
"level": entry.get("level", "INFO"),
"message": entry.get("message", ""),
}
event = self.ingest_log_line(log_data)
if event:
events.append(event)
logger.info(f"Ingested {len(events)} structured log entries")
return events
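# Minimal self-check of the expected "[timestamp] [agent_id] [level] message"
# log format. The sample line is hypothetical, and the pattern below mirrors
# the one used in parse_log_line above rather than being imported from it.
if __name__ == "__main__":
    import re as _re

    _sample = "[2024-01-15 10:30:45] [agent-coder-001] [INFO] Task started: implement login"
    _m = _re.match(r"\[([^\]]+)\]\s*\[([^\]]+)\]\s*\[([^\]]+)\]\s*(.+)", _sample)
    assert _m is not None
    # Groups: timestamp, agent_id, level, message
    print(_m.group(2), _m.group(3), "-", _m.group(4))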

224
qlm_lab/ingestion/ci.py Normal file
View File

@@ -0,0 +1,224 @@
"""
CI Connector - Ingest CI/test results into QLM
Converts CI events into QLM system events:
- Test runs → SYSTEM_TEST events
- Build results → SYSTEM_BUILD events
- Deploy actions → SYSTEM_DEPLOY events
"""
import json
from datetime import datetime
from typing import List, Optional, Dict, Any
import logging
from qlm_lab.models import EventType
from qlm_lab.api import QLMInterface
logger = logging.getLogger(__name__)
class CIConnector:
"""
Connects QLM to a CI/CD system (GitHub Actions, Jenkins, etc.)
Usage:
connector = CIConnector(qlm=qlm_interface)
connector.ingest_test_result(test_data)
connector.ingest_build_result(build_data)
"""
def __init__(self, qlm: QLMInterface):
"""
Args:
qlm: QLMInterface instance
"""
self.qlm = qlm
def ingest_test_result(
self,
test_name: str,
passed: bool,
duration_seconds: float,
failures: Optional[List[str]] = None,
commit_hash: Optional[str] = None,
task_id: Optional[str] = None,
) -> Any:
"""
Ingest a test run result.
Args:
test_name: Name of test suite
passed: Whether tests passed
duration_seconds: How long tests took
failures: List of failed test names
commit_hash: Related commit
task_id: Related task
Returns:
Created QLMEvent
"""
event_type = EventType.SYSTEM_TEST
description = f"Test '{test_name}': {'PASSED' if passed else 'FAILED'}"
metadata = {
"test_name": test_name,
"passed": passed,
"duration_seconds": duration_seconds,
"failures": failures or [],
"commit_hash": commit_hash,
}
event = self.qlm.record_system_event(
event_type=event_type,
description=description,
task_id=task_id,
metadata=metadata,
)
logger.info(f"Ingested test result: {test_name} - {'PASS' if passed else 'FAIL'}")
return event
def ingest_build_result(
self,
build_name: str,
success: bool,
duration_seconds: float,
artifacts: Optional[List[str]] = None,
commit_hash: Optional[str] = None,
task_id: Optional[str] = None,
) -> Any:
"""
Ingest a build result.
Args:
build_name: Name of build
success: Whether build succeeded
duration_seconds: Build duration
artifacts: List of produced artifacts
commit_hash: Related commit
task_id: Related task
Returns:
Created QLMEvent
"""
event_type = EventType.SYSTEM_BUILD
description = f"Build '{build_name}': {'SUCCESS' if success else 'FAILED'}"
metadata = {
"build_name": build_name,
"success": success,
"duration_seconds": duration_seconds,
"artifacts": artifacts or [],
"commit_hash": commit_hash,
}
event = self.qlm.record_system_event(
event_type=event_type,
description=description,
task_id=task_id,
metadata=metadata,
)
logger.info(f"Ingested build result: {build_name} - {'SUCCESS' if success else 'FAIL'}")
return event
def ingest_deploy_result(
self,
service_name: str,
environment: str,
success: bool,
version: Optional[str] = None,
commit_hash: Optional[str] = None,
task_id: Optional[str] = None,
) -> Any:
"""
Ingest a deployment result.
Args:
service_name: What was deployed
environment: Where (production, staging, etc.)
success: Whether deploy succeeded
version: Version deployed
commit_hash: Related commit
task_id: Related task
Returns:
Created QLMEvent
"""
event_type = EventType.SYSTEM_DEPLOY
description = f"Deploy '{service_name}' to {environment}: {'SUCCESS' if success else 'FAILED'}"
metadata = {
"service": service_name,
"environment": environment,
"success": success,
"version": version,
"commit_hash": commit_hash,
}
event = self.qlm.record_system_event(
event_type=event_type,
description=description,
task_id=task_id,
metadata=metadata,
)
logger.info(f"Ingested deploy: {service_name} to {environment} - {'SUCCESS' if success else 'FAIL'}")
return event
def ingest_from_github_actions(self, workflow_run: Dict[str, Any]) -> List[Any]:
"""
Ingest events from a GitHub Actions workflow run.
Args:
workflow_run: GitHub Actions workflow run data (JSON)
Returns:
List of created QLMEvents
"""
events = []
# Extract data from workflow
name = workflow_run.get("name", "Unknown workflow")
conclusion = workflow_run.get("conclusion", "unknown")
success = conclusion == "success"
# Get commit
head_commit = workflow_run.get("head_commit", {})
commit_hash = head_commit.get("id", None)
# Create test event (assuming workflow is tests)
if "test" in name.lower():
event = self.ingest_test_result(
test_name=name,
passed=success,
duration_seconds=0, # Would need to calculate from timestamps
commit_hash=commit_hash,
)
events.append(event)
# Create build event (assuming workflow builds)
elif "build" in name.lower():
event = self.ingest_build_result(
build_name=name,
success=success,
duration_seconds=0,
commit_hash=commit_hash,
)
events.append(event)
# Create deploy event
elif "deploy" in name.lower():
event = self.ingest_deploy_result(
service_name=name,
environment="production", # Would need to parse from workflow
success=success,
commit_hash=commit_hash,
)
events.append(event)
return events
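The keyword routing in `ingest_from_github_actions` can be sketched standalone. The helper name below is illustrative; the returned strings match the `EventType` values this module records, and the check order (test, then build, then deploy) mirrors the elif chain above:

```python
from typing import Any, Dict, Optional

def classify_workflow(workflow_run: Dict[str, Any]) -> Optional[str]:
    """Route a GitHub Actions workflow run to a QLM system event type by name."""
    name = workflow_run.get("name", "").lower()
    for keyword, event_type in (
        ("test", "system_test"),
        ("build", "system_build"),
        ("deploy", "system_deploy"),
    ):
        if keyword in name:
            return event_type
    return None  # workflow name did not match any known category

run = {"name": "CI Tests", "conclusion": "success"}
print(classify_workflow(run))  # system_test
```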

qlm_lab/ingestion/git.py (new file, 240 lines)
@@ -0,0 +1,240 @@
"""
Git Connector - Ingest git history into QLM
Converts git commits into QLM events:
- Commits by humans → OPERATOR_INTENT or HI events
- Commits by bots/agents → AGENT_EXECUTION events
- Merge commits → coordination events
"""
import subprocess
import re
from datetime import datetime
from typing import List, Optional, Dict, Any
import logging
from qlm_lab.models import (
QLMEvent,
ActorType,
)
from qlm_lab.api import QLMInterface
logger = logging.getLogger(__name__)
class GitConnector:
"""
Connects QLM to git repository history.
Usage:
connector = GitConnector(repo_path="/path/to/repo", qlm=qlm_interface)
events = connector.ingest_recent_commits(days=7)
"""
def __init__(self, repo_path: str, qlm: QLMInterface):
"""
Args:
repo_path: Path to git repository
qlm: QLMInterface instance
"""
self.repo_path = repo_path
self.qlm = qlm
# Patterns to detect agent commits
self.agent_patterns = [
r"^claude/", # Claude branches
r"^copilot/", # Copilot branches
r"^codex/", # Codex branches
r"\[bot\]", # Bot commit messages
r"\[agent\]", # Agent commit messages
]
def is_agent_commit(self, commit_data: Dict[str, str]) -> bool:
"""Determine if a commit was made by an agent"""
# Check author name/email
author = commit_data.get("author", "").lower()
if any(
pattern in author
for pattern in ["bot", "agent", "claude", "copilot", "codex"]
):
return True
# Check branch name (only meaningful if the caller supplies "branch";
# get_git_log() below does not include branch information)
branch = commit_data.get("branch", "")
for pattern in self.agent_patterns:
if re.search(pattern, branch):
return True
# Check commit message
message = commit_data.get("message", "")
if "[agent]" in message.lower() or "[bot]" in message.lower():
return True
return False
def get_git_log(
self, since: Optional[str] = None, until: Optional[str] = None
) -> List[Dict[str, Any]]:
"""
Get git log as structured data.
Args:
since: Start date (e.g., "7 days ago")
until: End date (e.g., "now")
Returns:
List of commit dictionaries
"""
cmd = [
"git",
"-C",
self.repo_path,
"log",
"--pretty=format:%H|%an|%ae|%at|%s|%b",
"--all",
]
if since:
cmd.append(f"--since={since}")
if until:
cmd.append(f"--until={until}")
try:
result = subprocess.run(
cmd, capture_output=True, text=True, check=True, timeout=30
)
commits = []
# Note: commit bodies containing newlines are split across lines here,
# so only the first body line is reliably captured with this format.
for line in result.stdout.strip().split("\n"):
if not line:
continue
parts = line.split("|", 5)
if len(parts) < 5:
continue
commit_hash, author_name, author_email, timestamp, subject = parts[:5]
body = parts[5] if len(parts) > 5 else ""
commits.append(
{
"hash": commit_hash,
"author": author_name,
"email": author_email,
"timestamp": int(timestamp),
"subject": subject,
"body": body,
"message": f"{subject}\n{body}".strip(),
}
)
return commits
except subprocess.CalledProcessError as e:
logger.error(f"Git log failed: {e}")
return []
except subprocess.TimeoutExpired:
logger.error("Git log timed out")
return []
def ingest_commit(self, commit: Dict[str, Any]) -> Optional[QLMEvent]:
"""
Ingest a single commit into QLM.
Args:
commit: Commit data from get_git_log()
Returns:
Created QLMEvent or None
"""
is_agent = self.is_agent_commit(commit)
# Determine actor
author = commit["author"]
actor_id = (
f"agent-{author.lower().replace(' ', '-')}"
if is_agent
else f"human-{author.lower().replace(' ', '-')}"
)
# TODO: register the actor with QLM if it does not already exist;
# actor_type is computed here for that future registration step.
actor_type = ActorType.AGENT if is_agent else ActorType.HUMAN
# Create event
if is_agent:
# Agent commit = AGENT_EXECUTION
event = self.qlm.record_agent_execution(
agent_id=actor_id,
task_description=commit["subject"],
metadata={
"commit_hash": commit["hash"],
"commit_message": commit["message"],
"timestamp": commit["timestamp"],
"author": author,
},
)
else:
# Human commit = OPERATOR_INTENT (assuming commits reflect intent)
event = self.qlm.record_operator_intent(
intent=commit["subject"],
description=commit["body"],
metadata={
"commit_hash": commit["hash"],
"timestamp": commit["timestamp"],
"author": author,
},
)
logger.info(f"Ingested commit: {commit['hash'][:8]} - {commit['subject']}")
return event
def ingest_recent_commits(self, days: int = 7) -> List[QLMEvent]:
"""
Ingest recent commits into QLM.
Args:
days: Number of days to look back
Returns:
List of created QLMEvents
"""
logger.info(f"Ingesting git commits from last {days} days...")
commits = self.get_git_log(since=f"{days} days ago")
events = []
for commit in commits:
event = self.ingest_commit(commit)
if event:
events.append(event)
logger.info(f"Ingested {len(events)} commits")
return events
def ingest_commit_range(
self, since: str, until: Optional[str] = None
) -> List[QLMEvent]:
"""
Ingest commits in a specific range.
Args:
since: Start date (e.g., "2024-01-01")
until: End date (default: now)
Returns:
List of created QLMEvents
"""
commits = self.get_git_log(since=since, until=until)
events = []
for commit in commits:
event = self.ingest_commit(commit)
if event:
events.append(event)
logger.info(f"Ingested {len(events)} commits from {since} to {until or 'now'}")
return events

qlm_lab/models.py (new file, 370 lines)
@@ -0,0 +1,370 @@
"""
QLM Core Models - Data structures for Quantum Language Model
These models formalize the HI/AI/QI intelligence framework:
- HI (Human Intelligence): Operator actions, intent, judgment, taste, ethics
- AI (Model Intelligence): LLMs, agents, code generation, pattern completion
- QI (Quantum Intelligence): Emergent system behaviors when HI+AI interact in loops
The "quantum" metaphor means:
- Superposition of roles (an agent can be executor AND coordinator)
- Superposition of states (a task can be in_progress AND blocked)
- Superposition of perspectives (same event viewed differently by HI vs AI)
QI emerges when:
1. AI designs deterministic systems
2. Deterministic systems constrain AI behavior
3. Humans orchestrate and interpret the cycle
4. Novel, unpredicted behaviors appear
This is NOT quantum physics. It's a meta-model for describing
intelligence that emerges from feedback loops between humans, AI, and code.
"""
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Dict, List, Optional, Set, Any
from uuid import uuid4
class IntelligenceType(Enum):
"""
The three layers of intelligence in the QLM model.
HI = Human Intelligence (Operator)
AI = Model Intelligence (LLMs, agents)
QI = Quantum Intelligence (emergent system behaviors)
"""
HI = "human_intelligence" # Operator layer: intent, judgment, taste
AI = "model_intelligence" # Agent layer: execution, completion, transformation
QI = "quantum_intelligence" # System layer: emergence, feedback loops, novelty
class ActorType(Enum):
"""Types of actors in the system"""
HUMAN = "human" # The Operator (Alexa) or other humans
AGENT = "agent" # AI agents (LLM-powered)
SYSTEM = "system" # Deterministic systems (git, CI, infrastructure)
class ActorRole(Enum):
"""Roles actors can play"""
OPERATOR = "operator" # Human orchestrator (primary decision maker)
EXECUTOR = "executor" # Performs tasks
COORDINATOR = "coordinator" # Manages other actors
REVIEWER = "reviewer" # Reviews work
MONITOR = "monitor" # Observes and reports
GOVERNOR = "governor" # Enforces policies
class ActorState(Enum):
"""Current state of an actor"""
ACTIVE = "active" # Currently working
IDLE = "idle" # Available but not working
BLOCKED = "blocked" # Wants to work but can't
OFFLINE = "offline" # Not available
@dataclass
class Actor:
"""
An actor in the QLM system.
Actors perform actions that generate QLMEvents.
Actors can be humans (Operator), AI agents, or deterministic systems.
"""
id: str = field(default_factory=lambda: str(uuid4()))
name: str = ""
actor_type: ActorType = ActorType.AGENT
role: ActorRole = ActorRole.EXECUTOR
state: ActorState = ActorState.IDLE
# What this actor can do
capabilities: Set[str] = field(default_factory=set)
# Current activity
current_task_id: Optional[str] = None
# Metadata
created_at: datetime = field(default_factory=datetime.now)
last_active: datetime = field(default_factory=datetime.now)
metadata: Dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> Dict[str, Any]:
return {
"id": self.id,
"name": self.name,
"actor_type": self.actor_type.value,
"role": self.role.value,
"state": self.state.value,
"capabilities": list(self.capabilities),
"current_task_id": self.current_task_id,
"created_at": self.created_at.isoformat(),
"last_active": self.last_active.isoformat(),
"metadata": self.metadata,
}
@dataclass
class IntelligenceLayer:
"""
One of the three intelligence layers (HI, AI, or QI).
Each layer has:
- Actors who perform actions
- Capabilities they can execute
- Metrics about their activity
"""
type: IntelligenceType
actors: Dict[str, Actor] = field(default_factory=dict)
# Layer-level capabilities
capabilities: Set[str] = field(default_factory=set)
# Metrics
total_events: int = 0
active_actors: int = 0
def add_actor(self, actor: Actor) -> None:
"""Add an actor to this intelligence layer"""
self.actors[actor.id] = actor
self.capabilities.update(actor.capabilities)
if actor.state == ActorState.ACTIVE:
self.active_actors += 1
def get_active_actors(self) -> List[Actor]:
"""Get all currently active actors"""
return [a for a in self.actors.values() if a.state == ActorState.ACTIVE]
def to_dict(self) -> Dict[str, Any]:
return {
"type": self.type.value,
"actors": {aid: a.to_dict() for aid, a in self.actors.items()},
"capabilities": list(self.capabilities),
"total_events": self.total_events,
"active_actors": self.active_actors,
}
class EventType(Enum):
"""
Types of events in the QLM system.
Events flow through the system and trigger state transitions.
"""
# HI (Operator) events
OPERATOR_INTENT = "operator_intent" # Operator defines a goal
OPERATOR_APPROVAL = "operator_approval" # Operator approves something
OPERATOR_VETO = "operator_veto" # Operator rejects something
OPERATOR_QUERY = "operator_query" # Operator asks a question
# AI (Agent) events
AGENT_EXECUTION = "agent_execution" # Agent performs a task
AGENT_COMPLETION = "agent_completion" # Agent finishes a task
AGENT_ERROR = "agent_error" # Agent encounters an error
AGENT_HANDOFF = "agent_handoff" # Agent hands off to another agent
# System events
SYSTEM_DEPLOY = "system_deploy" # Code deployed
SYSTEM_TEST = "system_test" # Tests run
SYSTEM_BUILD = "system_build" # Build completed
SYSTEM_ERROR = "system_error" # System error occurred
# QI (Emergent) events
QI_EMERGENCE = "qi_emergence" # Novel behavior detected
QI_FEEDBACK_LOOP = "qi_feedback_loop" # HI+AI feedback detected
QI_PATTERN = "qi_pattern" # Recurring pattern identified
@dataclass
class QLMEvent:
"""
An event in the QLM system.
Events are the fundamental unit of QLM state transitions.
Every action by every actor generates an event.
Events have causality: they can be caused by other events,
creating a causal graph of system behavior.
"""
id: str = field(default_factory=lambda: str(uuid4()))
timestamp: datetime = field(default_factory=datetime.now)
# What layer generated this event
source_layer: IntelligenceType = IntelligenceType.AI
# What actor generated this event
actor_id: str = ""
# What type of event
event_type: EventType = EventType.AGENT_EXECUTION
# Event payload
data: Dict[str, Any] = field(default_factory=dict)
# Causality: what events caused this event
caused_by: List[str] = field(default_factory=list) # List of event IDs
# Related entities
intent_node_id: Optional[str] = None # Link to IntentGraph node
task_id: Optional[str] = None
# Metadata
tags: Set[str] = field(default_factory=set)
metadata: Dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> Dict[str, Any]:
return {
"id": self.id,
"timestamp": self.timestamp.isoformat(),
"source_layer": self.source_layer.value,
"actor_id": self.actor_id,
"event_type": self.event_type.value,
"data": self.data,
"caused_by": self.caused_by,
"intent_node_id": self.intent_node_id,
"task_id": self.task_id,
"tags": list(self.tags),
"metadata": self.metadata,
}
@dataclass
class QIEmergence:
"""
Represents a detected QI (Quantum Intelligence) emergence event.
QI emerges when:
- HI + AI create a feedback loop
- The system exhibits novel, unpredicted behavior
- Deterministic systems evolve in response to AI
- Agents self-organize in unexpected ways
This is the "quantum" moment: when 1 + 1 = 3.
"""
id: str = field(default_factory=lambda: str(uuid4()))
timestamp: datetime = field(default_factory=datetime.now)
# What pattern emerged
pattern_name: str = "" # e.g., "agent_self_correction", "novel_solution"
# What triggered this emergence
trigger_events: List[str] = field(default_factory=list) # Event IDs
# Confidence that this is truly emergent (0.0 to 1.0)
confidence: float = 0.0
# Human-readable explanation
explanation: str = ""
# Operator feedback
operator_validated: Optional[bool] = None # Did Operator confirm this?
operator_notes: str = ""
# Impact
impact_score: float = 0.0 # How significant was this emergence?
def to_dict(self) -> Dict[str, Any]:
return {
"id": self.id,
"timestamp": self.timestamp.isoformat(),
"pattern_name": self.pattern_name,
"trigger_events": self.trigger_events,
"confidence": self.confidence,
"explanation": self.explanation,
"operator_validated": self.operator_validated,
"operator_notes": self.operator_notes,
"impact_score": self.impact_score,
}
@dataclass
class QLMMetrics:
"""
System-level metrics for QLM state.
These metrics help the Operator understand:
- How much activity in each intelligence layer
- How aligned is AI with HI intent
- How much QI emergence is happening
- Overall system health
"""
# Event counts by layer
hi_events: int = 0
ai_events: int = 0
qi_events: int = 0
system_events: int = 0
# Actor counts by layer
hi_actors: int = 0
ai_actors: int = 0
system_actors: int = 0
# Alignment: how much AI follows HI intent (0.0 to 1.0)
hi_ai_alignment: float = 0.0
# Emergence: rate of QI events detected
qi_emergence_rate: float = 0.0
# Feedback loops: number of HI→AI→HI cycles
feedback_loop_count: int = 0
# Operator metrics
operator_approvals: int = 0
operator_vetoes: int = 0
operator_queries: int = 0
# Time range for these metrics
start_time: datetime = field(default_factory=datetime.now)
end_time: datetime = field(default_factory=datetime.now)
def to_dict(self) -> Dict[str, Any]:
return {
"hi_events": self.hi_events,
"ai_events": self.ai_events,
"qi_events": self.qi_events,
"system_events": self.system_events,
"hi_actors": self.hi_actors,
"ai_actors": self.ai_actors,
"system_actors": self.system_actors,
"hi_ai_alignment": self.hi_ai_alignment,
"qi_emergence_rate": self.qi_emergence_rate,
"feedback_loop_count": self.feedback_loop_count,
"operator_approvals": self.operator_approvals,
"operator_vetoes": self.operator_vetoes,
"operator_queries": self.operator_queries,
"start_time": self.start_time.isoformat(),
"end_time": self.end_time.isoformat(),
}
# Known QI emergence patterns
QI_PATTERNS = {
"agent_self_correction": {
"description": "Agent detected its own error and corrected without HI intervention",
"trigger": "AGENT_ERROR followed by AGENT_EXECUTION with same task_id",
"significance": "high",
},
"novel_solution": {
"description": "Agent found a solution not in training data or prompts",
"trigger": "AGENT_COMPLETION with novel approach indicator",
"significance": "very_high",
},
"emergent_collaboration": {
"description": "Agents self-organized into collaboration pattern",
"trigger": "Multiple AGENT_HANDOFF events forming new pattern",
"significance": "high",
},
"operator_feedback_loop": {
"description": "HI intent → AI execution → HI approval → refined intent",
"trigger": "OPERATOR_INTENT → AGENT_COMPLETION → OPERATOR_APPROVAL → OPERATOR_INTENT",
"significance": "medium",
},
"system_adaptation": {
"description": "Deterministic system evolved in response to AI behavior",
"trigger": "AGENT_EXECUTION → SYSTEM_ERROR → AGENT_EXECUTION (different approach)",
"significance": "high",
},
}
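Events chain causally through `caused_by`, forming the causal graph described in the `QLMEvent` docstring. A minimal standalone sketch with a trimmed-down stand-in (the field subset is illustrative, not the full dataclass):

```python
from dataclasses import dataclass, field
from typing import List
from uuid import uuid4

@dataclass
class MiniEvent:
    # Stand-in for QLMEvent: just identity and causality
    event_type: str
    caused_by: List[str] = field(default_factory=list)
    id: str = field(default_factory=lambda: str(uuid4()))

intent = MiniEvent(event_type="operator_intent")
execution = MiniEvent(event_type="agent_execution", caused_by=[intent.id])
completion = MiniEvent(event_type="agent_completion", caused_by=[execution.id])

# Walk the causal chain from the completion back to the root intent
lookup = {e.id: e for e in (intent, execution, completion)}
chain = [completion.event_type]
cur = completion
while cur.caused_by:
    cur = lookup[cur.caused_by[0]]
    chain.append(cur.event_type)
print(chain)  # ['agent_completion', 'agent_execution', 'operator_intent']
```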

qlm_lab/state.py (new file, 510 lines)
@@ -0,0 +1,510 @@
"""
QLM State Management - The core state machine
QLMState represents the complete state of the Quantum Intelligence system at a point in time.
State includes:
- All intelligence layers (HI, AI, QI)
- All actors and their current state
- Event history
- Detected QI emergences
- Metrics
State transitions happen when events are processed.
Each event can trigger:
- Actor state changes
- New QI emergence detection
- Metric updates
- Causal graph updates
"""
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any
import json
from qlm_lab.models import (
IntelligenceType,
IntelligenceLayer,
Actor,
ActorType,
ActorRole,
ActorState,
QLMEvent,
EventType,
QIEmergence,
QLMMetrics,
QI_PATTERNS,
)
@dataclass
class StateTransition:
"""
Represents a state transition caused by an event.
This allows introspection: "What changed when event X happened?"
"""
event_id: str
timestamp: datetime
before_snapshot: Dict[str, Any]
after_snapshot: Dict[str, Any]
changes: List[str] # Human-readable list of changes
def to_dict(self) -> Dict[str, Any]:
return {
"event_id": self.event_id,
"timestamp": self.timestamp.isoformat(),
"before_snapshot": self.before_snapshot,
"after_snapshot": self.after_snapshot,
"changes": self.changes,
}
class QLMState:
"""
The complete state of the Quantum Intelligence system.
This is the "brain" of QLM. It tracks everything:
- All intelligence layers
- All actors
- All events
- All emergences
- All metrics
Key methods:
- ingest_event(): Process a new event and update state
- query(): Answer questions about current state
- explain_transition(): Explain why state changed
- detect_qi_emergence(): Find emergent patterns
- calculate_metrics(): Compute system metrics
"""
def __init__(self, intent_graph=None, agent_coordinator=None):
"""
Initialize QLM state.
Can integrate with existing cognitive systems:
- intent_graph: From cognitive.intent_graph
- agent_coordinator: From cognitive.agent_coordination
"""
# Integration with existing cognitive layer
self.intent_graph = intent_graph
self.agent_coordinator = agent_coordinator
# Intelligence layers
self.layers: Dict[IntelligenceType, IntelligenceLayer] = {
IntelligenceType.HI: IntelligenceLayer(type=IntelligenceType.HI),
IntelligenceType.AI: IntelligenceLayer(type=IntelligenceType.AI),
IntelligenceType.QI: IntelligenceLayer(type=IntelligenceType.QI),
}
# Event history (ordered by timestamp)
self.events: List[QLMEvent] = []
# Detected QI emergences
self.emergences: List[QIEmergence] = []
# State transition history
self.transitions: List[StateTransition] = []
# Metrics
self.metrics = QLMMetrics()
# Timestamps
self.created_at = datetime.now()
self.updated_at = datetime.now()
def register_actor(self, actor: Actor) -> None:
"""
Register an actor in the appropriate intelligence layer.
Human → HI layer
Agent → AI layer
System → (tracked but not in a specific layer)
"""
if actor.actor_type == ActorType.HUMAN:
self.layers[IntelligenceType.HI].add_actor(actor)
self.metrics.hi_actors += 1
elif actor.actor_type == ActorType.AGENT:
self.layers[IntelligenceType.AI].add_actor(actor)
self.metrics.ai_actors += 1
elif actor.actor_type == ActorType.SYSTEM:
# System actors are tracked but don't belong to HI/AI/QI layers
self.metrics.system_actors += 1
def ingest_event(self, event: QLMEvent) -> None:
"""
Ingest a new event and update state.
This is the core state transition function.
Steps:
1. Capture before-state snapshot
2. Add event to history
3. Update actor states
4. Update layer metrics
5. Detect QI emergence
6. Update operator metrics
7. Capture after-state snapshot
8. Record transition
"""
# 1. Before snapshot
before = self._create_snapshot()
# 2. Add event
self.events.append(event)
# 3. Update actor states
self._update_actor_state(event)
# 4. Update layer metrics
if event.source_layer == IntelligenceType.HI:
self.layers[IntelligenceType.HI].total_events += 1
self.metrics.hi_events += 1
elif event.source_layer == IntelligenceType.AI:
self.layers[IntelligenceType.AI].total_events += 1
self.metrics.ai_events += 1
elif event.source_layer == IntelligenceType.QI:
self.layers[IntelligenceType.QI].total_events += 1
self.metrics.qi_events += 1
# 5. Detect QI emergence
emergence = self._detect_qi_emergence(event)
if emergence:
self.emergences.append(emergence)
self.metrics.qi_events += 1
# 6. Update operator metrics
if event.event_type == EventType.OPERATOR_APPROVAL:
self.metrics.operator_approvals += 1
elif event.event_type == EventType.OPERATOR_VETO:
self.metrics.operator_vetoes += 1
elif event.event_type == EventType.OPERATOR_QUERY:
self.metrics.operator_queries += 1
# 7. After snapshot
after = self._create_snapshot()
# 8. Record transition
changes = self._compute_changes(before, after)
transition = StateTransition(
event_id=event.id,
timestamp=event.timestamp,
before_snapshot=before,
after_snapshot=after,
changes=changes,
)
self.transitions.append(transition)
self.updated_at = datetime.now()
def _update_actor_state(self, event: QLMEvent) -> None:
"""Update actor state based on event"""
actor_id = event.actor_id
# Find actor in any layer
actor = None
for layer in self.layers.values():
if actor_id in layer.actors:
actor = layer.actors[actor_id]
break
if not actor:
return
# Update actor state based on event type
if event.event_type in [EventType.AGENT_EXECUTION, EventType.OPERATOR_INTENT]:
actor.state = ActorState.ACTIVE
actor.current_task_id = event.task_id
elif event.event_type in [EventType.AGENT_COMPLETION]:
actor.state = ActorState.IDLE
actor.current_task_id = None
elif event.event_type == EventType.AGENT_ERROR:
actor.state = ActorState.BLOCKED
actor.last_active = event.timestamp
def _detect_qi_emergence(self, event: QLMEvent) -> Optional[QIEmergence]:
"""
Detect if this event (combined with recent events) represents QI emergence.
This is where the magic happens: detecting when 1 + 1 = 3.
"""
# Look at recent events (last 10)
recent_events = self.events[-10:]
# Check each known QI pattern
for pattern_name, pattern_def in QI_PATTERNS.items():
if self._matches_pattern(recent_events, pattern_name, pattern_def):
return QIEmergence(
pattern_name=pattern_name,
trigger_events=[e.id for e in recent_events[-3:]], # Last 3 events
confidence=0.8, # TODO: Implement proper confidence scoring
explanation=pattern_def["description"],
impact_score=self._calculate_impact(pattern_name),
)
return None
def _matches_pattern(
self, events: List[QLMEvent], pattern_name: str, pattern_def: Dict
) -> bool:
"""Check if a sequence of events matches a QI pattern"""
if pattern_name == "agent_self_correction":
# Look for: AGENT_ERROR followed by AGENT_EXECUTION with same task
for i in range(len(events) - 1):
if (
events[i].event_type == EventType.AGENT_ERROR
and events[i + 1].event_type == EventType.AGENT_EXECUTION
and events[i].task_id == events[i + 1].task_id
):
return True
elif pattern_name == "operator_feedback_loop":
# Look for: OPERATOR_INTENT → AGENT_COMPLETION → OPERATOR_APPROVAL
for i in range(len(events) - 2):
if (
events[i].event_type == EventType.OPERATOR_INTENT
and events[i + 1].event_type == EventType.AGENT_COMPLETION
and events[i + 2].event_type == EventType.OPERATOR_APPROVAL
):
self.metrics.feedback_loop_count += 1
return True
# TODO: Implement other pattern detectors
return False
def _calculate_impact(self, pattern_name: str) -> float:
"""Calculate impact score for an emergence pattern"""
significance_map = {
"very_high": 1.0,
"high": 0.8,
"medium": 0.5,
"low": 0.3,
}
pattern_def = QI_PATTERNS.get(pattern_name, {})
significance = pattern_def.get("significance", "medium")
return significance_map.get(significance, 0.5)
def _create_snapshot(self) -> Dict[str, Any]:
"""Create a snapshot of current state"""
return {
"timestamp": datetime.now().isoformat(),
"total_events": len(self.events),
"total_emergences": len(self.emergences),
"metrics": self.metrics.to_dict(),
}
def _compute_changes(
self, before: Dict[str, Any], after: Dict[str, Any]
) -> List[str]:
"""Compute human-readable changes between states"""
changes = []
if after["total_events"] > before["total_events"]:
changes.append(f"New event added (total: {after['total_events']})")
if after["total_emergences"] > before["total_emergences"]:
changes.append(
f"QI emergence detected (total: {after['total_emergences']})"
)
# Compare metrics
before_metrics = before["metrics"]
after_metrics = after["metrics"]
if after_metrics["hi_events"] > before_metrics["hi_events"]:
changes.append("Operator activity increased")
if after_metrics["ai_events"] > before_metrics["ai_events"]:
changes.append("Agent activity increased")
return changes
def query(self, query_type: str, **kwargs) -> Any:
"""
Query the QLM state.
Examples:
- query("active_actors")
- query("events_by_type", event_type=EventType.OPERATOR_INTENT)
- query("emergences_by_pattern", pattern="agent_self_correction")
- query("metrics_summary")
"""
if query_type == "active_actors":
active = []
for layer in self.layers.values():
active.extend(layer.get_active_actors())
return active
elif query_type == "events_by_type":
event_type = kwargs.get("event_type")
return [e for e in self.events if e.event_type == event_type]
elif query_type == "events_by_actor":
actor_id = kwargs.get("actor_id")
return [e for e in self.events if e.actor_id == actor_id]
elif query_type == "events_in_timerange":
start = kwargs.get("start", datetime.now() - timedelta(days=1))
end = kwargs.get("end", datetime.now())
return [e for e in self.events if start <= e.timestamp <= end]
elif query_type == "emergences_by_pattern":
pattern = kwargs.get("pattern")
return [em for em in self.emergences if em.pattern_name == pattern]
elif query_type == "metrics_summary":
return self.metrics.to_dict()
elif query_type == "recent_transitions":
limit = kwargs.get("limit", 10)
return self.transitions[-limit:]
return None
def explain_transition(self, event_id: str) -> Optional[StateTransition]:
"""Explain what happened when a specific event occurred"""
for transition in self.transitions:
if transition.event_id == event_id:
return transition
return None
def calculate_alignment(self) -> float:
"""
Calculate HI-AI alignment.
This measures: "Is AI doing what the Operator intended?"
Approach:
- Look at OPERATOR_INTENT events
- Look at subsequent AGENT_COMPLETION events
- Check if agent actions align with intent
- Return alignment score (0.0 to 1.0)
"""
operator_intents = [
e for e in self.events if e.event_type == EventType.OPERATOR_INTENT
]
if not operator_intents:
return 1.0 # No intents = perfect alignment (vacuous truth)
aligned_count = 0
for intent_event in operator_intents:
# Find completions after this intent
completions_after = [
e
for e in self.events
if e.event_type == EventType.AGENT_COMPLETION
and e.timestamp > intent_event.timestamp
and e.intent_node_id == intent_event.intent_node_id
]
if completions_after:
# Check if any completion was approved
approvals = [
e
for e in self.events
if e.event_type == EventType.OPERATOR_APPROVAL
and e.timestamp > completions_after[0].timestamp
and e.intent_node_id == intent_event.intent_node_id
]
if approvals:
aligned_count += 1
alignment = aligned_count / len(operator_intents) if operator_intents else 1.0
self.metrics.hi_ai_alignment = alignment
return alignment
def summarize_for_operator(self, days: int = 7) -> str:
"""
Create a human-readable summary for the Operator.
This is what Alexa sees when she asks: "What happened this week?"
"""
cutoff = datetime.now() - timedelta(days=days)
recent_events = [e for e in self.events if e.timestamp >= cutoff]
# Count events by type
event_counts = {}
for event in recent_events:
event_type = event.event_type.value
event_counts[event_type] = event_counts.get(event_type, 0) + 1
# Get emergences
recent_emergences = [
em for em in self.emergences if em.timestamp >= cutoff
]
# Calculate alignment
alignment = self.calculate_alignment()
summary = f"""
QLM State Summary (Last {days} Days)
{'=' * 50}
📊 Activity Overview:
Total Events: {len(recent_events)}
HI (Operator) Events: {sum(1 for e in recent_events if e.source_layer == IntelligenceType.HI)}
AI (Agent) Events: {sum(1 for e in recent_events if e.source_layer == IntelligenceType.AI)}
System Events: {sum(1 for e in recent_events if 'system' in e.event_type.value)}
✨ QI Emergence:
Emergent Patterns Detected: {len(recent_emergences)}
"""
if recent_emergences:
summary += " Notable Emergences:\n"
for em in recent_emergences[:5]: # Top 5
summary += f" - {em.pattern_name}: {em.explanation}\n"
summary += f"""
🎯 Alignment:
HI-AI Alignment Score: {alignment:.2%}
Operator Approvals: {self.metrics.operator_approvals}
Operator Vetoes: {self.metrics.operator_vetoes}
Feedback Loops: {self.metrics.feedback_loop_count}
👥 Active Actors:
HI Layer: {len(self.layers[IntelligenceType.HI].get_active_actors())}
AI Layer: {len(self.layers[IntelligenceType.AI].get_active_actors())}
📈 Top Event Types:
"""
for event_type, count in sorted(
event_counts.items(), key=lambda x: x[1], reverse=True
)[:5]:
summary += f" {event_type}: {count}\n"
return summary
def export_json(self, file_path: str) -> None:
"""Export QLM state to JSON"""
data = {
"created_at": self.created_at.isoformat(),
"updated_at": self.updated_at.isoformat(),
"layers": {
layer_type.value: layer.to_dict()
for layer_type, layer in self.layers.items()
},
"events": [e.to_dict() for e in self.events],
"emergences": [em.to_dict() for em in self.emergences],
"metrics": self.metrics.to_dict(),
}
with open(file_path, "w") as f:
json.dump(data, f, indent=2)
def import_json(self, file_path: str) -> None:
"""Import QLM state from JSON"""
with open(file_path, "r") as f:
data = json.load(f)
# TODO: Implement full deserialization of layers, events, and metrics
raise NotImplementedError("QLM state import is not yet implemented")
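The alignment formula in `calculate_alignment` reduces to a simple ratio: intents that received a completion followed by an operator approval, divided by all intents (defaulting to 1.0 when nothing was requested). A standalone sketch of that logic, using plain dicts as stand-ins for `QLMEvent` objects (the field names here are illustrative, not the real model's):

```python
from datetime import datetime, timedelta

def alignment_score(events):
    """Fraction of operator intents later followed by a completion
    and then an operator approval on the same intent id."""
    intents = [e for e in events if e["type"] == "operator_intent"]
    if not intents:
        return 1.0  # vacuous truth: nothing requested, nothing misaligned
    aligned = 0
    for intent in intents:
        # Completions for this intent that happened after it was stated
        done = [e for e in events
                if e["type"] == "agent_completion"
                and e["ts"] > intent["ts"]
                and e["intent"] == intent["intent"]]
        # An approval counts only if it follows the first completion
        approved = done and any(
            e["type"] == "operator_approval"
            and e["ts"] > done[0]["ts"]
            and e["intent"] == intent["intent"]
            for e in events)
        if approved:
            aligned += 1
    return aligned / len(intents)

t0 = datetime(2025, 1, 1)
events = [
    {"type": "operator_intent", "ts": t0, "intent": "intent-1"},
    {"type": "agent_completion", "ts": t0 + timedelta(minutes=5), "intent": "intent-1"},
    {"type": "operator_approval", "ts": t0 + timedelta(minutes=10), "intent": "intent-1"},
    {"type": "operator_intent", "ts": t0 + timedelta(hours=1), "intent": "intent-2"},
]
print(alignment_score(events))  # → 0.5 (one of two intents approved)
```

Note the design choice this inherits from the class above: an intent with a completion but no approval scores zero, so the metric measures confirmed alignment, not merely activity.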
qlm_lab/visualization.py (new file, 402 lines)
@@ -0,0 +1,402 @@
"""
QLM Visualization - Visual tools for understanding QLM state
Provides:
- Event timeline view
- Actor activity graph
- QI emergence patterns
- Alignment trends over time
"""
from datetime import datetime, timedelta
from typing import List, Optional
import json
try:
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.patches import Rectangle
MATPLOTLIB_AVAILABLE = True
except ImportError:
MATPLOTLIB_AVAILABLE = False
try:
import networkx as nx
NETWORKX_AVAILABLE = True
except ImportError:
NETWORKX_AVAILABLE = False
from qlm_lab.api import QLMInterface
from qlm_lab.models import IntelligenceType, EventType, QLMEvent
class QLMVisualizer:
"""
Visualize QLM state and events.
Usage:
viz = QLMVisualizer(qlm_interface)
viz.plot_event_timeline()
viz.plot_actor_graph()
viz.plot_alignment_over_time()
"""
def __init__(self, qlm: QLMInterface):
"""
Args:
qlm: QLMInterface instance
"""
self.qlm = qlm
def plot_event_timeline(self, save_path: Optional[str] = None) -> None:
"""
Plot events on a timeline colored by intelligence layer.
Args:
save_path: Optional path to save figure
"""
if not MATPLOTLIB_AVAILABLE:
print("❌ matplotlib not available. Install with: pip install matplotlib")
return
events = self.qlm.state.events
if not events:
print("No events to visualize")
return
fig, ax = plt.subplots(figsize=(14, 6))
# Group events by layer
layer_events = {
IntelligenceType.HI: [],
IntelligenceType.AI: [],
IntelligenceType.QI: [],
}
for event in events:
layer_events[event.source_layer].append(event)
# Plot each layer
colors = {
IntelligenceType.HI: "#FF6B6B", # Red for HI (Operator)
IntelligenceType.AI: "#4ECDC4", # Teal for AI (Agents)
IntelligenceType.QI: "#FFE66D", # Yellow for QI (Emergence)
}
y_positions = {
IntelligenceType.HI: 3,
IntelligenceType.AI: 2,
IntelligenceType.QI: 1,
}
for layer, events_in_layer in layer_events.items():
if not events_in_layer:
continue
times = [e.timestamp for e in events_in_layer]
y = [y_positions[layer]] * len(times)
ax.scatter(
times, y, c=colors[layer], s=100, alpha=0.6, label=layer.value, zorder=3
)
# Format
ax.set_yticks([1, 2, 3])
ax.set_yticklabels(["QI (Emergence)", "AI (Agents)", "HI (Operator)"])
ax.set_xlabel("Time")
ax.set_title("QLM Event Timeline")
ax.grid(True, alpha=0.3)
ax.legend()
# Format x-axis dates
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
plt.xticks(rotation=45)
plt.tight_layout()
if save_path:
plt.savefig(save_path, dpi=150)
print(f"Timeline saved to: {save_path}")
else:
plt.show()
def plot_actor_graph(self, save_path: Optional[str] = None) -> None:
"""
Plot actor interaction graph showing agent handoffs and coordination.
Args:
save_path: Optional path to save figure
"""
if not MATPLOTLIB_AVAILABLE or not NETWORKX_AVAILABLE:
print(
"❌ matplotlib or networkx not available. "
"Install with: pip install matplotlib networkx"
)
return
# Build graph from events
G = nx.DiGraph()
# Add actors as nodes
for layer in self.qlm.state.layers.values():
for actor in layer.actors.values():
G.add_node(
actor.id,
label=actor.name,
type=actor.actor_type.value,
role=actor.role.value,
)
# Add handoffs as edges
handoff_events = [
e for e in self.qlm.state.events if e.event_type == EventType.AGENT_HANDOFF
]
for event in handoff_events:
from_agent = event.actor_id
to_agent = event.data.get("to_agent")
if to_agent and from_agent in G and to_agent in G:
if G.has_edge(from_agent, to_agent):
G[from_agent][to_agent]["weight"] += 1
else:
G.add_edge(from_agent, to_agent, weight=1)
if not G.nodes():
print("No actors to visualize")
return
# Plot
fig, ax = plt.subplots(figsize=(12, 8))
# Layout
pos = nx.spring_layout(G, k=2, iterations=50)
# Node colors by type
node_colors = []
for node in G.nodes():
node_type = G.nodes[node].get("type", "agent")
if node_type == "human":
node_colors.append("#FF6B6B") # Red for humans
elif node_type == "agent":
node_colors.append("#4ECDC4") # Teal for agents
else:
node_colors.append("#95E1D3") # Light green for system
# Draw
nx.draw_networkx_nodes(
G, pos, node_color=node_colors, node_size=2000, alpha=0.7, ax=ax
)
nx.draw_networkx_labels(
G, pos, {n: G.nodes[n].get("label", n) for n in G.nodes()}, font_size=10, ax=ax
)
# Draw edges with width based on weight
edges = G.edges()
weights = [G[u][v].get("weight", 1) for u, v in edges]
nx.draw_networkx_edges(
G, pos, width=[w * 2 for w in weights], alpha=0.5, arrows=True, arrowsize=20, ax=ax
)
ax.set_title("QLM Actor Interaction Graph")
ax.axis("off")
plt.tight_layout()
if save_path:
plt.savefig(save_path, dpi=150)
print(f"Actor graph saved to: {save_path}")
else:
plt.show()
def plot_alignment_over_time(
self, window_size: int = 10, save_path: Optional[str] = None
) -> None:
"""
Plot HI-AI alignment trend over time.
Args:
window_size: Number of events per alignment calculation window
save_path: Optional path to save figure
"""
if not MATPLOTLIB_AVAILABLE:
print("❌ matplotlib not available. Install with: pip install matplotlib")
return
events = self.qlm.state.events
if len(events) < window_size:
print(f"Not enough events (need at least {window_size})")
return
# Calculate alignment in sliding windows
alignments = []
timestamps = []
# Simplified proxy: score each window by its approval/veto ratio
# rather than tracing the full intent -> completion -> approval chain
step = max(1, window_size // 2)  # half-overlapping windows; keep step >= 1
for i in range(window_size, len(events), step):
window_events = events[i - window_size : i]
timestamp = window_events[-1].timestamp
# Count approvals vs vetoes in window
approvals = sum(
1
for e in window_events
if e.event_type == EventType.OPERATOR_APPROVAL
)
vetoes = sum(
1 for e in window_events if e.event_type == EventType.OPERATOR_VETO
)
total_feedback = approvals + vetoes
if total_feedback > 0:
alignment = approvals / total_feedback
alignments.append(alignment)
timestamps.append(timestamp)
if not alignments:
print("No alignment data to plot")
return
# Plot
fig, ax = plt.subplots(figsize=(12, 6))
ax.plot(timestamps, alignments, marker="o", linewidth=2, markersize=8)
ax.axhline(y=0.8, color="g", linestyle="--", alpha=0.5, label="Good (80%)")
ax.axhline(y=0.6, color="orange", linestyle="--", alpha=0.5, label="Warning (60%)")
ax.axhline(y=0.4, color="r", linestyle="--", alpha=0.5, label="Poor (40%)")
ax.set_xlabel("Time")
ax.set_ylabel("HI-AI Alignment")
ax.set_title("HI-AI Alignment Over Time")
ax.set_ylim(0, 1)
ax.grid(True, alpha=0.3)
ax.legend()
# Format x-axis dates
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
plt.xticks(rotation=45)
plt.tight_layout()
if save_path:
plt.savefig(save_path, dpi=150)
print(f"Alignment plot saved to: {save_path}")
else:
plt.show()
def plot_emergence_patterns(self, save_path: Optional[str] = None) -> None:
"""
Plot QI emergence patterns detected.
Args:
save_path: Optional path to save figure
"""
if not MATPLOTLIB_AVAILABLE:
print("❌ matplotlib not available. Install with: pip install matplotlib")
return
emergences = self.qlm.state.emergences
if not emergences:
print("No emergence patterns to visualize")
return
# Count patterns
pattern_counts = {}
for em in emergences:
pattern_counts[em.pattern_name] = pattern_counts.get(em.pattern_name, 0) + 1
# Plot
fig, ax = plt.subplots(figsize=(10, 6))
patterns = list(pattern_counts.keys())
counts = list(pattern_counts.values())
bars = ax.barh(patterns, counts, color="#FFE66D", alpha=0.8)
# Add value labels
for bar in bars:
width = bar.get_width()
ax.text(
width,
bar.get_y() + bar.get_height() / 2,
f" {int(width)}",
va="center",
fontsize=10,
)
ax.set_xlabel("Count")
ax.set_title("QI Emergence Patterns Detected")
ax.grid(True, alpha=0.3, axis="x")
plt.tight_layout()
if save_path:
plt.savefig(save_path, dpi=150)
print(f"Emergence patterns saved to: {save_path}")
else:
plt.show()
def export_dashboard(self, output_dir: str = ".") -> None:
"""
Export complete visualization dashboard.
Args:
output_dir: Directory to save visualizations
"""
print("Generating QLM visualization dashboard...")
self.plot_event_timeline(save_path=f"{output_dir}/qlm_timeline.png")
self.plot_actor_graph(save_path=f"{output_dir}/qlm_actors.png")
self.plot_alignment_over_time(save_path=f"{output_dir}/qlm_alignment.png")
self.plot_emergence_patterns(save_path=f"{output_dir}/qlm_emergence.png")
print(f"\n✅ Dashboard exported to: {output_dir}/")
print("Files:")
print(" - qlm_timeline.png")
print(" - qlm_actors.png")
print(" - qlm_alignment.png")
print(" - qlm_emergence.png")
def demo_visualization():
"""Demo visualization with sample data"""
from qlm_lab.models import ActorRole
print("QLM Visualization Demo")
print("=" * 60)
# Create sample QLM state
qlm = QLMInterface()
# Register agents
qlm.register_agent("agent-1", "Coder", ActorRole.CODER)
qlm.register_agent("agent-2", "Reviewer", ActorRole.REVIEWER)
qlm.register_agent("agent-3", "Tester", ActorRole.TESTER)
# Simulate activity
qlm.record_operator_intent("Build feature X", intent_node_id="intent-1")
qlm.record_agent_execution("agent-1", "Implement feature", "task-1", "intent-1")
qlm.record_agent_completion("agent-1", "task-1", True)
qlm.record_agent_handoff("agent-1", "agent-2", "task-1", "Ready for review")
qlm.record_agent_execution("agent-2", "Review code", "task-2")
qlm.record_agent_completion("agent-2", "task-2", True)
qlm.record_operator_approval("Feature implementation", intent_node_id="intent-1")
# Generate visualizations
viz = QLMVisualizer(qlm)
if MATPLOTLIB_AVAILABLE:
print("\nGenerating visualizations...")
viz.export_dashboard(".")
else:
print("\n⚠️ Install visualization dependencies:")
print(" pip install matplotlib networkx")
if __name__ == "__main__":
demo_visualization()
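The windowing pass in `plot_alignment_over_time` is the only non-plotting logic in the file: approvals divided by approvals-plus-vetoes, computed over half-overlapping windows. That logic can be exercised in isolation, with plain strings standing in for the `EventType` members (the function name and return shape here are illustrative, not part of the QLM API):

```python
def windowed_alignment(event_types, window_size=10):
    """Return (end_index, score) for each half-overlapping window
    that contains at least one approval or veto."""
    step = max(1, window_size // 2)  # guard against window_size == 1
    out = []
    for i in range(window_size, len(event_types), step):
        window = event_types[i - window_size:i]
        approvals = window.count("operator_approval")
        vetoes = window.count("operator_veto")
        if approvals + vetoes:
            out.append((i, approvals / (approvals + vetoes)))
    return out

# Two rounds of: 4 executions, 1 approval, 4 executions, 1 veto
stream = (["agent_execution"] * 4 + ["operator_approval"]
          + ["agent_execution"] * 4 + ["operator_veto"]) * 2
print(windowed_alignment(stream))  # → [(10, 0.5), (15, 0.5)]
```

Windows with no operator feedback are skipped entirely rather than scored, which is why sparse-feedback periods leave gaps in the plotted trend instead of dragging it toward zero.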

tests/test_qlm_core.py (new file, 268 lines)
@@ -0,0 +1,268 @@
"""
Tests for QLM core functionality
"""
import pytest
from datetime import datetime, timedelta
from qlm_lab.api import QLMInterface
from qlm_lab.models import (
Actor,
ActorType,
ActorRole,
ActorState,
IntelligenceType,
EventType,
QLMEvent,
)
class TestQLMCore:
"""Test core QLM functionality"""
def test_qlm_initialization(self):
"""Test QLM initializes correctly"""
qlm = QLMInterface()
assert qlm.state is not None
assert len(qlm.state.layers) == 3 # HI, AI, QI
assert len(qlm.state.events) == 0 # No events yet
assert qlm.operator.actor_type == ActorType.HUMAN
def test_register_agent(self):
"""Test agent registration"""
qlm = QLMInterface()
agent = qlm.register_agent(
agent_id="test-agent",
name="TestAgent",
role=ActorRole.EXECUTOR,
capabilities=["testing", "debugging"],
)
assert agent.id == "test-agent"
assert agent.name == "TestAgent"
assert agent.role == ActorRole.EXECUTOR
assert "testing" in agent.capabilities
assert "debugging" in agent.capabilities
def test_record_operator_intent(self):
"""Test recording Operator intent"""
qlm = QLMInterface()
event = qlm.record_operator_intent(
intent="Test intent", description="Test description"
)
assert event.source_layer == IntelligenceType.HI
assert event.event_type == EventType.OPERATOR_INTENT
assert event.data["intent"] == "Test intent"
assert len(qlm.state.events) == 1
def test_record_agent_execution(self):
"""Test recording agent execution"""
qlm = QLMInterface()
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
event = qlm.record_agent_execution(
agent_id="agent-1",
task_description="Test task",
task_id="task-1",
)
assert event.source_layer == IntelligenceType.AI
assert event.event_type == EventType.AGENT_EXECUTION
assert event.actor_id == "agent-1"
assert event.task_id == "task-1"
def test_record_agent_completion(self):
"""Test recording agent completion"""
qlm = QLMInterface()
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
event = qlm.record_agent_completion(
agent_id="agent-1",
task_id="task-1",
success=True,
result={"status": "done"},
)
assert event.event_type == EventType.AGENT_COMPLETION
assert event.data["success"] is True
assert event.data["result"]["status"] == "done"
def test_alignment_calculation(self):
"""Test HI-AI alignment calculation"""
qlm = QLMInterface()
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
# Create intent
qlm.record_operator_intent("Test intent", intent_node_id="intent-1")
# Agent executes
qlm.record_agent_execution(
"agent-1", "Do task", "task-1", intent_node_id="intent-1"
)
qlm.record_agent_completion("agent-1", "task-1", True)
# Operator approves
qlm.record_operator_approval(
"Good work", intent_node_id="intent-1", task_id="task-1"
)
alignment = qlm.get_alignment_score()
assert alignment == 1.0 # Perfect alignment
def test_qi_emergence_detection(self):
"""Test QI emergence detection"""
qlm = QLMInterface()
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
# Simulate self-correction pattern
qlm.record_agent_execution("agent-1", "Task", "task-1")
qlm.record_agent_error("agent-1", "task-1", "Error occurred")
qlm.record_agent_execution("agent-1", "Task retry", "task-1")
qlm.record_agent_completion("agent-1", "task-1", True)
# Check for emergence
emergences = qlm.get_recent_emergences()
# Emergence detection may or may not trigger depending on pattern matching
# This test mainly ensures the system doesn't crash
assert isinstance(emergences, list)
def test_query_events_by_type(self):
"""Test querying events by type"""
qlm = QLMInterface()
# Record various events
qlm.record_operator_intent("Intent 1")
qlm.record_operator_intent("Intent 2")
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
qlm.record_agent_execution("agent-1", "Task", "task-1")
# Query
intent_events = qlm.get_events_by_type(EventType.OPERATOR_INTENT)
assert len(intent_events) == 2
exec_events = qlm.get_events_by_type(EventType.AGENT_EXECUTION)
assert len(exec_events) == 1
def test_query_events_in_timerange(self):
"""Test querying events in time range"""
qlm = QLMInterface()
now = datetime.now()
yesterday = now - timedelta(days=1)
qlm.record_operator_intent("Recent intent")
events = qlm.get_events_in_timerange(yesterday, now)
assert len(events) >= 1
def test_operator_ask_interface(self):
"""Test natural language query interface"""
qlm = QLMInterface()
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
qlm.record_operator_intent("Test")
qlm.record_agent_execution("agent-1", "Task", "task-1")
qlm.record_agent_completion("agent-1", "task-1", True)
qlm.record_operator_approval("Good", task_id="task-1")
# Test queries
response = qlm.ask("What's the status?")
assert "QLM State Summary" in response
response = qlm.ask("Are we aligned with my intent?")
assert "Alignment" in response
def test_export_import_state(self, tmp_path):
"""Test state export and import"""
qlm = QLMInterface()
# Create some state
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
qlm.record_operator_intent("Test intent")
qlm.record_agent_execution("agent-1", "Task", "task-1")
# Export
export_path = tmp_path / "qlm_state.json"
qlm.export_state(str(export_path))
assert export_path.exists()
# Check content
import json
with open(export_path, "r") as f:
data = json.load(f)
assert "layers" in data
assert "events" in data
assert "metrics" in data
def test_metrics_tracking(self):
"""Test metrics are tracked correctly"""
qlm = QLMInterface()
qlm.register_agent("agent-1", "Agent1", ActorRole.EXECUTOR)
# Record events
qlm.record_operator_intent("Intent")
qlm.record_agent_execution("agent-1", "Task", "task-1")
qlm.record_operator_approval("Good", task_id="task-1")
metrics = qlm.state.metrics
assert metrics.hi_events >= 2 # Intent + approval
assert metrics.ai_events >= 1 # Execution
assert metrics.operator_approvals == 1
class TestQLMModels:
"""Test QLM data models"""
def test_actor_creation(self):
"""Test Actor model"""
actor = Actor(
id="test-actor",
name="TestActor",
actor_type=ActorType.AGENT,
role=ActorRole.EXECUTOR,
state=ActorState.ACTIVE,
)
assert actor.id == "test-actor"
assert actor.actor_type == ActorType.AGENT
assert actor.role == ActorRole.EXECUTOR
assert actor.state == ActorState.ACTIVE
# Test serialization
data = actor.to_dict()
assert data["id"] == "test-actor"
assert data["actor_type"] == "agent"
def test_event_creation(self):
"""Test QLMEvent model"""
event = QLMEvent(
source_layer=IntelligenceType.AI,
actor_id="agent-1",
event_type=EventType.AGENT_EXECUTION,
task_id="task-1",
data={"test": "data"},
)
assert event.source_layer == IntelligenceType.AI
assert event.event_type == EventType.AGENT_EXECUTION
assert event.task_id == "task-1"
assert event.data["test"] == "data"
# Test serialization
data = event.to_dict()
assert data["source_layer"] == "model_intelligence"
assert data["event_type"] == "agent_execution"
if __name__ == "__main__":
pytest.main([__file__, "-v"])
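`test_export_import_state` only covers the export half, since `import_json` is still a TODO in `state.py`. A minimal standalone sketch of what a symmetric round-trip could look like, with plain dicts standing in for the typed state (the real version would rebuild `Actor` and `QLMEvent` objects via `from_dict` constructors, which are assumptions here, not existing API):

```python
import json
import os
import tempfile

def export_state(state, path):
    """Serialize the state dict to pretty-printed JSON."""
    with open(path, "w") as f:
        json.dump(state, f, indent=2)

def import_state(path):
    """Load and sanity-check an exported state file.
    A full implementation would rebuild typed objects here."""
    with open(path) as f:
        data = json.load(f)
    for key in ("layers", "events", "metrics"):
        if key not in data:
            raise ValueError(f"missing section: {key}")
    return data

state = {
    "layers": {},
    "events": [{"event_type": "operator_intent"}],
    "metrics": {"hi_events": 1},
}
path = os.path.join(tempfile.mkdtemp(), "qlm_state.json")
export_state(state, path)
assert import_state(path) == state  # lossless round-trip on plain data
```

A pytest version of this would reuse the `tmp_path` fixture already used by `test_export_import_state`, keeping export and import covered by one round-trip assertion.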