mirror of
https://github.com/blackboxprogramming/BlackRoad-Operating-System.git
synced 2026-03-17 05:57:21 -05:00
feat: Add Research Lab pack with paralleled math modules
Create comprehensive research-lab pack structure with mathematical and quantum computing modules from blackroad-prism-console:

Math Modules:
- hilbert_core.py: Hilbert space symbolic reasoning
- collatz/: Distributed Collatz conjecture verification
- linmath/: Linear mathematics C library
- lucidia_math_forge/: Symbolic proof engine
- lucidia_math_lab/: Experimental mathematics

Quantum Modules:
- lucidia_quantum/: Quantum core
- quantum_engine/: Circuit simulation

Experiments:
- br_math/: Gödel gap, quantum experiments

Includes pack.yaml manifest and comprehensive README.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
127
packs/research-lab/README.md
Normal file
@@ -0,0 +1,127 @@
# BlackRoad OS Research Lab Pack

Mathematical research, quantum computing experiments, and computational proof systems for the BlackRoad OS ecosystem.

## Overview

The Research Lab pack provides a unified home for all mathematical and computational research within BlackRoad OS. It parallels and integrates with the `blackboxprogramming/blackroad-prism-console` math modules.

## Structure

```
research-lab/
├── math/                     # Mathematical modules
│   ├── hilbert_core.py       # Hilbert space symbolic reasoning
│   ├── collatz/              # Collatz conjecture verification
│   ├── linmath/              # Linear mathematics (C library)
│   ├── lucidia_math_forge/   # Symbolic proof engine
│   └── lucidia_math_lab/     # Experimental mathematics
├── quantum/                  # Quantum computing
│   ├── lucidia_quantum/      # Quantum core
│   └── quantum_engine/       # Circuit simulation
├── experiments/              # Research experiments
│   └── br_math/              # Mathematical experiments
├── docs/                     # Documentation
├── pack.yaml                 # Pack manifest
└── README.md                 # This file
```

## Modules

### Hilbert Core (`hilbert_core.py`)

Quantum-inspired symbolic reasoning using density matrices and projectors.

```python
from hilbert_core import pure_state, truth_degree, projector_from_basis

# Create a pure state
psi = pure_state([1, 0, 0])

# Measure truth degree
P = projector_from_basis([[1], [0], [0]])
degree = truth_degree(psi, P)  # Returns 1.0
```

### Collatz Verification (`collatz/`)

Distributed Collatz conjecture verification system.

```bash
# Start orchestrator
python -m collatz.orchestrator --start 1 --end 1000000000 --db ./campaign.sqlite

# Run workers on devices
python -m collatz.worker --db ./campaign.sqlite
```

### Lucidia Math Forge (`lucidia_math_forge/`)

Symbolic proof engine with contradiction detection.

- `proofs.py` - Lightweight symbolic proof engine
- `operators.py` - Mathematical operators
- `numbers.py` - Number theory
- `fractals.py` - Fractal generation
- `dimensions.py` - Dimensional analysis

### Lucidia Math Lab (`lucidia_math_lab/`)

Interactive mathematical exploration.

- `prime_explorer.py` - Prime analysis with Ulam spirals
- `trinary_logic.py` - Ternary logic systems
- `quantum_finance.py` - Quantum-inspired financial math
- `iterative_math_build.py` - Iterative construction

### Linear Math (`linmath/`)

C header library for vectors, matrices, and transformations.

```c
#include "linmath.h"

vec3 v = {1.0f, 2.0f, 3.0f};
mat4x4 m;
mat4x4_identity(m);
mat4x4_rotate_Z(m, m, 0.5f);
```

## Research Areas

| Area | Description | Modules |
|------|-------------|---------|
| Number Theory | Primes, Collatz, Riemann | collatz, prime_explorer |
| Proof Systems | Symbolic proofs, contradictions | lucidia_math_forge, hilbert_core |
| Quantum Computing | Circuits, simulation | lucidia_quantum, quantum_engine |
| Computational Geometry | Linear algebra, transforms | linmath, hilbert_core |
| Logic Systems | Trinary, fuzzy, non-classical | lucidia_math_lab |

## Integration

### With QLM Lab

```python
# The pack integrates with /qlm_lab
from qlm_lab import api as qlm
from packs.research_lab.math import hilbert_core
```

### With Agent System

Research agents from the agent registry can interact with these modules:

- `agent.lucidia.core` - Core Lucidia intelligence
- `agent.lucidia.math` - Mathematical operations
- `agent.research.assistant` - Research coordination

## Source Repositories

This pack parallels content from:

- `blackboxprogramming/blackroad-prism-console` (source)
- `BlackRoad-OS/blackroad-os-prism-console` (target)

Maintained sync ensures mathematical research is available across the ecosystem.

## Contributing

1. Add new experiments to `experiments/`
2. Extend math modules in `math/`
3. Update `pack.yaml` with new modules
4. Run verification tests before committing

## License

Apache 2.0 - See LICENSE file
22
packs/research-lab/experiments/br_math/abacus_gate.py
Normal file
@@ -0,0 +1,22 @@
"""Balanced‑ternary projection operator."""
from __future__ import annotations

import numpy as np


def abacus_projection(x: np.ndarray) -> np.ndarray:
    """Project a vector to the nearest balanced‑ternary lattice point.

    Each element is rounded to -1, 0 or +1.
    """
    return np.clip(np.round(x), -1, 1)


def trlog(x: float) -> int:
    """Balanced‑ternary logarithm index for positive scalars."""
    if x <= 0:
        raise ValueError("x must be positive")
    return int(np.round(np.log(x) / np.log(3)))


__all__ = ["abacus_projection", "trlog"]
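As a sanity check, a self-contained run of the two helpers (definitions restated inline, since the `br_math` package may not be on the import path):

```python
import numpy as np

def abacus_projection(x: np.ndarray) -> np.ndarray:
    # Round each element to the nearest of {-1, 0, +1}.
    return np.clip(np.round(x), -1, 1)

def trlog(x: float) -> int:
    # Nearest power-of-3 exponent of a positive scalar.
    if x <= 0:
        raise ValueError("x must be positive")
    return int(np.round(np.log(x) / np.log(3)))

v = np.array([-2.3, -0.4, 0.2, 0.7, 5.0])
print(abacus_projection(v))  # elements become -1, 0, 0, 1, 1
print(trlog(9.0))   # 2, since 3**2 == 9
print(trlog(10.0))  # 2, since log3(10) ≈ 2.1 rounds to 2
```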
15
packs/research-lab/experiments/br_math/godel_gap.py
Normal file
@@ -0,0 +1,15 @@
"""Gödel–Gap potential scorer."""
from __future__ import annotations

from typing import Iterable

import numpy as np


def godel_gap(complexities: Iterable[float], evidences: Iterable[float], alpha: float = 1.0, beta: float = 1.0) -> float:
    r"""Compute \Phi_G = alpha * K - beta * I."""
    k = np.sum(list(complexities))
    i = np.sum(list(evidences))
    return float(alpha * k - beta * i)


__all__ = ["godel_gap"]
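For example, with complexities summing to K = 10 and evidence summing to I = 4, the default weights give Φ_G = 6 (function restated inline for a standalone run):

```python
import numpy as np

def godel_gap(complexities, evidences, alpha: float = 1.0, beta: float = 1.0) -> float:
    # Phi_G = alpha * K - beta * I: total complexity minus total evidence.
    k = np.sum(list(complexities))
    i = np.sum(list(evidences))
    return float(alpha * k - beta * i)

# Three statements with complexities 2, 3, 5 and supporting evidence 1, 1, 2:
phi = godel_gap([2, 3, 5], [1, 1, 2])
print(phi)  # 6.0  (10 - 4)
```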
61
packs/research-lab/experiments/br_math/llb_step.py
Normal file
@@ -0,0 +1,61 @@
"""Language–Lindblad step prototype.

Implements a single Euler–Maruyama update of logits inspired by the
Language–Lindblad equation. The function is intentionally tiny and
framework-agnostic; callers supply gradients for the prompt, memory,
kindness and harm potentials.
"""
from __future__ import annotations

from dataclasses import dataclass
from typing import Callable

import numpy as np


@dataclass
class Potentials:
    """Container for gradient callables.

    Each callable accepts the current logits and returns a gradient with
    the same shape.
    """

    grad_prompt: Callable[[np.ndarray], np.ndarray]
    grad_memory: Callable[[np.ndarray], np.ndarray]
    grad_kindness: Callable[[np.ndarray], np.ndarray]
    grad_harm: Callable[[np.ndarray], np.ndarray]


def llb_step(
    logits: np.ndarray,
    potentials: Potentials,
    dt: float = 1e-2,
    noise_scale: float = 1e-3,
    lam_k: float = 1.0,
    lam_h: float = 1.0,
) -> np.ndarray:
    """Perform one stochastic update of the logits.

    Parameters
    ----------
    logits: current logit vector.
    potentials: gradients of prompt, memory, kindness and harm potentials.
    dt: integration step.
    noise_scale: standard deviation of Gaussian noise.
    lam_k: kindness strength.
    lam_h: harm penalty.
    """
    lap = np.zeros_like(logits)  # placeholder for \nabla^2_E logit smoothing
    grad = (
        lap
        - potentials.grad_prompt(logits)
        - potentials.grad_memory(logits)
        + lam_k * potentials.grad_kindness(logits)
        - lam_h * potentials.grad_harm(logits)
    )
    noise = np.random.normal(scale=noise_scale, size=logits.shape)
    return logits + dt * grad + noise


__all__ = ["Potentials", "llb_step"]
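A minimal smoke test, with definitions restated inline so the snippet runs standalone: with zero noise and only a quadratic prompt potential (gradient equal to the logits themselves), one step shrinks the logits toward zero.

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np

@dataclass
class Potentials:
    grad_prompt: Callable[[np.ndarray], np.ndarray]
    grad_memory: Callable[[np.ndarray], np.ndarray]
    grad_kindness: Callable[[np.ndarray], np.ndarray]
    grad_harm: Callable[[np.ndarray], np.ndarray]

def llb_step(logits, potentials, dt=1e-2, noise_scale=1e-3, lam_k=1.0, lam_h=1.0):
    lap = np.zeros_like(logits)  # Laplacian placeholder, as in the module above
    grad = (lap
            - potentials.grad_prompt(logits)
            - potentials.grad_memory(logits)
            + lam_k * potentials.grad_kindness(logits)
            - lam_h * potentials.grad_harm(logits))
    noise = np.random.normal(scale=noise_scale, size=logits.shape)
    return logits + dt * grad + noise

zero = lambda x: np.zeros_like(x)
pots = Potentials(grad_prompt=lambda x: x,  # quadratic potential -> gradient is x
                  grad_memory=zero, grad_kindness=zero, grad_harm=zero)
x0 = np.array([1.0, -2.0, 0.5])
x1 = llb_step(x0, pots, dt=0.1, noise_scale=0.0)
print(x1)  # each logit scaled by 0.9: [0.9, -1.8, 0.45]
```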
27
packs/research-lab/experiments/br_math/noether_care.py
Normal file
@@ -0,0 +1,27 @@
"""Care–Noether current prototype."""
from __future__ import annotations

from typing import Iterable

import numpy as np


def care_current(vectors: Iterable[np.ndarray], deltas: Iterable[np.ndarray]) -> np.ndarray:
    """Compute a simple discrete care current.

    Parameters
    ----------
    vectors: iterable of semantic vectors for each token.
    deltas: iterable of variations produced by a paraphrase.

    Returns
    -------
    np.ndarray representing the care current. Zero indicates preserved
    meaning under the given transformation.
    """
    vec = np.stack(list(vectors))
    delta_arr = np.stack(list(deltas))
    return (vec * delta_arr).sum(axis=0)


__all__ = ["care_current"]
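A self-contained illustration (function restated inline): a paraphrase that leaves the token vectors unchanged yields a zero current, while one that perturbs a token yields a nonzero current.

```python
import numpy as np

def care_current(vectors, deltas):
    # Sum of elementwise products over tokens; zero means meaning preserved.
    vec = np.stack(list(vectors))
    delta_arr = np.stack(list(deltas))
    return (vec * delta_arr).sum(axis=0)

vecs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
# A paraphrase that leaves both token vectors unchanged:
no_change = [np.zeros(2), np.zeros(2)]
print(care_current(vecs, no_change))  # zero vector: meaning preserved
# A paraphrase that perturbs the first token:
shift = [np.array([0.5, 0.0]), np.zeros(2)]
print(care_current(vecs, shift))      # nonzero current: [0.5, 0.0]
```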
51
packs/research-lab/experiments/br_math/qt3.py
Normal file
@@ -0,0 +1,51 @@
"""Quaternion‑Ternary token algebra utilities."""
from __future__ import annotations

from dataclasses import dataclass

import numpy as np

TERNARY = (-1, 0, 1)


def ternary_mul(a: int, b: int) -> int:
    """Balanced‑ternary multiplication with 0 absorbing."""
    if a == 0 or b == 0:
        return 0
    return 1 if a == b else -1


@dataclass
class QT3:
    """Quaternion‑Ternary statement."""

    v: int
    q: np.ndarray  # (4,) quaternion [w, x, y, z]

    def __post_init__(self) -> None:
        if self.v not in TERNARY:
            raise ValueError("v must be -1, 0 or +1")
        self.q = np.asarray(self.q, dtype=float)
        self.q = self.q / np.linalg.norm(self.q)

    def star(self, other: "QT3") -> "QT3":
        """Compose with another statement."""
        v = ternary_mul(self.v, other.v)
        q = quat_mul(self.q, other.q)
        return QT3(v=v, q=q)


def quat_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Quaternion (Hamilton) multiplication of [w, x, y, z] arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array(
        [
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        ]
    )


__all__ = ["QT3", "ternary_mul", "quat_mul"]
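Two quick checks of the algebra (helpers restated inline): composing two 90° z-rotations gives a 180° rotation, and in balanced ternary two falsehoods compose to a truth.

```python
import numpy as np

def ternary_mul(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return 1 if a == b else -1

def quat_mul(a, b):
    # Hamilton product of [w, x, y, z] quaternions.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

# Unit quaternion for a 90° rotation about z: q = [cos(θ/2), 0, 0, sin(θ/2)]
half = np.sqrt(0.5)
qz90 = np.array([half, 0.0, 0.0, half])
qz180 = quat_mul(qz90, qz90)
print(np.round(qz180, 6) + 0.0)  # ≈ [0, 0, 0, 1]: a 180° rotation about z
print(ternary_mul(-1, -1))       # 1: two falsehoods compose to truth
```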
25
packs/research-lab/experiments/br_math/tro_control.py
Normal file
@@ -0,0 +1,25 @@
"""Trust–Resilience ODE controller."""
from __future__ import annotations

import numpy as np


def tro_step(t: float, state: np.ndarray, params: np.ndarray) -> np.ndarray:
    """One step of the trust–resilience dynamics.

    Parameters
    ----------
    t: time (unused but included for ODE solver compatibility)
    state: array ``[T, R, S, E, Jc, K]`` representing trust, resilience,
        entropy, error, care current and kindness.
    params: array ``[alpha, beta, gamma, eta, mu, nu, xi]`` of coefficients.
    """
    T, R, S, E, Jc, K = state
    alpha, beta, gamma, eta, mu, nu, xi = params
    dT = alpha * K + gamma * Jc - beta * E - eta * S
    # Crude \dot S estimate: np.gradient over the constant pair [S, S] is
    # identically zero, so this term is currently a placeholder.
    dR = mu * Jc - nu * E - xi * np.gradient([S, S])[0]
    return np.array([dT, dR])


__all__ = ["tro_step"]
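A hand-checkable call (restated inline, with the identically-zero dS/dt term dropped): with all coefficients set to 1, dT = K + Jc - E - S and dR = Jc - E.

```python
import numpy as np

def tro_step(t, state, params):
    T, R, S, E, Jc, K = state
    alpha, beta, gamma, eta, mu, nu, xi = params
    dT = alpha * K + gamma * Jc - beta * E - eta * S
    dR = mu * Jc - nu * E  # the np.gradient([S, S]) term above is identically zero
    return np.array([dT, dR])

state = np.array([0.5, 0.5, 0.1, 0.2, 0.3, 0.4])  # [T, R, S, E, Jc, K]
params = np.ones(7)                               # all coefficients = 1
print(tro_step(0.0, state, params))  # dT = 0.4+0.3-0.2-0.1 = 0.4, dR = 0.3-0.2 = 0.1
```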
83
packs/research-lab/experiments/hilbert_polya_gue.py
Normal file
@@ -0,0 +1,83 @@
"""Hilbert–Pólya / GUE spacing test.

This script compares normalized spacing distributions of the
first `nzeros` nontrivial zeros of the Riemann zeta function and
random Gaussian Unitary Ensemble (GUE) eigenvalues against the
Wigner surmise. It reports the Kolmogorov–Smirnov distance for
both samples relative to the Wigner CDF.
"""

from __future__ import annotations

import argparse
import math
from typing import Callable

import mpmath as mp
import numpy as np


def wigner_cdf(s: np.ndarray) -> np.ndarray:
    """CDF of the Wigner surmise for the GUE."""
    return np.vectorize(
        lambda x: math.erf(2 * x / math.sqrt(math.pi))
        - (4 * x / math.pi) * math.exp(-4 * x**2 / math.pi)
    )(s)


def ks_distance(sample: np.ndarray, cdf: Callable[[np.ndarray], np.ndarray]) -> float:
    """Return the two-sided Kolmogorov–Smirnov distance between a sample and a CDF."""
    x = np.sort(sample)
    n = x.size
    if n == 0:
        raise ValueError("sample must contain at least one element")
    cdf_vals = cdf(x)
    ecdf_right = np.arange(1, n + 1) / n
    ecdf_left = np.arange(0, n) / n
    d_plus = np.max(ecdf_right - cdf_vals)
    d_minus = np.max(cdf_vals - ecdf_left)
    return float(max(d_plus, d_minus))


def riemann_zeros(n: int) -> np.ndarray:
    """Return the imaginary parts of the first n Riemann zeta zeros."""
    mp.mp.dps = 30
    return np.array([float(mp.zetazero(k).imag) for k in range(1, n + 1)])


def normalized_spacings(vals: np.ndarray) -> np.ndarray:
    """Return consecutive spacings normalised to unit mean."""
    diffs = np.diff(vals)
    return diffs / np.mean(diffs)


def gue_spacings(n: int, k: int) -> np.ndarray:
    """Sample spacings from k random n×n GUE matrices."""
    spacings: list[float] = []
    for _ in range(k):
        a = np.random.normal(size=(n, n)) + 1j * np.random.normal(size=(n, n))
        h = (a + a.conj().T) / 2  # Hermitian
        eigvals = np.linalg.eigvalsh(h)
        spacings.extend(normalized_spacings(eigvals))
    return np.array(spacings)


def main() -> None:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("--nzeros", type=int, default=200, help="number of zeta zeros")
    parser.add_argument("--gue_n", type=int, default=200, help="dimension of GUE matrix")
    parser.add_argument("--gue_k", type=int, default=20, help="number of GUE samples")
    args = parser.parse_args()

    zero_spacings = normalized_spacings(riemann_zeros(args.nzeros))
    gue_sample = gue_spacings(args.gue_n, args.gue_k)

    ks_zero = ks_distance(zero_spacings, wigner_cdf)
    ks_gue = ks_distance(gue_sample, wigner_cdf)

    print(f"KS distance (zeros vs Wigner): {ks_zero:.4f}")
    print(f"KS distance (GUE vs Wigner):  {ks_gue:.4f}")


if __name__ == "__main__":
    main()
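The GUE half of the comparison can be exercised without mpmath; a seeded sketch restating `wigner_cdf` and `ks_distance` from the script above, using only the bulk of the spectrum where the eigenvalue density is roughly flat:

```python
import math
import numpy as np

def wigner_cdf(s):
    # CDF of the GUE Wigner surmise p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi).
    f = lambda x: (math.erf(2 * x / math.sqrt(math.pi))
                   - (4 * x / math.pi) * math.exp(-4 * x**2 / math.pi))
    return np.vectorize(f)(s)

def ks_distance(sample, cdf):
    x = np.sort(sample)
    n = x.size
    c = cdf(x)
    return float(max(np.max(np.arange(1, n + 1) / n - c),
                     np.max(c - np.arange(0, n) / n)))

rng = np.random.default_rng(0)
a = rng.normal(size=(300, 300)) + 1j * rng.normal(size=(300, 300))
eig = np.linalg.eigvalsh((a + a.conj().T) / 2)
bulk = eig[100:200]          # central eigenvalues, where density is roughly flat
s = np.diff(bulk)
s = s / s.mean()             # spacings normalized to unit mean
d = ks_distance(s, wigner_cdf)
print(f"KS distance to Wigner: {d:.3f}")  # small: bulk spacings follow the surmise
```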
9
packs/research-lab/math/collatz/Dockerfile
Normal file
@@ -0,0 +1,9 @@
# Multi-arch Python base; works on x86_64, arm64, aarch64 (Jetson/RPi)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY collatz ./collatz
COPY scripts ./scripts
COPY collatz/config.yaml ./collatz/config.yaml
CMD ["bash", "scripts/run_local.sh"]
42
packs/research-lab/math/collatz/README.md
Normal file
@@ -0,0 +1,42 @@
# Collatz Campaign (LLM-assisted)

Goal: search for Collatz counterexamples or extend verified bounds.

- Deterministic chunking; resumable via SQLite.
- Each chunk is verified by a second pass (different arithmetic schedule).
- Anomalies emit full "witness" traces for human audit.

## Quickstart

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python -m collatz.orchestrator --start 1 --end 100000000 --chunk 100000 --workers 4
```

## Run on multiple machines (Jetson/RPi/PC)

```bash
# On each device, point to the same repo folder (or sync via git pulls),
# then run worker(s) pulling chunks from the same SQLite DB file:
python -m collatz.worker --db ./campaign.sqlite --workers 4
```

Or just:

```bash
bash scripts/run_local.sh
```

## Outputs

- `campaign.sqlite`: jobs, results, anomalies, and checkpoints.
- `artifacts/`: CSV summaries, anomaly traces (repro inputs + partial trajectories).
- `RESULTS.md`: rolling human-readable results.

## What counts as "progress"?

1. No counterexample up to N (monotone increase of the checked bound).
2. New records: largest stopping time / maximum excursion discovered, with full witnesses.

_Last updated on 2025-09-11_
6
packs/research-lab/math/collatz/config.yaml
Normal file
@@ -0,0 +1,6 @@
db_path: './campaign.sqlite'
artifact_dir: './artifacts'
chunk_size: 100000
verify_pass: true
max_trace_steps: 1000000
report_every_seconds: 10
115
packs/research-lab/math/collatz/db.py
Normal file
@@ -0,0 +1,115 @@
import os
import sqlite3
import time
from typing import Optional, Tuple

SCHEMA = """
CREATE TABLE IF NOT EXISTS jobs(
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  start_n INTEGER NOT NULL,
  end_n INTEGER NOT NULL,
  status TEXT NOT NULL DEFAULT 'queued',  -- queued|running|done|error
  claimed_at REAL,
  finished_at REAL
);
CREATE TABLE IF NOT EXISTS results(
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  job_id INTEGER NOT NULL,
  min_n INTEGER NOT NULL,
  max_n INTEGER NOT NULL,
  max_stopping_time INTEGER,
  max_excursion INTEGER,
  checked_count INTEGER,
  verified INTEGER NOT NULL,  -- 0/1
  FOREIGN KEY(job_id) REFERENCES jobs(id)
);
CREATE TABLE IF NOT EXISTS anomalies(
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  n0 INTEGER NOT NULL,
  reason TEXT NOT NULL,
  job_id INTEGER,
  trace_path TEXT,
  created_at REAL
);
CREATE INDEX IF NOT EXISTS idx_jobs_status ON jobs(status);
"""


def connect(path: str):
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    conn = sqlite3.connect(path, timeout=60, isolation_level=None)
    conn.execute("PRAGMA journal_mode=WAL;")
    for stmt in SCHEMA.strip().split(";"):
        if stmt.strip():
            conn.execute(stmt)
    return conn


def enqueue_chunks(conn, start_n: int, end_n: int, chunk: int):
    cur = conn.cursor()
    n = start_n
    while n <= end_n:
        cur.execute(
            "INSERT INTO jobs(start_n,end_n) VALUES(?,?)",
            (n, min(n + chunk - 1, end_n)),
        )
        n += chunk


def claim_job(conn) -> Optional[Tuple[int, int, int]]:
    cur = conn.cursor()
    cur.execute("BEGIN IMMEDIATE;")
    row = cur.execute(
        "SELECT id,start_n,end_n FROM jobs WHERE status='queued' ORDER BY id LIMIT 1"
    ).fetchone()
    if not row:
        conn.execute("COMMIT;")
        return None
    job_id, s, e = row
    conn.execute(
        "UPDATE jobs SET status='running', claimed_at=? WHERE id=?",
        (time.time(), job_id),
    )
    conn.execute("COMMIT;")
    return job_id, s, e


def finish_job(
    conn,
    job_id: int,
    verified: int,
    min_n: int,
    max_n: int,
    max_stopping_time: int,
    max_excursion: int,
    checked: int,
):
    conn.execute(
        "INSERT INTO results(job_id,min_n,max_n,max_stopping_time,max_excursion,checked_count,verified) VALUES(?,?,?,?,?,?,?)",
        (job_id, min_n, max_n, max_stopping_time, max_excursion, checked, verified),
    )
    conn.execute(
        "UPDATE jobs SET status='done', finished_at=? WHERE id=?",
        (time.time(), job_id),
    )


def record_anomaly(
    conn,
    n0: int,
    reason: str,
    job_id: Optional[int],
    trace_path: Optional[str],
):
    conn.execute(
        "INSERT INTO anomalies(n0,reason,job_id,trace_path,created_at) VALUES(?,?,?,?,?)",
        (n0, reason, job_id, trace_path, time.time()),
    )


def status(conn):
    cur = conn.cursor()
    queued = cur.execute("SELECT COUNT(*) FROM jobs WHERE status='queued'").fetchone()[0]
    running = cur.execute("SELECT COUNT(*) FROM jobs WHERE status='running'").fetchone()[0]
    done = cur.execute("SELECT COUNT(*) FROM jobs WHERE status='done'").fetchone()[0]
    return queued, running, done
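The heart of `claim_job` is the `BEGIN IMMEDIATE` transaction, which takes SQLite's write lock before reading so that two workers can never claim the same queued chunk. A self-contained sketch of that pattern against an in-memory database (schema reduced to the columns involved):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit, like db.connect
conn.execute("CREATE TABLE jobs(id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "start_n INTEGER, end_n INTEGER, "
             "status TEXT DEFAULT 'queued', claimed_at REAL)")
for n in range(1, 301, 100):  # three chunks of 100
    conn.execute("INSERT INTO jobs(start_n, end_n) VALUES(?, ?)", (n, n + 99))

def claim_job(conn):
    # BEGIN IMMEDIATE acquires the write lock up front, so the SELECT and
    # UPDATE below happen atomically with respect to other workers.
    conn.execute("BEGIN IMMEDIATE;")
    row = conn.execute("SELECT id, start_n, end_n FROM jobs "
                       "WHERE status='queued' ORDER BY id LIMIT 1").fetchone()
    if not row:
        conn.execute("COMMIT;")
        return None
    conn.execute("UPDATE jobs SET status='running', claimed_at=? WHERE id=?",
                 (time.time(), row[0]))
    conn.execute("COMMIT;")
    return row

print(claim_job(conn))  # (1, 1, 100)
print(claim_job(conn))  # (2, 101, 200)
```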
38
packs/research-lab/math/collatz/orchestrator.py
Normal file
@@ -0,0 +1,38 @@
import argparse
import time

import yaml

from .db import connect, enqueue_chunks, status


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--start", type=int, required=True)
    ap.add_argument("--end", type=int, required=True)
    ap.add_argument("--chunk", type=int, default=None)
    ap.add_argument("--db", default="./campaign.sqlite")
    ap.add_argument("--cfg", default="./collatz/config.yaml")
    ap.add_argument("--workers", type=int, default=0, help="optional hint for humans/logs only")
    args = ap.parse_args()

    with open(args.cfg) as fh:
        cfg = yaml.safe_load(fh)
    db = connect(args.db)
    chunk = args.chunk or int(cfg["chunk_size"])

    enqueue_chunks(db, args.start, args.end, chunk)
    print(f"Enqueued [{args.start}, {args.end}] in chunks of {chunk}.")
    print("Run workers on each device: `python -m collatz.worker --db ./campaign.sqlite`")
    print("Status will refresh every ~10s.\n")

    while True:
        q, r, d = status(db)
        print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] queued={q} running={r} done={d}")
        if q == 0 and r == 0:
            print("All jobs complete.")
            break
        time.sleep(10)


if __name__ == "__main__":
    main()
23
packs/research-lab/math/collatz/verifier.py
Normal file
@@ -0,0 +1,23 @@
import argparse

from .db import connect


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--db", default="./campaign.sqlite")
    args = ap.parse_args()
    db = connect(args.db)
    cur = db.cursor()
    jobs, checked, max_stop, max_exc = cur.execute(
        "SELECT COUNT(*), SUM(checked_count), MAX(max_stopping_time), MAX(max_excursion) FROM results"
    ).fetchone()
    print(
        f"Jobs: {jobs}  Integers checked: {checked or 0}  "
        f"Record stopping time: {max_stop or 0}  Record excursion: {max_exc or 0}"
    )
    anomalies = cur.execute("SELECT COUNT(*) FROM anomalies").fetchone()[0]
    print(f"Anomalies (need audit): {anomalies}")


if __name__ == "__main__":
    main()
108
packs/research-lab/math/collatz/worker.py
Normal file
@@ -0,0 +1,108 @@
import argparse
import os
from typing import Tuple

import psutil
import yaml

from .db import claim_job, connect, finish_job, record_anomaly


# Fast integer Collatz step with power-of-two compression
def collatz_step(n: int) -> int:
    if n % 2 == 0:
        return n // 2
    # 3n+1, then compress out all factors of 2
    n = 3 * n + 1
    # bit trick: (n & -n).bit_length() - 1 counts trailing zero bits
    return n >> ((n & -n).bit_length() - 1)


def stopping_time_and_excursion(n0: int, max_steps: int = 10_000_000) -> Tuple[int, int]:
    n = n0
    max_exc = n
    steps = 0
    while n != 1 and steps < max_steps:
        n = collatz_step(n)
        if n > max_exc:
            max_exc = n
        steps += 1
    if n != 1:
        return -1, max_exc  # anomaly (didn't reach 1 within cap)
    return steps, max_exc


def verify_second_pass(n0: int) -> bool:
    # Different schedule: classic per-step without compression, but still safe.
    n = n0
    seen_steps = 0
    while n != 1 and seen_steps < 20_000_000:
        if n % 2 == 0:
            n //= 2
        else:
            n = 3 * n + 1
        seen_steps += 1
    return n == 1


def run_job(db_path: str, artifact_dir: str, job_id: int, s: int, e: int, verify: bool):
    os.makedirs(artifact_dir, exist_ok=True)
    conn = connect(db_path)
    checked = 0
    max_stop = 0
    max_exc = 0
    for n0 in range(s, e + 1):
        st, exc = stopping_time_and_excursion(n0)
        if st < 0:
            # anomaly: didn't converge within cap
            trace_path = os.path.join(artifact_dir, f"anomaly_trace_{n0}.txt")
            with open(trace_path, "w") as f:
                n = n0
                for _ in range(1_000_000):
                    f.write(str(n) + "\n")
                    if n == 1:
                        break
                    n = 3 * n + 1 if n & 1 else n // 2
            record_anomaly(conn, n0, "no_convergence_cap", job_id, trace_path)
        else:
            if st > max_stop:
                max_stop = st
            if exc > max_exc:
                max_exc = exc
            if verify and not verify_second_pass(n0):
                record_anomaly(conn, n0, "verify_mismatch", job_id, None)
        checked += 1
    finish_job(conn, job_id, 1 if verify else 0, s, e, max_stop, max_exc, checked)


def worker_loop(db_path: str, artifact_dir: str, verify: bool):
    conn = connect(db_path)
    while True:
        slot = claim_job(conn)
        if not slot:
            break
        job_id, s, e = slot
        run_job(db_path, artifact_dir, job_id, s, e, verify)


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--db", default="./campaign.sqlite")
    ap.add_argument("--cfg", default="./collatz/config.yaml")
    ap.add_argument("--workers", type=int, default=max(1, psutil.cpu_count(logical=False) or 1))
    args = ap.parse_args()
    with open(args.cfg) as fh:
        cfg = yaml.safe_load(fh)
    artifact_dir = cfg["artifact_dir"]
    verify = bool(cfg.get("verify_pass", True))

    # Simple local worker pool
    import multiprocessing as mp

    with mp.Pool(processes=args.workers) as pool:
        pool.starmap(worker_loop, [(args.db, artifact_dir, verify)] * args.workers)


if __name__ == "__main__":
    main()
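The compressed step in `collatz_step` can be checked by hand (restated inline): after `3n+1` the bit trick `(n & -n).bit_length() - 1` counts the trailing zero bits, so all factors of 2 are stripped in one shift.

```python
def collatz_step(n: int) -> int:
    # Restated from worker.py: one compressed Collatz step.
    if n % 2 == 0:
        return n // 2
    n = 3 * n + 1
    return n >> ((n & -n).bit_length() - 1)  # strip all trailing factors of 2

# n = 7: 3*7+1 = 22 = 2 * 11, so the compressed step jumps straight to 11.
print(collatz_step(7))   # 11
print(collatz_step(11))  # 17  (34 = 2 * 17)
print(collatz_step(16))  # 8   (even numbers halve once per step)
```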
186
packs/research-lab/math/hilbert_core.py
Normal file
@@ -0,0 +1,186 @@
# Minimal Hilbert-space symbolic core for context-sensitive reasoning.
# Dependencies: numpy
#
# Quickstart:
#   python hilbert_core.py
#
# What you get:
#   • Projectors from basis vectors/subspaces
#   • Density-matrix (pure/mixed) states
#   • Truth degrees via Tr(ρ P)
#   • Lüders update (measurement-as-question)
#   • Tensor product for role/filler binding
#   • Commutator-based order/context effects demo

import numpy as np

# ---------- Linear algebra helpers ----------

def normalize(v: np.ndarray) -> np.ndarray:
    v = np.asarray(v, dtype=np.complex128).reshape(-1)
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("Zero vector cannot be normalized.")
    return v / n


def orthonormalize(B: np.ndarray) -> np.ndarray:
    """QR-based orthonormalization for (possibly) non-orthonormal columns."""
    B = np.asarray(B, dtype=np.complex128)
    if B.ndim == 1:
        B = B.reshape(-1, 1)
    Q, _ = np.linalg.qr(B)
    return Q


def projector_from_basis(B: np.ndarray) -> np.ndarray:
    """Return the projector onto the column span of B."""
    Q = orthonormalize(B)
    P = Q @ Q.conj().T
    return (P + P.conj().T) / 2  # hermitize for numerical stability


def pure_state(psi: np.ndarray) -> np.ndarray:
    psi = normalize(psi)
    return np.outer(psi, psi.conj())


def mixed_state(states, probs=None) -> np.ndarray:
    """Create a mixed state from state vectors.

    Parameters
    ----------
    states : Iterable[np.ndarray]
        State vectors that will be orthonormalized as a group.
    probs : Iterable[float] | None
        Probability weights for the states. If omitted, a uniform
        distribution is assumed.

    Returns
    -------
    np.ndarray
        Density matrix representing the mixed state.

    Raises
    ------
    ValueError
        If no states are provided, if the probabilities do not match the
        number of states, or if a negative/zero-sum probability is supplied.
    """
    states = list(states)
    if not states:
        raise ValueError("At least one state vector is required")

    states_arr = np.column_stack(states)
    ortho = orthonormalize(states_arr)
    states = [ortho[:, i] for i in range(ortho.shape[1])]

    if probs is None:
        probs = np.ones(len(states), dtype=float)
    else:
        probs = np.asarray(probs, dtype=float)
        if probs.size != len(states):
            raise ValueError("Length of probs must match number of states")
        if np.any(probs < 0):
            raise ValueError("Probabilities must be non-negative")
    total = probs.sum()
    if total <= 0:
        raise ValueError("Sum of probabilities must be positive")
    probs = probs / total

    rho = sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))
    return (rho + rho.conj().T) / 2


def tensor(*ops) -> np.ndarray:
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# ---------- Reasoning primitives ----------

def truth_degree(rho: np.ndarray, P: np.ndarray) -> float:
    """Degree of truth for proposition P in state ρ: Tr(ρP)."""
    return float(np.real(np.trace(rho @ P)))


def luders_update(rho: np.ndarray, P: np.ndarray, eps: float = 1e-12):
    """
    Lüders rule: ρ' = PρP / Tr(PρP).
    Returns (ρ', probability_of_yes).
    """
    M = P @ rho @ P
    p = float(np.real(np.trace(M)))
    if p > eps:
        rho_new = M / p
        rho_new = (rho_new + rho_new.conj().T) / 2
        return rho_new, p
    return rho, 0.0


def commutator(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    return A @ B - B @ A


def noncommutativity(A: np.ndarray, B: np.ndarray) -> float:
    return float(np.linalg.norm(commutator(A, B), ord='fro'))

# ---------- Stateful helper (ledger of Q&A) ----------

class SymbolicState:
    def __init__(self, dim: int | None = None, rho: np.ndarray | None = None):
        if rho is None:
            if dim is None:
                raise ValueError("Provide dim or rho")
            self.rho = np.eye(dim, dtype=np.complex128) / dim  # maximally mixed
            self.dim = dim
        else:
            self.rho = np.asarray(rho, dtype=np.complex128)
            self.dim = self.rho.shape[0]
        self.ledger: list[tuple[str, str | None, float]] = []

    def degree(self, P: np.ndarray, name: str | None = None) -> float:
        d = truth_degree(self.rho, P)
        if name:
            self.ledger.append(("degree", name, d))
        return d

    def ask(self, P: np.ndarray, name: str | None = None) -> float:
        rho_new, p = luders_update(self.rho, P)
        self.ledger.append(("ask", name, p))
        self.rho = rho_new
        return p

    def copy(self) -> "SymbolicState":
        s = SymbolicState(rho=self.rho.copy())
        s.ledger = list(self.ledger)
        return s

# ---------- Tiny demo: order/context effects ----------

if __name__ == "__main__":
    d = 3
    e0 = np.array([1, 0, 0], dtype=np.complex128)
    e1 = np.array([0, 1, 0], dtype=np.complex128)
|
||||
e2 = np.array([0, 0, 1], dtype=np.complex128)
|
||||
|
||||
# Concepts as subspaces / rank-1 projectors onto unit vectors
|
||||
P_bird = projector_from_basis(e0) # "bird"
|
||||
v_fly = normalize(0.8*e0 + 0.6*e1) # "flying"
|
||||
P_flying = projector_from_basis(v_fly)
|
||||
P_penguin= projector_from_basis(e2) # "penguin"
|
||||
|
||||
S = SymbolicState(dim=d) # start maximally mixed: ignorance
|
||||
|
||||
base_bird = S.degree(P_bird, "bird")
|
||||
base_fly = S.degree(P_flying, "flying")
|
||||
print(f"Initial truth degrees: bird={base_bird:.3f}, flying={base_fly:.3f}")
|
||||
|
||||
# Ask 'bird?' then 'flying?'
|
||||
S1 = SymbolicState(dim=d)
|
||||
p_bird = S1.ask(P_bird, "bird")
|
||||
deg_fly_after_bird = S1.degree(P_flying)
|
||||
print(f"[bird→flying] P(yes bird)={p_bird:.3f}, flying after bird={deg_fly_after_bird:.3f}")
|
||||
|
||||
# Ask 'flying?' then 'bird?'
|
||||
S2 = SymbolicState(dim=d)
|
||||
p_fly = S2.ask(P_flying, "flying")
|
||||
deg_bird_after_fly = S2.degree(P_bird)
|
||||
print(f"[flying→bird] P(yes flying)={p_fly:.3f}, bird after flying={deg_bird_after_fly:.3f}")
|
||||
|
||||
# Noncommutativity (if >0, order can matter)
|
||||
nc = noncommutativity(P_bird, P_flying)
|
||||
print(f"Noncommutativity ||[P_bird,P_flying]||_F = {nc:.3f}")
|
||||
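The Lüders update above has two properties worth checking: it preserves the projection postulate's probability (`Tr(PρP)`) and, for a projector, asking the same question twice yields certainty. A minimal standalone sketch (the `luders_update` body is inlined from the file above so it runs without the package; the single-qubit setup is illustrative):

```python
import numpy as np

def luders_update(rho, P, eps=1e-12):
    # Lüders rule: rho' = P rho P / Tr(P rho P); returns (rho', p_yes)
    M = P @ rho @ P
    p = float(np.real(np.trace(M)))
    if p > eps:
        rho_new = M / p
        return (rho_new + rho_new.conj().T) / 2, p
    return rho, 0.0

rho = np.eye(2, dtype=complex) / 2               # maximally mixed qubit
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>

rho1, p = luders_update(rho, P0)   # p = Tr(P0 rho P0) = 0.5
rho2, p2 = luders_update(rho1, P0) # repeating the question: p2 = 1.0
print(p, np.allclose(rho1, P0), p2)
```

After the first measurement the state collapses to `|0><0|`, so the second identical question is answered "yes" with certainty — the repeatability that makes projective questions idempotent.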
1
packs/research-lab/math/linmath/.blackroad/pin.sha
Normal file
@@ -0,0 +1 @@
UNAVAILABLE
3
packs/research-lab/math/linmath/.clang-format
Normal file
@@ -0,0 +1,3 @@
BasedOnStyle: LLVM
IndentWidth: 2
ColumnLimit: 100
1
packs/research-lab/math/linmath/.clang-tidy
Normal file
@@ -0,0 +1 @@
Checks: 'bugprone-*,clang-analyzer-*,readability-*,portability-*,-readability-magic-numbers'
7
packs/research-lab/math/linmath/.editorconfig
Normal file
@@ -0,0 +1,7 @@
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 2
23
packs/research-lab/math/linmath/.github/workflows/ci.yml
vendored
Normal file
@@ -0,0 +1,23 @@
name: ci
on: [push, pull_request]
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        cc: [gcc, clang]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - name: Configure
        run: cmake -S . -B build -DBUILD_TESTING=ON
      - name: Build
        run: cmake --build build --config Release -j 2
      - name: Test
        run: ctest --test-dir build --output-on-failure
  cppcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: deep5050/cppcheck-action@v6
        with: { enable: "all", std: "c11", inconclusive: true }
10
packs/research-lab/math/linmath/.github/workflows/codeql.yml
vendored
Normal file
@@ -0,0 +1,10 @@
name: codeql
on:
  push: {branches: ["**"]}
  pull_request:
  schedule: [{cron: "0 6 * * 1"}]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: c-cpp
      - uses: github/codeql-action/analyze@v3
11
packs/research-lab/math/linmath/.github/workflows/sbom.yml
vendored
Normal file
@@ -0,0 +1,11 @@
name: sbom
on: [push, workflow_dispatch]
jobs:
  syft:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anchore/sbom-action@v0.17.5
        with:
          format: spdx-json
          artifact-name: sbom.spdx.json
7
packs/research-lab/math/linmath/.pre-commit-config.yaml
Normal file
@@ -0,0 +1,7 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks: [ {id: trailing-whitespace}, {id: end-of-file-fixer} ]
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v18.1.8
    hooks: [ {id: clang-format} ]
19
packs/research-lab/math/linmath/CMakeLists.txt
Normal file
@@ -0,0 +1,19 @@
cmake_minimum_required(VERSION 3.15)
project(linmath-header C)
add_library(linmath INTERFACE)
target_include_directories(linmath INTERFACE $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/include>
                                             $<INSTALL_INTERFACE:include>)
include(CTest)
if (BUILD_TESTING)
  add_executable(linmath_test tests/linmath_test.c)
  target_include_directories(linmath_test PRIVATE include tests)
  add_test(NAME linmath_test COMMAND linmath_test)
endif()
install(DIRECTORY include/ DESTINATION include)
install(TARGETS linmath EXPORT linmathTargets)
install(EXPORT linmathTargets NAMESPACE linmath:: DESTINATION lib/cmake/linmath)
include(CMakePackageConfigHelpers)
write_basic_package_version_file("${CMAKE_CURRENT_BINARY_DIR}/linmathConfigVersion.cmake"
  VERSION 0.1.0 COMPATIBILITY AnyNewerVersion)
configure_file(packaging/pkgconfig/linmath.pc.in linmath.pc @ONLY)
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/linmath.pc" DESTINATION lib/pkgconfig)
6
packs/research-lab/math/linmath/Doxyfile
Normal file
@@ -0,0 +1,6 @@
PROJECT_NAME = "linmath"
PROJECT_NUMBER = "0.1.0"
INPUT = include
RECURSIVE = NO
GENERATE_HTML = YES
QUIET = YES
13
packs/research-lab/math/linmath/LICENCE
Normal file
@@ -0,0 +1,13 @@
            DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
                    Version 2, December 2004

 Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>

 Everyone is permitted to copy and distribute verbatim or modified
 copies of this license document, and changing it is allowed as long
 as the name is changed.

            DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. You just DO WHAT THE FUCK YOU WANT TO.
3
packs/research-lab/math/linmath/NOTICE
Normal file
@@ -0,0 +1,3 @@
This repository is a mirror of datenwolf/linmath.h pinned to a specific commit.
BlackRoad added build tooling, CI, packaging, docs, and policies. Original code:
(c) upstream authors, license: WTFPL (see LICENCE).
10
packs/research-lab/math/linmath/README.md
Normal file
@@ -0,0 +1,10 @@
# linmath

Placeholder mirror of datenwolf/linmath.h.

## BlackRoad Mirror

- Pinned to upstream SHA: UNAVAILABLE
- Added CMake/Meson/pkg-config, CI, tests, SBOM, Conan/vcpkg, and docs.

_Last updated on 2025-09-11_
3
packs/research-lab/math/linmath/SECURITY.md
Normal file
@@ -0,0 +1,3 @@
# Security Policy
This is a header-only library mirrored from upstream (see pin in .blackroad/pin.sha).
Please open security reports privately via GitHub Security advisories.
4
packs/research-lab/math/linmath/examples/example.c
Normal file
@@ -0,0 +1,4 @@
#include "linmath.h"
int main(void) {
  return 0;
}
19
packs/research-lab/math/linmath/include/linmath.h
Normal file
@@ -0,0 +1,19 @@
/*
            DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
                    Version 2, December 2004

 Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>

 Everyone is permitted to copy and distribute verbatim or modified
 copies of this license document, and changing it is allowed as long
 as the name is changed.

            DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. You just DO WHAT THE FUCK YOU WANT TO.
*/
#ifndef LINMATH_H
#define LINMATH_H
/* Placeholder for upstream linmath.h; upstream source unavailable in this environment. */
#endif /* LINMATH_H */
4
packs/research-lab/math/linmath/meson.build
Normal file
@@ -0,0 +1,4 @@
project('linmath', 'c', version: '0.1.0')
inc = include_directories('include')
linmath_dep = declare_dependency(include_directories: inc)
test('linmath_test', executable('linmath_test', 'tests/linmath_test.c', include_directories: inc))
13
packs/research-lab/math/linmath/packaging/conan/conanfile.py
Normal file
@@ -0,0 +1,13 @@
import os

from conan import ConanFile
from conan.tools.files import copy


class Linmath(ConanFile):
    name = "linmath"
    version = "0.1.0"
    license = "WTFPL"
    description = "Header-only linear math for graphics"
    url = "https://github.com/blackboxprogramming/linmath.h"
    exports_sources = "include/*"
    no_copy_source = True

    def package(self):
        # Conan 2 API: self.copy was removed; use conan.tools.files.copy.
        copy(self, "*.h",
             src=os.path.join(self.source_folder, "include"),
             dst=os.path.join(self.package_folder, "include"))

    def package_info(self):
        self.cpp_info.includedirs = ["include"]
@@ -0,0 +1,7 @@
prefix=/usr
exec_prefix=${prefix}
includedir=${prefix}/include
Name: linmath
Description: Header-only linear math for graphics (vec3/vec4/mat4x4/quat)
Version: 0.1.0
Cflags: -I${includedir}
5
packs/research-lab/math/linmath/tests/linmath_test.c
Normal file
@@ -0,0 +1,5 @@
#include "linmath.h"
int main(void) {
  /* Placeholder test: upstream test unavailable */
  return 0;
}
6
packs/research-lab/math/linmath/vcpkg.json
Normal file
@@ -0,0 +1,6 @@
{
  "name": "linmath",
  "version-string": "0.1.0",
  "description": "Header-only linear algebra for graphics",
  "homepage": "https://github.com/blackboxprogramming/linmath.h"
}
1
packs/research-lab/math/lucidia_math_forge/__init__.py
Normal file
@@ -0,0 +1 @@
"""Lucidia Math Forge package."""
@@ -0,0 +1 @@
[]
41
packs/research-lab/math/lucidia_math_forge/dimensions.py
Normal file
@@ -0,0 +1,41 @@
"""Simple higher-dimensional math utilities."""
from __future__ import annotations

from dataclasses import dataclass
from typing import List

import matplotlib.pyplot as plt
import numpy as np


def hyper_equation(x: float, y: float, z: float) -> float:
    """Example equation unique to 4D space: ``w = x * y * z``."""
    return x * y * z


@dataclass
class HyperPoint:
    coords: List[float]

    def project(self, dims: int = 3) -> List[float]:
        """Project the point onto the first ``dims`` axes."""
        return self.coords[:dims]


def plot_projection(points: List[HyperPoint], filename: str = "projection.png") -> str:
    """Project 4D points to 3D and plot them."""
    arr = np.array([p.project(3) for p in points])
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(arr[:, 0], arr[:, 1], arr[:, 2])
    plt.savefig(filename)
    plt.close()
    return filename


if __name__ == "__main__":
    pts = [HyperPoint([x, x, x, hyper_equation(x, x, x)]) for x in range(3)]
    print("Saved projection to", plot_projection(pts))
51
packs/research-lab/math/lucidia_math_forge/fractals.py
Normal file
@@ -0,0 +1,51 @@
"""Fractal generators for the Lucidia Math Forge.

The :func:`generate_fractal` function accepts a recursion rule and writes
a simple fractal image to disk. Rules operate on complex numbers and
are intentionally simple to keep execution fast in constrained
environments.
"""
from __future__ import annotations

from typing import Callable

import matplotlib.pyplot as plt
import numpy as np


def julia_rule(z: complex, c: complex) -> complex:
    """Default Julia set rule ``z^2 + c``."""
    return z * z + c


def generate_fractal(
    rule: Callable[[complex, complex], complex] = julia_rule,
    filename: str = "fractal.png",
    iterations: int = 50,
    bounds=(-2.0, 2.0, -2.0, 2.0),
    resolution: int = 300,
) -> str:
    """Generate a fractal image using ``rule`` and save it to ``filename``."""
    xmin, xmax, ymin, ymax = bounds
    x = np.linspace(xmin, xmax, resolution)
    y = np.linspace(ymin, ymax, resolution)
    c = x[:, None] + 1j * y[None, :]
    z = np.zeros_like(c)
    mask = np.ones(c.shape, dtype=bool)

    for _ in range(iterations):
        z[mask] = rule(z[mask], c[mask])
        mask &= np.abs(z) < 2

    plt.imshow(mask.T, extent=bounds, cmap="magma")
    plt.axis("off")
    plt.savefig(filename, bbox_inches="tight", pad_inches=0)
    plt.close()
    return filename


if __name__ == "__main__":
    # Create a quick Julia set image.
    print("Saved fractal to", generate_fractal())
105
packs/research-lab/math/lucidia_math_forge/main.py
Normal file
@@ -0,0 +1,105 @@
"""Lucidia Math Forge interactive shell."""
from __future__ import annotations

import cmd
import json
import sys
from pathlib import Path

from .dimensions import HyperPoint, hyper_equation, plot_projection
from .fractals import generate_fractal
from .numbers import Infinitesimal, SurrealNumber, WaveNumber
from .operators import infinite_fold, paradox_merge
from .proofs import ProofEngine
from .sinewave import SineWave, test_properties


class LucidiaShell(cmd.Cmd):
    intro = "Welcome to the Lucidia Math Forge. Type help or ? to list commands."
    prompt = "forge> "

    def __init__(self) -> None:
        super().__init__()
        self.engine = ProofEngine()
        self.history: list[dict[str, str]] = []

    def do_numbers(self, arg: str) -> None:
        """Demonstrate the alternative number systems."""
        s = SurrealNumber(1, 2) + SurrealNumber(3, 4)
        i = Infinitesimal(1, 1) * Infinitesimal(2, -0.5)
        w = WaveNumber(2, 1) * WaveNumber(0.5, 3)
        print("Surreal:", s)
        print("Infinitesimal:", i)
        print("Wave:", w)
        self.history.append({"numbers": "shown"})

    def do_operator(self, arg: str) -> None:
        """Demonstrate custom operators."""
        merged = paradox_merge(3, 1)
        folded = infinite_fold(lambda a, b: a + b, [1, 2, 3])
        print("paradox_merge(3,1) ->", merged)
        print("infinite_fold sum ->", folded)
        self.history.append({"operator": str(merged)})

    def do_proof(self, arg: str) -> None:
        """Assume a statement: ``proof <statement>``."""
        statement = arg.strip()
        self.engine.assume(statement)
        self.history.append({"assume": statement})
        print("Assumed", statement)

    def do_fractal(self, arg: str) -> None:
        """Generate a fractal image."""
        filename = generate_fractal()
        self.history.append({"fractal": filename})
        print("Fractal written to", filename)

    def do_dimension(self, arg: str) -> None:
        """Plot 4D points projected into 3D."""
        pts = [HyperPoint([x, x, x, hyper_equation(x, x, x)]) for x in range(3)]
        filename = plot_projection(pts)
        self.history.append({"projection": filename})
        print("Projection saved to", filename)

    def do_sine(self, arg: str) -> None:
        """Test sine wave algebra properties."""
        waves = [SineWave(1, 1), SineWave(2, 3), SineWave(-1, -0.5)]
        test_properties(waves)
        self.history.append({"sine_test": "done"})
        print("Sine wave algebra tested; see contradiction log for issues.")

    def do_save(self, arg: str) -> None:
        """Save session history to JSON: ``save <file>``."""
        path = Path(arg.strip() or "session.json")
        path.write_text(json.dumps(self.history, indent=2))
        print("History saved to", path)

    def do_exit(self, arg: str) -> bool:  # pragma: no cover - interactive
        """Exit the REPL."""
        return True


def main() -> None:
    shell = LucidiaShell()
    if len(sys.argv) > 1 and sys.argv[1] == "--demo":
        shell.do_numbers("")
        shell.do_operator("")
        shell.do_proof("p")
        shell.do_fractal("")
        shell.do_dimension("")
        shell.do_sine("")
        shell.do_save("demo_session.json")
    else:
        shell.cmdloop()


if __name__ == "__main__":
    main()
120
packs/research-lab/math/lucidia_math_forge/numbers.py
Normal file
@@ -0,0 +1,120 @@
"""Experimental number systems for the Lucidia Math Forge.

This module defines three playful number systems used by the Lucidia
Math Forge. Each system implements basic arithmetic via operator
overloading so that instances behave a little like normal numbers.

The implementations are intentionally lightweight and educational rather
than mathematically rigorous. They are meant to demonstrate how Python's
operator overloading can be used to explore alternative arithmetic.

Example
-------
>>> from lucidia_math_forge.numbers import SurrealNumber
>>> SurrealNumber(1, 2) + SurrealNumber(3, 4)
SurrealNumber(left=4, right=6)
"""
from __future__ import annotations

from dataclasses import dataclass
from typing import Union

NumberLike = Union[float, int]


@dataclass
class SurrealNumber:
    """A toy representation of a surreal number.

    The number is represented by a left and right value. Real surreal
    arithmetic is far richer; here we simply operate component-wise to
    keep the implementation approachable.
    """

    left: NumberLike
    right: NumberLike

    def __add__(self, other: "SurrealNumber") -> "SurrealNumber":
        if not isinstance(other, SurrealNumber):
            return NotImplemented
        return SurrealNumber(self.left + other.left, self.right + other.right)

    def __mul__(self, other: "SurrealNumber") -> "SurrealNumber":
        if not isinstance(other, SurrealNumber):
            return NotImplemented
        return SurrealNumber(self.left * other.left, self.right * other.right)

    def inverse(self) -> "SurrealNumber":
        return SurrealNumber(1 / self.left, 1 / self.right)


@dataclass
class Infinitesimal:
    """Numbers with an infinitesimal component.

    The value is ``real + eps * coefficient``. Multiplication ignores
    ``eps^2`` terms, giving a tiny taste of differential arithmetic.
    """

    real: NumberLike
    eps: NumberLike = 0.0

    def __add__(self, other: "Infinitesimal") -> "Infinitesimal":
        if not isinstance(other, Infinitesimal):
            return NotImplemented
        return Infinitesimal(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other: "Infinitesimal") -> "Infinitesimal":
        if not isinstance(other, Infinitesimal):
            return NotImplemented
        real = self.real * other.real
        eps = self.real * other.eps + self.eps * other.real
        return Infinitesimal(real, eps)

    def inverse(self) -> "Infinitesimal":
        return Infinitesimal(1 / self.real, -self.eps / (self.real**2))


@dataclass
class WaveNumber:
    """A number represented by a simple sine wave.

    ``amplitude`` scales the wave while ``frequency`` stretches it. The
    operations below follow a loose physical intuition where addition
    combines amplitudes and multiplication combines frequencies.
    """

    amplitude: NumberLike
    frequency: NumberLike

    def __add__(self, other: "WaveNumber") -> "WaveNumber":
        if not isinstance(other, WaveNumber):
            return NotImplemented
        # Frequencies simply average to keep the result bounded.
        freq = (self.frequency + other.frequency) / 2
        return WaveNumber(self.amplitude + other.amplitude, freq)

    def __mul__(self, other: "WaveNumber") -> "WaveNumber":
        if not isinstance(other, WaveNumber):
            return NotImplemented
        amp = self.amplitude * other.amplitude
        freq = self.frequency + other.frequency
        return WaveNumber(amp, freq)

    def inverse(self) -> "WaveNumber":
        return WaveNumber(1 / self.amplitude, -self.frequency)


if __name__ == "__main__":
    # Demonstrate basic arithmetic for each number system.
    s_a = SurrealNumber(1, 2)
    s_b = SurrealNumber(3, 4)
    print("Surreal sample:", s_a + s_b, s_a * s_b, s_a.inverse())

    i_a = Infinitesimal(1, 1)
    i_b = Infinitesimal(2, -0.5)
    print("Infinitesimal sample:", i_a + i_b, i_a * i_b, i_a.inverse())

    w_a = WaveNumber(2, 1)
    w_b = WaveNumber(0.5, 3)
    print("Wave sample:", w_a + w_b, w_a * w_b, w_a.inverse())
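The ``Infinitesimal`` class above, which drops ``eps^2`` terms on multiplication, is exactly dual-number arithmetic, so the ``eps`` coefficient of ``f(x + eps)`` recovers ``f'(x)`` for polynomial ``f``. A standalone sketch (the class body is re-implemented inline, assumed to mirror numbers.py):

```python
from dataclasses import dataclass

@dataclass
class Infinitesimal:
    real: float
    eps: float = 0.0

    def __add__(self, other):
        return Infinitesimal(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        # Product rule falls out of dropping the eps^2 term.
        return Infinitesimal(self.real * other.real,
                             self.real * other.eps + self.eps * other.real)

x = Infinitesimal(3.0, 1.0)             # 3 + eps; eps marks the "dx" direction
y = x * x + x * Infinitesimal(2.0)      # f(x) = x^2 + 2x
print(y.real, y.eps)                    # f(3) = 15.0, f'(3) = 8.0
```

Seeding ``eps=1`` on the input is what turns evaluation into forward-mode differentiation.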
68
packs/research-lab/math/lucidia_math_forge/operators.py
Normal file
@@ -0,0 +1,68 @@
"""Custom symbolic operators for the Lucidia Math Forge.

Three playful operators are provided:

``paradox_merge`` (⊕)
    Combines two values in a way that remembers both sum and difference.
``infinite_fold`` (⊗)
    Repeatedly applies a binary function, emulating an infinite folding.
``collapse`` (↯)
    Reduces a nested structure to a single value.

These operators are intentionally whimsical and serve as examples of how
one might craft new algebraic toys in Python.
"""
from __future__ import annotations

from functools import reduce
from typing import Any, Callable, Iterable, Tuple


def paradox_merge(a: Any, b: Any) -> Tuple[Any, Any]:
    """Paradox merge operator ⊕.

    Returns a tuple of ``(a + b, a - b)``. The operator is non-commutative
    and non-associative, encouraging exploration of unusual algebraic
    structures.
    """
    try:
        return a + b, a - b
    except TypeError as exc:  # pragma: no cover - demonstration only
        raise TypeError("operands must support + and -") from exc


def infinite_fold(func: Callable[[Any, Any], Any], values: Iterable[Any]) -> Any:
    """Infinite fold operator ⊗.

    Conceptually applies ``func`` across ``values`` endlessly. In practice we
    simply use :func:`functools.reduce` but expose the idea of a never-ending
    fold.
    """
    return reduce(func, values)


def collapse(value: Any) -> Any:
    """Collapse operator ↯.

    If ``value`` is iterable the items are combined using ``paradox_merge``
    until a single value remains; otherwise it is returned unchanged.
    """
    if isinstance(value, Iterable) and not isinstance(value, (str, bytes)):
        items = list(value)
        if not items:
            return None
        result = items[0]
        for item in items[1:]:
            result = paradox_merge(result, item)[0]
        return result
    return value


if __name__ == "__main__":
    # Example usage of the custom operators.
    print("Paradox merge of 3 and 1:", paradox_merge(3, 1))
    print("Infinite fold sum:", infinite_fold(lambda x, y: x + y, [1, 2, 3, 4]))
    print("Collapse list:", collapse([1, 2, 3]))
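Because ``collapse`` keeps only the first (sum) component of each ``paradox_merge``, on plain numbers it degenerates to a running sum. A self-contained sketch mirroring the two functions above:

```python
from functools import reduce

def paradox_merge(a, b):
    # (a + b, a - b) — only the "+" branch survives a collapse
    return a + b, a - b

def collapse(items):
    # Fold the "+" branch across the sequence, left to right.
    return reduce(lambda acc, x: paradox_merge(acc, x)[0], items)

print(collapse([1, 2, 3]))   # 6 — identical to sum([1, 2, 3])
```

The "-" branch is discarded at every step, which is why the whimsical operator collapses onto ordinary addition here.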
86
packs/research-lab/math/lucidia_math_forge/proofs.py
Normal file
@@ -0,0 +1,86 @@
"""A lightweight symbolic proof engine with paradox logging.

The engine tracks assumptions and inferred statements. When a
contradiction is detected it is appended to ``creative_contradictions.json``
located alongside this module.
"""
from __future__ import annotations

import json
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional

LOG_FILE = Path(__file__).with_name("creative_contradictions.json")


def log_contradiction(message: str) -> None:
    """Append a contradiction message to ``creative_contradictions.json``."""
    data: List[str] = []
    if LOG_FILE.exists():
        try:
            data = json.loads(LOG_FILE.read_text())
        except json.JSONDecodeError:
            data = []
    data.append(message)
    LOG_FILE.write_text(json.dumps(data, indent=2))


@dataclass
class ProofNode:
    statement: str
    reason: str
    children: List["ProofNode"] = field(default_factory=list)

    def to_dict(self) -> Dict[str, object]:
        return {
            "statement": self.statement,
            "reason": self.reason,
            "children": [c.to_dict() for c in self.children],
        }

    def __str__(self, level: int = 0) -> str:
        indent = "  " * level
        lines = [f"{indent}{self.statement} ({self.reason})"]
        for child in self.children:
            lines.append(child.__str__(level + 1))
        return "\n".join(lines)


class ProofEngine:
    """Minimal proof tracker allowing contradictions."""

    def __init__(self) -> None:
        self.statements: Dict[str, ProofNode] = {}

    def assume(self, statement: str) -> ProofNode:
        node = ProofNode(statement, "assumption")
        if f"not {statement}" in self.statements:
            log_contradiction(f"Assumption {statement} contradicts its negation")
        self.statements[statement] = node
        return node

    def infer(self, statement: str, *reasons: str) -> ProofNode:
        parents = [self.statements[r] for r in reasons if r in self.statements]
        node = ProofNode(statement, "inference", parents)
        if f"not {statement}" in self.statements:
            log_contradiction(f"Inference {statement} contradicts existing statement")
        self.statements[statement] = node
        return node

    def prove(self, statement: str) -> Optional[ProofNode]:
        return self.statements.get(statement)

    def print_tree(self, statement: str) -> None:
        node = self.statements.get(statement)
        if node:
            print(node)


if __name__ == "__main__":
    engine = ProofEngine()
    engine.assume("p")
    engine.assume("not p")  # triggers a logged contradiction
    engine.infer("q", "p")
    engine.print_tree("q")
51
packs/research-lab/math/lucidia_math_forge/sinewave.py
Normal file
@@ -0,0 +1,51 @@
"""Sine wave algebra and paradox testing."""

from __future__ import annotations

from dataclasses import dataclass
from typing import List

from .proofs import log_contradiction


@dataclass
class SineWave:
    amplitude: float
    frequency: float

    def __add__(self, other: "SineWave") -> "SineWave":
        if not isinstance(other, SineWave):
            return NotImplemented
        return SineWave(self.amplitude + other.amplitude, self.frequency)

    def __mul__(self, other: "SineWave") -> "SineWave":
        if not isinstance(other, SineWave):
            return NotImplemented
        return SineWave(self.amplitude * other.amplitude, self.frequency + other.frequency)

    def inverse(self) -> "SineWave":
        return SineWave(-self.amplitude, -self.frequency)


IDENTITY = SineWave(0.0, 0.0)


def test_properties(waves: List[SineWave]) -> None:
    """Test algebraic properties and log paradoxes if they fail."""

    a, b, c = waves[:3]
    if (a + b) + c != a + (b + c):
        log_contradiction("SineWave addition is not associative")
    if a + b != b + a:
        log_contradiction("SineWave addition is not commutative")
    if (a * b) * c != a * (b * c):
        log_contradiction("SineWave multiplication is not associative")
    if a * b != b * a:
        log_contradiction("SineWave multiplication is not commutative")
    if a * (b + c) != (a * b) + (a * c):
        log_contradiction("SineWave distributive law fails")


if __name__ == "__main__":
    waves = [SineWave(1, 1), SineWave(2, 3), SineWave(-1, -0.5)]
    test_properties(waves)
    print("Identity element:", IDENTITY)
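The `SineWave` rules above make addition non-commutative by construction (the left operand's frequency wins), which is exactly the paradox `test_properties` logs. A standalone sketch that mirrors those two operators, with the hypothetical name `Wave` standing in for `SineWave`:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Wave:
    amplitude: float
    frequency: float

    def __add__(self, other: "Wave") -> "Wave":
        # Mirrors SineWave.__add__: amplitudes add, the left frequency wins.
        return Wave(self.amplitude + other.amplitude, self.frequency)

    def __mul__(self, other: "Wave") -> "Wave":
        # Mirrors SineWave.__mul__: amplitudes multiply, frequencies add.
        return Wave(self.amplitude * other.amplitude, self.frequency + other.frequency)


a, b = Wave(1.0, 1.0), Wave(2.0, 3.0)
print(a + b == b + a)  # False: a+b keeps frequency 1.0, b+a keeps 3.0
print(a * b == b * a)  # True: both sides are Wave(2.0, 4.0)
```

So multiplication is commutative while addition is not, and the module's `test_properties` run logs a contradiction for the latter.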
83
packs/research-lab/math/lucidia_math_lab/__init__.py
Normal file
@@ -0,0 +1,83 @@
"""Lucidia Math Lab modules.

This package exposes several mathematical utilities. Some of these
utilities depend on optional third-party libraries. Importing the
package previously imported all submodules eagerly, which caused an
immediate failure if an optional dependency (for example ``networkx``,
required by :class:`TrinaryLogicEngine`) was missing.

To make the package more robust, submodules are now imported lazily,
only when their attributes are accessed. This allows users to work with
the prime exploration helpers without installing the heavier trinary
logic requirements.
"""

from __future__ import annotations

from importlib import import_module
from typing import Any

__all__ = [
    "TrinaryLogicEngine",
    "ulam_spiral",
    "residue_grid",
    "fourier_prime_gaps",
    "RecursiveSandbox",
    "superposition",
    "classify_wave",
    "QuantumFinanceSimulator",
]


_MODULE_MAP = {
    "TrinaryLogicEngine": ("trinary_logic", "TrinaryLogicEngine"),
    "ulam_spiral": ("prime_explorer", "ulam_spiral"),
    "residue_grid": ("prime_explorer", "residue_grid"),
    "fourier_prime_gaps": ("prime_explorer", "fourier_prime_gaps"),
    "RecursiveSandbox": ("recursion_sandbox", "RecursiveSandbox"),
    "superposition": ("sine_wave_codex", "superposition"),
    "classify_wave": ("sine_wave_codex", "classify_wave"),
    "QuantumFinanceSimulator": ("quantum_finance", "QuantumFinanceSimulator"),
}


def __getattr__(name: str) -> Any:
    """Lazily import submodules when their attributes are requested.

    Parameters
    ----------
    name:
        The attribute name to retrieve.

    Raises
    ------
    AttributeError
        If ``name`` is not one of the exposed attributes.
    """

    if name not in _MODULE_MAP:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

    module_name, attr_name = _MODULE_MAP[name]
    module = import_module(f".{module_name}", __name__)
    return getattr(module, attr_name)


def __dir__() -> list[str]:
    """Return available attributes for auto-completion tools."""

    return sorted(__all__)
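The lazy-import mechanism above relies on the module-level `__getattr__` hook from PEP 562: the name is resolved against `_MODULE_MAP` on first access instead of at import time. A minimal standalone sketch of the same resolution logic, using the stdlib `math` module as a stand-in target (the name `lazy_get` is illustrative, not part of the package):

```python
from importlib import import_module
from typing import Any

# Maps an exported name to (module, attribute), as in the package above.
_MODULE_MAP = {"sqrt": ("math", "sqrt")}


def lazy_get(name: str) -> Any:
    """Resolve a name on first access, mirroring the package __getattr__."""
    if name not in _MODULE_MAP:
        raise AttributeError(f"no attribute {name!r}")
    module_name, attr_name = _MODULE_MAP[name]
    # The module is only imported when the attribute is actually requested.
    return getattr(import_module(module_name), attr_name)


print(lazy_get("sqrt")(9.0))  # 3.0
```

In the real package, Python calls `__getattr__` itself whenever `lucidia_math_lab.<name>` is not found in the module dict, so heavy optional dependencies load only when used.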
69
packs/research-lab/math/lucidia_math_lab/interface.py
Normal file
@@ -0,0 +1,69 @@
"""Command line interface for the Lucidia Math Lab."""

from __future__ import annotations

from datetime import datetime
from pathlib import Path

from .prime_explorer import (
    PrimeVisualizer,
    fourier_prime_gaps,
    plot_fourier,
    plot_residue,
    plot_ulam,
    residue_grid,
    ulam_spiral,
)
from .quantum_finance import QuantumFinanceSimulator
from .sine_wave_codex import plot_waves
from .trinary_logic import TrinaryLogicEngine

OUTPUT_DIR = Path("output")


def timestamp() -> str:
    return datetime.utcnow().strftime("%Y%m%d_%H%M%S")


def main() -> None:  # pragma: no cover - interactive
    engine = TrinaryLogicEngine.from_json(Path(__file__).with_name("trinary_operators.json"))
    visualizer = PrimeVisualizer(OUTPUT_DIR)
    finance = QuantumFinanceSimulator(price=100.0, volatility=1.0)

    menu = """
Lucidia Math Lab
-----------------
1. Show trinary logic AND table
2. Plot Ulam spiral
3. Plot residue grid (mod 10)
4. Fourier of prime gaps
5. Plot sine waves
6. Simulate quantum finance step
q. Quit
"""
    while True:
        choice = input(menu).strip()
        if choice == "1":
            print(engine.truth_table_ascii("AND"))
        elif choice == "2":
            grid, mask = ulam_spiral(25)
            fig = plot_ulam(grid, mask)
            visualizer.save_fig(fig, f"ulam_{timestamp()}")
        elif choice == "3":
            grid = residue_grid(10)
            fig = plot_residue(grid)
            visualizer.save_fig(fig, f"residue_{timestamp()}")
        elif choice == "4":
            gaps, fft = fourier_prime_gaps(100)
            fig = plot_fourier(gaps, fft)
            visualizer.save_fig(fig, f"fourier_{timestamp()}")
        elif choice == "5":
            fig = plot_waves([(1, 0, 1), (2, 0, 0.5)])
            visualizer.save_fig(fig, f"waves_{timestamp()}")
        elif choice == "6":
            dist = finance.step()
            price = finance.observe(dist)
            fig = finance.plot(dist)
            visualizer.save_fig(fig, f"finance_{timestamp()}")
            print(f"Collapsed price: {price:.2f}")
        elif choice.lower() == "q":
            break
        else:
            print("Unknown option")


if __name__ == "__main__":
    main()
156
packs/research-lab/math/lucidia_math_lab/iterative_math_build.py
Normal file
@@ -0,0 +1,156 @@
"""Utilities that capture the "Iterative Math-Build Loop" workflow.

This module turns a simple mathematical seed—the logistic map—into a tiny
experiment that can be iterated forever. The workflow mirrors Block 43:

Step 1 (Seed)
    Use the logistic function :math:`x_{n+1} = r x_n (1 - x_n)` as the pattern.

Step 2 (Three translations)
    Physics
        Population dynamics where energy input (sunlight, nutrients) is limited.
    Code
        A feedback loop that maps the state into itself with a tunable gain.
    Hardware
        A single-transistor logistic oscillator driven by a biasing envelope.

Step 3 (Build a toy)
    Simulate the logistic loop over a configurable number of pulses.

Step 4 (Measure & rename)
    Rename raw variables once we observe their behaviour: ``population`` becomes
    ``pulse_level`` and ``r`` becomes ``gain`` to match the oscillation view.

Step 5 (Archive & fork)
    Export snapshots that carry timestamped tags ready for storage.
"""

from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone
from statistics import fmean
from typing import Iterable, List


@dataclass(frozen=True)
class LoopSnapshot:
    """Stores a measured sequence from the logistic loop.

    Attributes
    ----------
    tag:
        Timestamped identifier that makes it easy to fork future experiments.
    gain:
        The effective amplification factor observed during the run.
    pulse_levels:
        Measured levels after each iteration of the loop.
    phase_drift:
        The final-step change, useful for spotting bifurcations.
    mean_level:
        Average pulse level over the capture window.
    """

    tag: str
    gain: float
    pulse_levels: List[float]
    phase_drift: float
    mean_level: float


def iterate_logistic_loop(*, gain: float, seed_level: float, pulses: int) -> List[float]:
    """Run the logistic map for the requested number of pulses.

    Parameters
    ----------
    gain:
        Feedback intensity of the loop (commonly ``r`` in the logistic map).
    seed_level:
        Initial pulse level. Must be in ``(0, 1)`` for the canonical map.
    pulses:
        Number of iterations to execute.

    Returns
    -------
    list of float
        Observed pulse levels after each iteration.
    """

    if pulses <= 0:
        raise ValueError("pulses must be positive")
    if not 0.0 < seed_level < 1.0:
        raise ValueError("seed_level must be strictly between 0 and 1")

    level = seed_level
    pulse_levels: List[float] = []
    for _ in range(pulses):
        level = gain * level * (1.0 - level)
        pulse_levels.append(level)
    return pulse_levels


def capture_snapshot(
    *,
    gain: float = 3.72,
    seed_level: float = 0.21,
    pulses: int = 128,
    tag_prefix: str = "symmetry_break",
) -> LoopSnapshot:
    """Simulate the loop and bundle the measurement in a :class:`LoopSnapshot`.

    The timestamp embedded into ``tag`` makes it trivial to archive runs and
    reference them the next time the "Next!" impulse hits.
    """

    pulse_levels = iterate_logistic_loop(gain=gain, seed_level=seed_level, pulses=pulses)
    phase_drift = pulse_levels[-1] - pulse_levels[-2]
    mean_level = fmean(pulse_levels)
    timestamp = datetime.now(timezone.utc).strftime("%Y_%m_%dT%H%M%SZ")
    tag = f"{tag_prefix}_{timestamp}"
    return LoopSnapshot(
        tag=tag,
        gain=gain,
        pulse_levels=pulse_levels,
        phase_drift=phase_drift,
        mean_level=mean_level,
    )


def export_snapshot(snapshot: LoopSnapshot) -> str:
    """Serialise a snapshot into a minimal archival string.

    The format favours human parsing (CSV-like) so it can be dropped into a
    notebook, pasted into a README, or stored alongside lab photos.
    """

    header = "tag,gain,mean_level,phase_drift,pulse_levels"
    levels = " ".join(f"{level:.6f}" for level in snapshot.pulse_levels)
    body = f"{snapshot.tag},{snapshot.gain:.6f},{snapshot.mean_level:.6f},{snapshot.phase_drift:.6f},{levels}"
    return f"{header}\n{body}\n"


def sweep_gains(
    *,
    gains: Iterable[float],
    seed_level: float,
    pulses: int,
) -> List[LoopSnapshot]:
    """Generate snapshots for a sequence of gains.

    This helper makes it easy to scan for bifurcations and immediately archive
    the most interesting regimes.
    """

    return [
        capture_snapshot(gain=gain, seed_level=seed_level, pulses=pulses, tag_prefix=f"gain_{gain:.3f}")
        for gain in gains
    ]


__all__ = [
    "LoopSnapshot",
    "capture_snapshot",
    "export_snapshot",
    "iterate_logistic_loop",
    "sweep_gains",
]
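The core of the loop above is the logistic recurrence itself. A minimal standalone sketch (the name `logistic` is illustrative) showing the behaviour at a tame gain, where the map converges to its fixed point `1 - 1/gain`:

```python
def logistic(gain: float, seed_level: float, pulses: int) -> list:
    """Iterate x -> gain * x * (1 - x), as in iterate_logistic_loop."""
    level = seed_level
    out = []
    for _ in range(pulses):
        level = gain * level * (1.0 - level)
        out.append(level)
    return out


levels = logistic(2.0, 0.25, 50)
# At gain = 2.0 the map converges to the fixed point 1 - 1/gain = 0.5.
print(abs(levels[-1] - 0.5) < 1e-9)  # True
```

The module's default `gain=3.72` instead lands in the chaotic regime, which is why `phase_drift` is worth recording: it stays near zero for convergent gains and jumps around once the orbit bifurcates.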
98
packs/research-lab/math/lucidia_math_lab/prime_explorer.py
Normal file
@@ -0,0 +1,98 @@
"""Prime pattern exploration and visualization utilities."""

from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path
from typing import Tuple

import matplotlib.pyplot as plt
import numpy as np
import sympy as sp


@dataclass
class PrimeVisualizer:
    output_dir: Path

    def save_fig(self, fig: plt.Figure, name: str) -> None:
        self.output_dir.mkdir(parents=True, exist_ok=True)
        png = self.output_dir / f"{name}.png"
        svg = self.output_dir / f"{name}.svg"
        fig.savefig(png)
        fig.savefig(svg)
        plt.close(fig)


def ulam_spiral(size: int) -> Tuple[np.ndarray, np.ndarray]:
    """Generate an Ulam spiral and mask of prime numbers."""

    grid = np.zeros((size, size), dtype=int)
    x = y = 0  # walk starts at the origin; indexing below re-centres it
    dx, dy = 0, -1
    for n in range(1, size * size + 1):
        if -size // 2 <= x < size // 2 and -size // 2 <= y < size // 2:
            grid[y + size // 2, x + size // 2] = n
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx
        x, y = x + dx, y + dy
    prime_mask = np.vectorize(sp.isprime)(grid)
    return grid, prime_mask


def plot_ulam(grid: np.ndarray, mask: np.ndarray) -> plt.Figure:
    fig, ax = plt.subplots()
    ax.imshow(mask, cmap="Greys")
    ax.set_xticks([])
    ax.set_yticks([])
    return fig


def residue_grid(mod: int, size: int = 100) -> np.ndarray:
    """Compute a modular residue grid.

    Parameters
    ----------
    mod:
        The modulus used for the residue computation.
    size:
        Total number of integers to include. ``size`` must be a perfect
        square so that the numbers can be reshaped into a square grid.

    Raises
    ------
    ValueError
        If ``size`` is not a perfect square.
    """

    numbers = np.arange(1, size + 1)
    side = int(np.sqrt(size))
    if side * side != size:
        raise ValueError("size must be a perfect square")
    return numbers.reshape(side, side) % mod


def plot_residue(grid: np.ndarray) -> plt.Figure:
    fig, ax = plt.subplots()
    ax.imshow(grid, cmap="viridis")
    ax.set_xticks([])
    ax.set_yticks([])
    return fig


def fourier_prime_gaps(limit: int) -> Tuple[np.ndarray, np.ndarray]:
    """Return prime gaps and their Fourier transform magnitude."""

    primes = list(sp.primerange(2, limit))
    gaps = np.diff(primes)
    fft = np.abs(np.fft.fft(gaps))
    return gaps, fft


def plot_fourier(gaps: np.ndarray, fft: np.ndarray) -> plt.Figure:
    fig, ax = plt.subplots(2, 1, figsize=(6, 6))
    ax[0].plot(gaps)
    ax[0].set_title("Prime gaps")
    ax[1].plot(fft)
    ax[1].set_title("FFT magnitude")
    return fig
37
packs/research-lab/math/lucidia_math_lab/quantum_finance.py
Normal file
@@ -0,0 +1,37 @@
"""Quantum-style finance sandbox."""

from __future__ import annotations

from dataclasses import dataclass, field
from typing import List

import matplotlib.pyplot as plt
import numpy as np


@dataclass
class QuantumFinanceSimulator:
    price: float
    volatility: float
    history: List[float] = field(default_factory=list)

    def step(self, samples: int = 1000) -> np.ndarray:
        """Simulate a probabilistic price distribution."""

        distribution = np.random.normal(self.price, self.volatility, samples)
        self.history.append(distribution.mean())
        return distribution

    def observe(self, distribution: np.ndarray) -> float:
        """Collapse the distribution to a single price."""

        self.price = float(np.random.choice(distribution))
        return self.price

    def plot(self, distribution: np.ndarray) -> plt.Figure:
        fig, ax = plt.subplots(2, 1, figsize=(6, 6))
        ax[0].hist(distribution, bins=30)
        ax[0].set_title("Probability distribution")
        ax[1].plot(self.history + [self.price])
        ax[1].set_title("Collapsed price over time")
        return fig
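The simulator's step/observe pair is a sample-then-collapse pattern: `step` draws a cloud of candidate prices, `observe` picks one and makes it the new state. A minimal stdlib-only sketch of the same idea (the class name `ToyFinance` is illustrative, not part of the module):

```python
import random


class ToyFinance:
    def __init__(self, price: float, volatility: float) -> None:
        self.price = price
        self.volatility = volatility

    def step(self, samples: int = 1000) -> list:
        # Sample a "superposed" distribution of candidate prices,
        # mirroring QuantumFinanceSimulator.step.
        return [random.gauss(self.price, self.volatility) for _ in range(samples)]

    def observe(self, dist: list) -> float:
        # Collapse: one candidate becomes the new price, as in observe().
        self.price = random.choice(dist)
        return self.price


random.seed(0)
sim = ToyFinance(100.0, 1.0)
price = sim.observe(sim.step())
print(95.0 < price < 105.0)  # the collapsed price stays near the mean
```

Each call to `step` re-centres the distribution on the last collapsed price, so repeated step/observe cycles perform a random walk whose spread is set by `volatility`.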
34
packs/research-lab/math/lucidia_math_lab/recursion_sandbox.py
Normal file
@@ -0,0 +1,34 @@
"""Recursive equation sandbox for detecting contradictions."""

from __future__ import annotations

import json
from dataclasses import dataclass, field
from pathlib import Path
from typing import List

import sympy as sp


@dataclass
class RecursiveSandbox:
    log_path: Path = Path("contradiction_log.json")
    log: List[dict] = field(default_factory=list)

    def parse_equation(self, equation: str) -> sp.Eq:
        lhs, rhs = equation.split("=")
        return sp.Eq(sp.sympify(lhs.strip()), sp.sympify(rhs.strip()))

    def detect_contradiction(self, equation: str) -> bool:
        """Detect simple self-referential contradictions."""

        if "f(f(" in equation:
            self.log_contradiction(equation, "self_reference")
            return True
        return False

    def log_contradiction(self, equation: str, reason: str) -> None:
        entry = {"equation": equation, "reason": reason}
        self.log.append(entry)
        with open(self.log_path, "w", encoding="utf-8") as fh:
            json.dump(self.log, fh, indent=2)
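The contradiction detector above is a plain substring heuristic: a nested application `f(f(...))` flags the equation as self-referential. A dependency-free sketch of that check and the log-entry shape it produces (`detect_self_reference` is an illustrative name, not the module's API):

```python
def detect_self_reference(equation: str) -> bool:
    # Same heuristic as RecursiveSandbox.detect_contradiction:
    # a nested application of f marks the equation as self-referential.
    return "f(f(" in equation


log = []
for eq in ["f(f(x)) = x + 1", "f(x) = x + 1"]:
    if detect_self_reference(eq):
        # Entries match the dicts RecursiveSandbox appends to its JSON log.
        log.append({"equation": eq, "reason": "self_reference"})

print(len(log))  # 1
```

Note the heuristic is purely syntactic; whitespace such as `f( f(x) )` would slip past it, which is in keeping with the sandbox's "simple contradictions" scope.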
38
packs/research-lab/math/lucidia_math_lab/sine_wave_codex.py
Normal file
@@ -0,0 +1,38 @@
"""Symbolic sine-wave superposition utilities."""

from __future__ import annotations

from typing import Iterable, Tuple

import matplotlib.pyplot as plt
import numpy as np

Wave = Tuple[float, float, float]  # frequency, phase, amplitude


def superposition(waves: Iterable[Wave], samples: int = 1000) -> Tuple[np.ndarray, np.ndarray]:
    """Compute the superposition of sine waves."""

    t = np.linspace(0, 2 * np.pi, samples)
    result = np.zeros_like(t)
    for freq, phase, amp in waves:
        result += amp * np.sin(freq * t + phase)
    return t, result


def classify_wave(value: float, eps: float = 1e-3) -> str:
    """Classify a wave value into truth/false/paradox."""

    if value > eps:
        return "truth"
    if value < -eps:
        return "false"
    return "paradox"


def plot_waves(waves: Iterable[Wave]) -> plt.Figure:
    t, result = superposition(waves)
    fig, ax = plt.subplots()
    ax.plot(t, result)
    ax.set_title("Sine wave superposition")
    return fig
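The `classify_wave` thresholds map wave amplitude onto a trinary truth value, with near-zero superpositions landing in the "paradox" band. A stdlib-only sketch evaluating one superposed point (the names `superpose` and `classify` are illustrative stand-ins for the module's array-based functions):

```python
import math


def superpose(waves, t: float) -> float:
    """Sum (frequency, phase, amplitude) triples at a single time t,
    matching the Wave tuple layout in sine_wave_codex."""
    return sum(amp * math.sin(freq * t + phase) for freq, phase, amp in waves)


def classify(value: float, eps: float = 1e-3) -> str:
    # Same three-way threshold as classify_wave.
    if value > eps:
        return "truth"
    if value < -eps:
        return "false"
    return "paradox"


# Two equal waves in antiphase cancel, so the result falls in the paradox band.
waves = [(1.0, 0.0, 1.0), (1.0, math.pi, 1.0)]
print(classify(superpose(waves, 0.7)))  # paradox
```

The epsilon band is what makes the trinary reading robust: exact floating-point zeros are rare, but destructive interference reliably lands within `eps` of zero.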
129
packs/research-lab/math/lucidia_math_lab/trinary_logic.py
Normal file
@@ -0,0 +1,129 @@
"""Trinary logic engine supporting custom operators and visualizations."""

from __future__ import annotations

import importlib.util
import json
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, NamedTuple, Union

import numpy as np


_NX_SPEC = importlib.util.find_spec("networkx")
if _NX_SPEC is not None:  # pragma: no cover - exercised indirectly
    import networkx as nx  # type: ignore
else:  # pragma: no cover - exercised indirectly
    nx = None  # type: ignore


class SimpleEdge(NamedTuple):
    source: Any
    target: Any
    attrs: Dict[str, Any]


@dataclass
class SimpleDiGraph:
    """Lightweight stand-in when ``networkx`` is unavailable."""

    edges: List[SimpleEdge] = field(default_factory=list)

    def add_edge(self, source: Any, target: Any, **attrs: Any) -> None:
        self.edges.append(SimpleEdge(source, target, dict(attrs)))


GraphReturn = Union["nx.DiGraph", SimpleDiGraph]


TRIT_VALUES: List[int] = [-1, 0, 1]


@dataclass
class TrinaryLogicEngine:
    """Engine that evaluates trinary logic expressions."""

    operators: Dict[str, Dict]

    @classmethod
    def from_json(cls, path: str | Path) -> "TrinaryLogicEngine":
        with open(path, "r", encoding="utf-8") as fh:
            data = json.load(fh)
        return cls(data)

    def operate(self, op: str, a: int, b: int | None = None) -> int:
        """Evaluate a trinary operation."""

        if op == "NOT":
            if b is not None:
                raise ValueError("NOT takes a single argument")
            return int(self.operators[op][str(a)])
        if b is None:
            raise ValueError("Binary operator requires two arguments")
        return int(self.operators[op][str(a)][str(b)])

    def truth_table(self, op: str) -> np.ndarray:
        """Return the truth table for an operator as a matrix."""

        if op == "NOT":
            table = np.zeros((len(TRIT_VALUES), 2), dtype=int)
            for i, a in enumerate(TRIT_VALUES):
                table[i] = [a, self.operate(op, a)]
            return table
        table = np.zeros((len(TRIT_VALUES), len(TRIT_VALUES)), dtype=int)
        for i, a in enumerate(TRIT_VALUES):
            for j, b in enumerate(TRIT_VALUES):
                table[i, j] = self.operate(op, a, b)
        return table

    def truth_table_ascii(self, op: str) -> str:
        """Render a truth table as ASCII art."""

        table = self.truth_table(op)
        return "\n".join(" ".join(f"{v:+d}" for v in row) for row in table)

    def to_graph(self, op: str) -> GraphReturn:
        """Visualize operator relations as a directed graph.

        Returns a :class:`networkx.DiGraph` when the optional ``networkx``
        dependency is installed. Otherwise a :class:`SimpleDiGraph` is used.
        """

        graph: GraphReturn
        if nx is not None:
            graph = nx.DiGraph()  # type: ignore[call-arg]
        else:
            graph = SimpleDiGraph()

        if op == "NOT":
            for a in TRIT_VALUES:
                res = self.operate(op, a)
                graph.add_edge(a, res, op=op)
            return graph

        for a in TRIT_VALUES:
            for b in TRIT_VALUES:
                res = self.operate(op, a, b)
                graph.add_edge((a, b), res, op=op)
        return graph
22
packs/research-lab/math/lucidia_math_lab/trinary_operators.json
Normal file
@@ -0,0 +1,22 @@
{
  "AND": {
    "-1": {"-1": -1, "0": -1, "1": -1},
    "0": {"-1": -1, "0": 0, "1": 0},
    "1": {"-1": -1, "0": 0, "1": 1}
  },
  "OR": {
    "-1": {"-1": -1, "0": 0, "1": 1},
    "0": {"-1": 0, "0": 0, "1": 1},
    "1": {"-1": 1, "0": 1, "1": 1}
  },
  "XOR": {
    "-1": {"-1": -1, "0": 1, "1": 0},
    "0": {"-1": 1, "0": 0, "1": -1},
    "1": {"-1": 0, "0": -1, "1": 1}
  },
  "NOT": {
    "-1": 1,
    "0": 0,
    "1": -1
  }
}
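The JSON tables above encode each operator as nested maps keyed by stringified trits, which is exactly how `TrinaryLogicEngine.operate` looks values up. A self-contained sketch of that lookup over an inlined subset of the tables (`operate` here mirrors, but is not, the engine method):

```python
import json

# Inlined subset of trinary_operators.json for a standalone demo.
OPERATORS = json.loads("""{
  "AND": {
    "-1": {"-1": -1, "0": -1, "1": -1},
    "0": {"-1": -1, "0": 0, "1": 0},
    "1": {"-1": -1, "0": 0, "1": 1}
  },
  "NOT": {"-1": 1, "0": 0, "1": -1}
}""")


def operate(op: str, a: int, b=None) -> int:
    # Keys are stringified trits, matching TrinaryLogicEngine.operate.
    if b is None:
        return OPERATORS[op][str(a)]
    return OPERATORS[op][str(a)][str(b)]


print(operate("AND", 1, 0))  # 0 (unknown weakens true)
print(operate("NOT", -1))    # 1 (negation flips -1 and 1, fixes 0)
```

This AND behaves as a minimum over the ordering -1 < 0 < 1, so any false operand forces -1 and an unknown operand caps the result at 0.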
178
packs/research-lab/pack.yaml
Normal file
@@ -0,0 +1,178 @@
# BlackRoad OS Research Lab Pack
# Mathematical and Quantum Research Infrastructure

id: pack.research-lab
name: "Research Lab"
version: "1.0.0"
description: "Mathematical research, quantum computing experiments, and computational proofs"
owning_team: blackroad-research
repo: blackroad-os-pack-research-lab
status: active
risk_level: medium

# Domain areas
domains:
  - mathematics
  - quantum_computing
  - proof_systems
  - number_theory
  - computational_research

# Associated agents
agents:
  - agent.lucidia.core
  - agent.lucidia.math
  - agent.research.assistant

# Module registry
modules:
  # Core Mathematics
  math:
    - id: hilbert_core
      name: "Hilbert Space Symbolic Reasoning"
      path: math/hilbert_core.py
      description: "Quantum-inspired symbolic reasoning with density matrices and truth degrees"
      capabilities:
        - projector_construction
        - density_matrices
        - truth_degree_computation
        - luders_measurement
        - tensor_products
      dependencies: [numpy]

    - id: collatz
      name: "Collatz Conjecture Verification"
      path: math/collatz/
      description: "Distributed Collatz counterexample search with LLM-assisted verification"
      capabilities:
        - chunk_orchestration
        - worker_distribution
        - verification_pipeline
      components:
        - orchestrator.py
        - worker.py
        - verifier.py
        - db.py

    - id: linmath
      name: "Linear Mathematics Library"
      path: math/linmath/
      description: "C header library for vectors, matrices, and transformations"
      capabilities:
        - vector_operations
        - matrix_operations
        - quaternions
        - transformations
      language: c

    - id: lucidia_math_forge
      name: "Lucidia Math Forge"
      path: math/lucidia_math_forge/
      description: "Symbolic proof engine with contradiction tracking"
      capabilities:
        - symbolic_proofs
        - operator_definitions
        - number_theory
        - dimensional_analysis
        - fractal_generation
      components:
        - proofs.py
        - operators.py
        - numbers.py
        - dimensions.py
        - fractals.py
        - sinewave.py

    - id: lucidia_math_lab
      name: "Lucidia Math Lab"
      path: math/lucidia_math_lab/
      description: "Experimental mathematics with interactive exploration"
      capabilities:
        - prime_exploration
        - trinary_logic
        - quantum_finance
        - recursion_experiments
        - iterative_construction
      components:
        - prime_explorer.py
        - trinary_logic.py
        - quantum_finance.py
        - recursion_sandbox.py
        - iterative_math_build.py

  # Quantum Computing
  quantum:
    - id: lucidia_quantum
      name: "Lucidia Quantum Core"
      path: quantum/lucidia_quantum/
      description: "Quantum simulation and computation"

    - id: quantum_engine
      name: "Quantum Engine"
      path: quantum/quantum_engine/
      description: "Quantum circuit simulation and execution"

  # Experiments
  experiments:
    - id: br_math
      name: "BlackRoad Math Experiments"
      path: experiments/br_math/
      description: "Experimental mathematical concepts and gap analysis"
      components:
        - godel_gap.py     # Gödel incompleteness gap scoring
        - qt3.py           # Quantum computation experiments
        - abacus_gate.py   # Gate operations
        - noether_care.py  # Noether's theorem applications

# Research Areas
research_areas:
  number_theory:
    description: "Prime numbers, Collatz conjecture, Riemann hypothesis"
    modules: [collatz, lucidia_math_lab]

  proof_systems:
    description: "Symbolic proofs, contradiction detection, formal methods"
    modules: [lucidia_math_forge, hilbert_core]

  quantum_computing:
    description: "Quantum circuits, simulation, quantum-classical hybrid"
    modules: [lucidia_quantum, quantum_engine]

  computational_geometry:
    description: "Linear algebra, transformations, spatial computing"
    modules: [linmath, hilbert_core]

  logic_systems:
    description: "Trinary logic, non-classical reasoning, fuzzy logic"
    modules: [lucidia_math_lab]

# Integration points
integrations:
  - qlm_lab:
      description: "Integration with Quantum Logic Math Lab in BlackRoad-Operating-System"
      path: /qlm_lab
      sync: bidirectional

  - agents_registry:
      description: "Agent personality integration for research assistants"
      path: /blackroad-os-agents-work
      sync: outbound

# Scheduled tasks
schedules:
  daily_verification:
    cron: "0 2 * * *"
    task: "Run Collatz verification status check"
    module: collatz

  weekly_analysis:
    cron: "0 3 * * 0"
    task: "Generate research progress report"
    module: all

# Documentation
docs:
  - README.md
  - RESEARCH_AREAS.md
  - API_REFERENCE.md
  - EXPERIMENT_LOG.md
44
packs/research-lab/quantum/lucidia_quantum/__init__.py
Normal file
@@ -0,0 +1,44 @@
"""Quantum ML module for Lucidia.

This package is optional and guarded by the ``LUCIDIA_QML`` environment
variable. Only local simulators are used; remote providers are disabled
when ``LUCIDIA_QML_REMOTE`` is unset or false.
"""

from __future__ import annotations

import os
from typing import Dict, Type

from .backends import AerCPUBackend, QuantumBackend

_QML_ENABLED = os.getenv("LUCIDIA_QML", "off").lower() in {"1", "true", "on"}
_REMOTE_OK = os.getenv("LUCIDIA_QML_REMOTE", "false").lower() in {"1", "true", "on"}

# Registry of available backends
_BACKENDS: Dict[str, Type[QuantumBackend]] = {"aer_cpu": AerCPUBackend}


def is_enabled() -> bool:
    """Return True if the Quantum ML feature flag is on."""

    return _QML_ENABLED


def get_backend(name: str = "aer_cpu") -> QuantumBackend:
    """Instantiate and return a backend by name.

    Parameters
    ----------
    name:
        Registered backend key. Defaults to ``aer_cpu``.
    """

    if not _QML_ENABLED:
        raise RuntimeError("Quantum ML disabled")
    if not _REMOTE_OK and name not in _BACKENDS:
        raise RuntimeError("Remote backends are disabled")
    backend_cls = _BACKENDS.get(name)
    if backend_cls is None:
        raise ValueError(f"Unknown backend '{name}'")
    return backend_cls()
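The feature gating above hinges on one helper pattern: an environment variable is read with a safe default and compared against a small truthiness set. A dependency-free sketch of that pattern (the names `flag_enabled`, `DEMO_QML`, and `DEMO_REMOTE` are illustrative, not part of the package):

```python
import os


def flag_enabled(name: str, default: str = "off") -> bool:
    # Same truthiness set the package uses for LUCIDIA_QML / LUCIDIA_QML_REMOTE.
    return os.getenv(name, default).lower() in {"1", "true", "on"}


os.environ["DEMO_QML"] = "TRUE"
print(flag_enabled("DEMO_QML"))     # True: comparison is case-insensitive
print(flag_enabled("DEMO_REMOTE"))  # False: unset falls back to "off"
```

One caveat the module inherits from reading the environment at import time: `_QML_ENABLED` and `_REMOTE_OK` are frozen when the package first loads, so changing the variables afterwards has no effect without a reimport.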
`packs/research-lab/quantum/lucidia_quantum/backends.py` (new file, 50 lines)

```python
"""Backend interfaces for Quantum ML.

Currently only the local Aer CPU simulator is implemented. GPU support is
stubbed out and disabled unless ``LUCIDIA_QML_GPU`` is set.
"""

from __future__ import annotations

import os
from abc import ABC, abstractmethod
from typing import Any

from qiskit import QuantumCircuit


class QuantumBackend(ABC):
    """Abstract quantum backend."""

    @abstractmethod
    def run(self, circuit: QuantumCircuit, shots: int = 1024, seed: int | None = None) -> Any:
        """Execute a circuit and return the raw result."""


class AerCPUBackend(QuantumBackend):
    """Qiskit Aer CPU simulator backend."""

    def __init__(self) -> None:
        from qiskit_aer import AerSimulator

        self.simulator = AerSimulator(method="automatic")

    def run(self, circuit: QuantumCircuit, shots: int = 1024, seed: int | None = None) -> Any:
        # Pass the seed as an Aer run option; ``QuantumCircuit`` itself has
        # no ``seed_simulator`` method.
        run_opts: dict[str, Any] = {"shots": shots}
        if seed is not None:
            run_opts["seed_simulator"] = seed
        job = self.simulator.run(circuit, **run_opts)
        return job.result()


class AerGPUBackend(AerCPUBackend):
    """Stub GPU backend. Requires CUDA and qiskit-aer-gpu."""

    def __init__(self) -> None:
        if os.getenv("LUCIDIA_QML_GPU", "off").lower() not in {"1", "true", "on"}:
            raise RuntimeError("GPU backend disabled")
        super().__init__()
        try:
            self.simulator.set_options(device="GPU")  # type: ignore[attr-defined]
        except Exception as exc:  # pragma: no cover - fallback path
            raise RuntimeError("GPU backend not available") from exc
```
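The GPU stub fails closed on the `LUCIDIA_QML_GPU` flag. The same opt-in check can be exercised in isolation; the helper name `gpu_enabled` below is ours, not part of the pack:

```python
import os

def gpu_enabled(env: str = "LUCIDIA_QML_GPU") -> bool:
    # Only an explicit opt-in value enables the GPU path; an unset or
    # unrecognised value keeps it disabled (fail closed).
    return os.getenv(env, "off").lower() in {"1", "true", "on"}

os.environ["LUCIDIA_QML_GPU"] = "TRUE"
print(gpu_enabled())  # case-insensitive opt-in
del os.environ["LUCIDIA_QML_GPU"]
print(gpu_enabled())  # unset defaults to disabled
```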
`packs/research-lab/quantum/lucidia_quantum/kernels.py` (new file, 26 lines)

```python
"""Quantum kernel utilities."""

from __future__ import annotations

from typing import Any, Optional

import numpy as np
from qiskit_machine_learning.algorithms import PegasosQSVC
from qiskit_machine_learning.kernels import QuantumKernel

from .backends import AerCPUBackend, QuantumBackend


def fit_qsvc(
    x: np.ndarray,
    y: np.ndarray,
    kernel_opts: Optional[dict[str, Any]] = None,
    backend: Optional[QuantumBackend] = None,
) -> PegasosQSVC:
    """Train a PegasosQSVC on the given data using a local quantum kernel."""
    backend = backend or AerCPUBackend()
    kernel = QuantumKernel(quantum_instance=backend.simulator, **(kernel_opts or {}))
    model = PegasosQSVC(quantum_kernel=kernel)
    model.fit(x, y)
    return model
```
`packs/research-lab/quantum/lucidia_quantum/policies.py` (new file, 18 lines)

```python
"""Resource policies for quantum execution."""

from __future__ import annotations

from qiskit import QuantumCircuit

MAX_QUBITS = 8
MAX_DEPTH = 40
MAX_SHOTS = 1024


def validate_circuit(circuit: QuantumCircuit) -> None:
    """Raise ``ValueError`` if the circuit exceeds policy limits."""
    if circuit.num_qubits > MAX_QUBITS:
        raise ValueError("too many qubits")
    if circuit.depth() > MAX_DEPTH:
        raise ValueError("circuit too deep")
```
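The policy limits can be exercised without Qiskit installed; `FakeCircuit` below is a hypothetical stand-in exposing only the two attributes the check consults:

```python
from dataclasses import dataclass

MAX_QUBITS = 8
MAX_DEPTH = 40

@dataclass
class FakeCircuit:
    # Hypothetical stand-in for qiskit.QuantumCircuit: just the two
    # properties the policy check reads.
    num_qubits: int
    circuit_depth: int

    def depth(self) -> int:
        return self.circuit_depth

def validate_circuit(circuit) -> None:
    """Raise ValueError if the circuit exceeds policy limits."""
    if circuit.num_qubits > MAX_QUBITS:
        raise ValueError("too many qubits")
    if circuit.depth() > MAX_DEPTH:
        raise ValueError("circuit too deep")

validate_circuit(FakeCircuit(4, 10))  # within limits: no exception
try:
    validate_circuit(FakeCircuit(9, 10))
except ValueError as exc:
    print(exc)  # the qubit limit trips first
```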
`packs/research-lab/quantum/lucidia_quantum/qnn.py` (new file, 58 lines)

```python
"""Helper builders for Qiskit EstimatorQNN and SamplerQNN."""

from __future__ import annotations

from typing import Optional

from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit_machine_learning.neural_networks import EstimatorQNN, SamplerQNN

from .backends import AerCPUBackend, QuantumBackend


def build_estimator_qnn(
    feature_map: QuantumCircuit,
    ansatz: QuantumCircuit,
    observable: QuantumCircuit | None,
    input_size: int,
    weight_size: int,
    backend: Optional[QuantumBackend] = None,
) -> EstimatorQNN:
    """Construct an :class:`EstimatorQNN` with gradients enabled."""
    backend = backend or AerCPUBackend()
    input_params = ParameterVector("x", length=input_size)
    weight_params = ParameterVector("w", length=weight_size)
    return EstimatorQNN(
        feature_map=feature_map,
        ansatz=ansatz,
        observable=observable,
        input_params=input_params,
        weight_params=weight_params,
        backend=backend.simulator,
        input_gradients=True,
    )


def build_sampler_qnn(
    feature_map: QuantumCircuit,
    ansatz: QuantumCircuit,
    input_size: int,
    weight_size: int,
    num_classes: int,
    backend: Optional[QuantumBackend] = None,
) -> SamplerQNN:
    """Construct a probabilistic :class:`SamplerQNN`."""
    backend = backend or AerCPUBackend()
    input_params = ParameterVector("x", length=input_size)
    weight_params = ParameterVector("w", length=weight_size)
    return SamplerQNN(
        feature_map=feature_map,
        ansatz=ansatz,
        input_params=input_params,
        weight_params=weight_params,
        output_shape=num_classes,
        backend=backend.simulator,
    )
```
`packs/research-lab/quantum/lucidia_quantum/tests/test_qml.py` (new file, 51 lines)

```python
"""Basic tests for the Quantum ML module."""

from __future__ import annotations

import importlib

import numpy as np
import pytest

pytest.importorskip("torch")
pytest.importorskip("qiskit")
pytest.importorskip("qiskit_machine_learning")

import torch
from qiskit.circuit.library import RealAmplitudes, ZZFeatureMap

import lucidia.quantum as qml
from lucidia.quantum.kernels import fit_qsvc
from lucidia.quantum.qnn import build_sampler_qnn
from lucidia.quantum.torch_bridge import QModule


def test_feature_flag_off(monkeypatch):
    monkeypatch.setenv("LUCIDIA_QML", "off")
    importlib.reload(qml)
    assert not qml.is_enabled()
    with pytest.raises(RuntimeError):
        qml.get_backend()


def test_sampler_qnn_gradients(monkeypatch):
    monkeypatch.setenv("LUCIDIA_QML", "on")
    importlib.reload(qml)
    feature_map = ZZFeatureMap(2)
    ansatz = RealAmplitudes(2, reps=1)
    qnn = build_sampler_qnn(
        feature_map, ansatz, input_size=2, weight_size=ansatz.num_parameters, num_classes=2
    )
    module = QModule(qnn, seed=1)
    x = torch.zeros((1, 2), requires_grad=True)
    out = module(x)
    out.backward(torch.ones_like(out))
    assert torch.all(torch.isfinite(x.grad))


def test_qsvc_training(monkeypatch):
    monkeypatch.setenv("LUCIDIA_QML", "on")
    importlib.reload(qml)
    x = np.array([[0, 0], [1, 1]])
    y = np.array([0, 1])
    model = fit_qsvc(x, y)
    preds = model.predict(x)
    assert preds.shape == (2,)
```
`packs/research-lab/quantum/lucidia_quantum/torch_bridge.py` (new file, 37 lines)

```python
"""PyTorch bridge for Quantum Neural Networks."""

from __future__ import annotations

import os
import random
from typing import Any

import numpy as np
import torch
from qiskit_machine_learning.connectors import TorchConnector
from qiskit_machine_learning.neural_networks import NeuralNetwork


class QModule(torch.nn.Module):
    """Wrap a Qiskit ``NeuralNetwork`` as a ``torch.nn.Module``."""

    def __init__(self, qnn: NeuralNetwork, seed: int | None = 42) -> None:
        super().__init__()
        if seed is not None:
            random.seed(seed)
            np.random.seed(seed)
            torch.manual_seed(seed)
        self.qnn = qnn
        self.connector = TorchConnector(neural_network=self.qnn)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not isinstance(x, torch.Tensor):  # pragma: no cover - defensive
            raise TypeError("expected torch.Tensor")
        return self.connector(x)

    def to(self, device: Any) -> "QModule":  # type: ignore[override]
        if device not in {"cpu", torch.device("cpu")}:  # pragma: no cover - gpu path
            if os.getenv("LUCIDIA_QML_GPU", "off").lower() not in {"1", "true", "on"}:
                raise RuntimeError("GPU disabled")
        self.connector.to(device)
        return super().to(device)
```
`packs/research-lab/quantum/quantum_engine/__init__.py` (new file, 9 lines)

```python
"""Lucidia Quantum Engine."""
from .policy import enforce_import_block, guard_env, set_seed

enforce_import_block()
__all__ = [
    'enforce_import_block',
    'guard_env',
    'set_seed',
]
```
`packs/research-lab/quantum/quantum_engine/cli.py` (new file, 64 lines)

```python
"""CLI entrypoint for the quantum engine."""
from __future__ import annotations

import argparse

import torch

from .policy import guard_env, set_seed
from .models import PQCClassifier, QAOAModel, VQEModel
from .device import Device


def _run(args: argparse.Namespace) -> None:
    model_map = {
        'vqe': VQEModel,
        'qaoa': QAOAModel,
        'qkernel': PQCClassifier,
    }
    model = model_map[args.example](n_wires=args.wires)
    x = torch.zeros(args.shots, 1, device=args.device)
    out = model(x)
    print(out.mean().item())


def _bench(args: argparse.Namespace) -> None:
    print(f"running {args.suite} bench")


def _qasm(args: argparse.Namespace) -> None:
    dev = Device(n_wires=2)
    with open(args.outfile, 'w', encoding='utf-8') as fh:
        fh.write(dev.qasm())


def main() -> None:
    guard_env()
    parser = argparse.ArgumentParser(prog='lucidia-quantum')
    parser.add_argument('--seed', type=int, default=0)
    sub = parser.add_subparsers(dest='cmd', required=True)

    runp = sub.add_parser('run')
    runp.add_argument('--example', choices=['vqe', 'qaoa', 'qkernel'], required=True)
    runp.add_argument('--wires', type=int, default=4)
    runp.add_argument('--shots', type=int, default=1024)
    runp.add_argument('--device', type=str, default='cpu')

    benchp = sub.add_parser('bench')
    benchp.add_argument('--suite', choices=['smoke', 'full'], default='smoke')

    qasmp = sub.add_parser('qasm')
    qasmp.add_argument('--in', dest='infile', required=True)
    qasmp.add_argument('--out', dest='outfile', required=True)

    args = parser.parse_args()
    set_seed(args.seed)
    if args.cmd == 'run':
        _run(args)
    elif args.cmd == 'bench':
        _bench(args)
    elif args.cmd == 'qasm':
        _qasm(args)


if __name__ == '__main__':  # pragma: no cover
    main()
```
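The subcommand layout parses as sketched below using only the stdlib; note that the global `--seed` flag must precede the subcommand:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the CLI's shape: one global flag plus a required subcommand
    # with its own options.
    parser = argparse.ArgumentParser(prog="lucidia-quantum")
    parser.add_argument("--seed", type=int, default=0)
    sub = parser.add_subparsers(dest="cmd", required=True)
    runp = sub.add_parser("run")
    runp.add_argument("--example", choices=["vqe", "qaoa", "qkernel"], required=True)
    runp.add_argument("--wires", type=int, default=4)
    return parser

args = build_parser().parse_args(["--seed", "7", "run", "--example", "vqe"])
print(args.cmd, args.example, args.seed, args.wires)  # -> run vqe 7 4
```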
`packs/research-lab/quantum/quantum_engine/device.py` (new file, 19 lines)

```python
"""Device wrapper around torchquantum.QuantumDevice."""
from __future__ import annotations

from third_party import torchquantum as tq

from .policy import guard_env, set_seed


class Device:
    """Lightweight wrapper providing deterministic setup and QASM export."""

    def __init__(self, n_wires: int, bsz: int = 1, device: str = 'cpu', seed: int | None = None):
        guard_env()
        set_seed(seed)
        self.qdev = tq.QuantumDevice(n_wires=n_wires, bsz=bsz, device=device)

    def qasm(self) -> str:
        """Return a QASM-like string of the operations."""
        return self.qdev.qasm()
```
`packs/research-lab/quantum/quantum_engine/layers.py` (new file, 21 lines)

```python
"""Layer helpers wrapping TorchQuantum primitives."""
from __future__ import annotations

from torch import nn
from third_party import torchquantum as tq


class RandomLayer(tq.RandomLayer):
    """Expose torchquantum.RandomLayer."""


class QFTLayer(tq.QFTLayer):
    """Expose torchquantum.QFTLayer."""


class QutritEmulator(nn.Module):
    """Encode a trit with two qubits and penalise the forbidden state."""

    def forward(self, qdev, wires):
        # Probability of reaching the forbidden |11> state acts as a penalty
        return qdev.prob_11(wires)
```
`packs/research-lab/quantum/quantum_engine/metrics.py` (new file, 26 lines)

```python
"""Utility metrics for quantum circuits."""
from __future__ import annotations

import torch


def expval(qdev) -> torch.Tensor:
    return qdev.measure_all()


def kl_div(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    p = p + 1e-9
    q = q + 1e-9
    return torch.sum(p * (torch.log(p) - torch.log(q)), dim=-1)


def energy(values: torch.Tensor) -> torch.Tensor:
    return values.sum(dim=-1)


def depth(qdev) -> int:
    return len(qdev.ops)


def two_qubit_count(qdev) -> int:
    return 0
```
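`kl_div` adds a small epsilon before taking logs so zero-probability bins don't produce `-inf`. A plain-Python version reproduces the same quantity for one distribution pair (the example values are illustrative):

```python
import math

def kl_div(p, q, eps=1e-9):
    # sum_i p_i * (log(p_i + eps) - log(q_i + eps)), matching the
    # torch version's epsilon-shifted logs.
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))

print(round(kl_div([0.5, 0.5], [0.9, 0.1]), 4))  # positive: p and q differ
print(round(kl_div([0.5, 0.5], [0.5, 0.5]), 4))  # identical distributions -> 0.0
```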
`packs/research-lab/quantum/quantum_engine/models.py` (new file, 44 lines)

```python
"""Example quantum models."""
from __future__ import annotations

import torch
import torch.nn.functional as F
from torch import nn
from third_party import torchquantum as tq


class PQCClassifier(nn.Module):
    def __init__(self, n_wires: int = 4):
        super().__init__()
        self.n_wires = n_wires
        self.measure = tq.MeasureAll(tq.PauliZ)
        self.rx0 = tq.RX(True, True)
        self.ry0 = tq.RY(True, True)
        self.rz0 = tq.RZ(True, True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz = x.shape[0]
        qdev = tq.QuantumDevice(n_wires=self.n_wires, bsz=bsz, device=x.device)
        self.rx0(qdev, wires=0)
        self.ry0(qdev, wires=1)
        self.rz0(qdev, wires=2)
        meas = self.measure(qdev)
        if meas.shape[-1] < 4:
            pad = torch.zeros(bsz, 4 - meas.shape[-1], device=meas.device, dtype=meas.dtype)
            meas = torch.cat([meas, pad], dim=-1)
        logits = meas[..., :4].reshape(bsz, 2, 2).sum(-1)
        return F.log_softmax(logits, dim=1)


class VQEModel(nn.Module):
    """Placeholder VQE model."""

    def __init__(self, n_wires: int = 4):
        # Accept n_wires so the CLI can instantiate every model uniformly.
        super().__init__()
        self.n_wires = n_wires

    def forward(self, *args, **kwargs):
        return torch.tensor(0.0)


class QAOAModel(nn.Module):
    """Placeholder QAOA model."""

    def __init__(self, n_wires: int = 4):
        # Accept n_wires so the CLI can instantiate every model uniformly.
        super().__init__()
        self.n_wires = n_wires

    def forward(self, *args, **kwargs):
        return torch.tensor(0.0)
```
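`PQCClassifier.forward` pads the measurement vector to four entries and pools adjacent pairs into two class logits. The tensor bookkeeping reduces to this list-level sketch for a single sample (`pad_and_pool` is our name for it, not part of the pack):

```python
def pad_and_pool(meas, width=4):
    # Right-pad with zeros to `width`, then sum adjacent pairs: the
    # list analogue of reshape(bsz, 2, 2).sum(-1) for one sample.
    meas = list(meas) + [0.0] * (width - len(meas))
    return [meas[0] + meas[1], meas[2] + meas[3]]

print(pad_and_pool([0.2, 0.4, 0.1]))  # third value pairs with the zero pad
```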
`packs/research-lab/quantum/quantum_engine/policy.py` (new file, 56 lines)

```python
"""Runtime guardrails for the quantum engine."""
from __future__ import annotations

import builtins
import os
import random
import socket
from typing import Optional

MAX_WIRES = 8
MAX_SHOTS = 2048
TIMEOUT = 60

DENYLIST = (
    'torchquantum.plugins',
    'qiskit',
    'qiskit_ibm_runtime',
)


def enforce_import_block() -> None:
    """Block imports of disallowed modules."""
    real_import = builtins.__import__

    def guarded(name, *args, **kwargs):
        for bad in DENYLIST:
            if name.startswith(bad):
                raise ImportError(f'{bad} is disabled')
        return real_import(name, *args, **kwargs)

    builtins.__import__ = guarded


def guard_env() -> None:
    """Fail closed on hardware flags and block outbound sockets."""
    if os.getenv('LUCIDIA_QHW') == '1':
        raise RuntimeError('Hardware backends are disabled')

    class _BlockedSocket(socket.socket):
        def __init__(self, *args, **kwargs):
            raise RuntimeError('Network access disabled')

    socket.socket = _BlockedSocket


def set_seed(seed: Optional[int] = None) -> int:
    if seed is None:
        seed = 0
    random.seed(seed)
    try:
        import torch

        torch.manual_seed(seed)
    except Exception:
        pass
    return seed
```
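`enforce_import_block` works by monkey-patching `builtins.__import__` with a prefix denylist. The same mechanism in a self-contained sketch, using a hypothetical denylisted name and a restore handle that the real module deliberately does not keep:

```python
import builtins

DENYLIST = ("some_banned_pkg",)  # hypothetical module name

def enforce_import_block():
    # Wrap the real __import__ and refuse any denylisted prefix.
    real_import = builtins.__import__

    def guarded(name, *args, **kwargs):
        for bad in DENYLIST:
            if name.startswith(bad):
                raise ImportError(f"{bad} is disabled")
        return real_import(name, *args, **kwargs)

    builtins.__import__ = guarded
    return real_import  # handle so this demo can undo the hook

real = enforce_import_block()
try:
    __import__("some_banned_pkg")
except ImportError as exc:
    print(exc)  # -> some_banned_pkg is disabled
finally:
    builtins.__import__ = real  # restore for the rest of the process
print(__import__("json").__name__)  # other imports pass through untouched
```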
`packs/research-lab/quantum/quantum_engine/search.py` (new file, 20 lines)

```python
"""Circuit search scaffolding."""
from __future__ import annotations

from dataclasses import dataclass, field
from typing import List

from third_party import torchquantum as tq


@dataclass
class SearchSpace:
    layers: List[tq.RandomLayer] = field(default_factory=list)

    def add_layer(self, layer: tq.RandomLayer) -> None:
        self.layers.append(layer)


def noise_aware_score(circuit: SearchSpace, noise: float | None = None) -> float:
    """Placeholder for future noise-aware scoring."""
    return float(len(circuit.layers))
```