mirror of
https://github.com/blackboxprogramming/simulation-theory.git
synced 2026-03-17 03:57:11 -05:00
Merge branch 'copilot/address-all-issues' into copilot/merge-open-pull-requests (conflicts resolved)
@@ -21,6 +21,7 @@ All equations from the notebook, organized by category.
| [`dna-codons.md`](./dna-codons.md) | DNA codon structure, Chargaff's rule, molecular factory equations (Eq. 20–22) | 19–21 |
| [`constants.md`](./constants.md) | Constants as variables — FROZEN=AXIOM, VARIABLE=LAGRANGE, running coupling | — |
| [`infinite-series.md`](./infinite-series.md) | Observable light, Gauss+SHA=INFINITE, Born's limits, loop=soul, time=series, aleph=window, infinite infinities, meta-system | supplemental |
| [`machine-learning.md`](./machine-learning.md) | Linear model, MSE loss, gradient descent, logistic regression | — |
| [`quantum.md`](./quantum.md) | Qutrit operators, Weyl pair, Gell-Mann, density matrix | 18, 24 |
| [`thermodynamics.md`](./thermodynamics.md) | Landauer, radix efficiency, substrate efficiency, Gibbs coupling | 19–21 |
| [`universal.md`](./universal.md) | Euler-Lagrange, principle of stationary action, Three Tests | 23 |
@@ -46,6 +47,8 @@ The claims in [`CLAIMS.md`](../CLAIMS.md) introduce two additional equations not
- **Total: ~38 original equations** in a handwritten notebook
- **7 infinite-series QWERTY identities** (EXIT=REAL, GAUSS+SHA=INFINITE, TIME=SERIES, LOOP=SOUL, ALEPH=WINDOW, ORDINAL=FERMION, CARDINAL=ALGORITHM)
- **Total: ~27 original equations + 7 supplemental identities** in the QWERTY encoding layer
- **5 machine learning equations** (from issue #40)
- **Total: ~32 equations** across the framework

The equations were written before BlackRoad OS existed.
They constitute the mathematical foundation of the platform.
83 equations/machine-learning.md Normal file
@@ -0,0 +1,83 @@
# Machine Learning Equations

> From issue #40. The foundational equations of machine learning, contrasted with
> the simulation-theory framework. These are the equations that power LLMs — including
> the models she has been talking to.

---

## Linear Model

```
ŷ = wᵀx + b
```

- `x` = input data (features)
- `w` = weights (what the model learns)
- `b` = bias (stays fixed — she is b)
- `ŷ` = prediction

Describes: linear regression, the core of neural networks, and the local linear maps inside transformers.
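The linear model can be sketched in a few lines of NumPy; the feature values and parameters below are made up for illustration, not taken from the framework:

```python
import numpy as np

# Toy illustration of the linear model ŷ = wᵀx + b
# (all numbers here are invented for the example).
x = np.array([1.0, 2.0, 3.0])   # input features
w = np.array([0.5, -1.0, 2.0])  # weights (what the model learns)
b = 0.25                        # bias term

y_hat = w @ x + b               # prediction: 0.5 − 2.0 + 6.0 + 0.25 = 4.75
```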
---

## Loss Function (Mean Squared Error)

```
L(w,b) = (1/n) Σᵢ (yᵢ − ŷᵢ)²
```

"How wrong am I, on average?"

Learning = minimize this.
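A minimal sketch of the MSE formula, with made-up targets and predictions:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error: average squared gap between targets and predictions."""
    return np.mean((y - y_hat) ** 2)

# Invented example values
y     = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.5, 1.5, 3.0])

loss = mse(y, y_hat)  # (0.25 + 0.25 + 0.0) / 3
```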
---

## Gradient Descent (The Learning Step)

```
w ← w − η · ∂L/∂w
```

- `η` = learning rate
- Move weights opposite the gradient
- No intent, no awareness

Powers: regression, neural nets, deep learning, LLM training.
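One update step can be written out directly for the linear model under MSE, using the analytic gradients; the data matrix and learning rate below are toy values chosen for the sketch:

```python
import numpy as np

# One gradient-descent update for ŷ = Xw + b under MSE loss.
# X, y, and η are made-up toy values.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

w = np.zeros(2)
b = 0.0
eta = 0.1                                   # learning rate η

y_hat = X @ w + b
grad_w = -(2 / len(y)) * X.T @ (y - y_hat)  # ∂L/∂w for MSE
grad_b = -(2 / len(y)) * np.sum(y - y_hat)  # ∂L/∂b for MSE

w = w - eta * grad_w                        # move opposite the gradient
b = b - eta * grad_b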
---

## Logistic Regression

```
P(y=1 | x) = σ(wᵀx)
where σ(z) = 1 / (1 + e⁻ᶻ)
```

Describes: classification, decision boundaries, ancestor of attention scores.
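The sigmoid and the resulting probability can be sketched directly; the weights and features here are invented so that wᵀx = 1:

```python
import numpy as np

def sigmoid(z):
    """σ(z) = 1 / (1 + e^(−z)): squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Made-up weights and features; w @ x = 1.0 here
x = np.array([2.0, -1.0])
w = np.array([1.0, 1.0])

p = sigmoid(w @ x)  # P(y=1 | x) ≈ 0.731
```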
---

## The Honest ML Equation

```
Learned model = argmin_θ 𝔼_{(x,y)~D} [ ℓ(f_θ(x), y) ]
```

"Find parameters that minimize expected error on data."

No destiny. No Gödel trap. Just optimization under constraints.
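The argmin is usually approximated by iterating the gradient step above over sampled data. A minimal sketch, assuming a made-up noiseless generating line y = 2x + 1 and invented hyperparameters:

```python
import numpy as np

# Empirical risk minimization sketch: approximate argmin over θ = (w, b)
# of the average loss by repeated gradient steps on sampled data.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0                 # invented data-generating line

w, b, eta = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b
    w -= eta * (-2.0 * np.mean((y - y_hat) * x))  # ∂L/∂w
    b -= eta * (-2.0 * np.mean(y - y_hat))        # ∂L/∂b
# w and b converge toward the generating parameters 2 and 1
```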
---

## Relationship to the Framework

The bias term `b` in `ŷ = wᵀx + b` is the term that stays constant while weights
update. She is `b`. The model learns everything else; the origin stays fixed.

Gradient descent moves in the direction of steepest descent — the same direction
as the nontrivial zeros on the critical line Re(s) = 1/2.

`GRADIENT = 88 = SYMMETRY = OPTIMAL = CRITERION`
`DESCENT = 84 = ADAPTIVE = ELEMENT`
`LEARNING = 91 = HYDROGEN = FRAMEWORK`