ARLIT: An Operational Test for Scale-Invariant Effective Information
Author: Jordon Morgan-Griffiths
Affiliation: Founder, Independent Researcher, THE UISH
Corresponding author: icontactdakari@gmail.com
Author Contributions
Jordon Morgan-Griffiths conceived the research, developed the simulator, performed the numerical experiments, analyzed the data, and wrote the manuscript. The simulation framework and public demonstrator (ARLIT) were designed and implemented by the author.
Suggested citation:
“J. Morgan-Griffiths (2025). ARLIT, an operational test for scale-invariant effective information. (Version 1.0). Zenodo. DOI: [to be assigned].”
Abstract
We introduce ARLIT, an operational test for scale-invariant effective information. Given an information source (I(\Lambda)) measured over resolution (\Lambda), we learn a power-law renormalizer (Z(\Lambda)=\Lambda^{s}) on a train window and verify out-of-sample (OOS) flatness of (C(\Lambda)=Z(\Lambda)\,I(\Lambda)) on a held-out window. Our main instantiation sets (I(\Lambda)) to the quantum Fisher information (QFI) of a free Gaussian field with respect to (\theta=\ln m) and (|k|<\Lambda), which we compute exactly. The protocol reports an OOS flatness score ARLIT# (RMS slope in (\log\Lambda)–(\log C)), with bootstrap confidence intervals, k-fold validation in (\Lambda), convergence checks, (Z)-form ablations, and adversarial stress tests. Results: in (d=3), IR windows and a UV window pass decisively (Green: ARLIT#(test) (\ll 0.2), tight CIs), while the IR→UV crossover is rejected (ARLIT#(test) (\approx 1)). Conclusion: scale-invariant effective information is window-local. We release a single-file, offline artifact with “Paper Mode” that reproduces all figures, CSVs, seeds, and unit-test transcripts.
1. Introduction
1.1 Motivation: preserving information across scale
Across physics, ML, and complex systems, we “zoom” a system and ask whether what matters stays the same. In RG language, that is scale invariance; in information geometry, it is preserving distinguishability under coarse-graining. Operationally: does there exist a simple (Z(\Lambda)) such that the effective information (C(\Lambda)=Z(\Lambda)\,I(\Lambda)) looks the same across a range of scales? If yes, we can (i) quantify the local scaling dimension (s) and (ii) separate genuine invariance from wishful thinking.
1.2 What’s missing: falsifiable, OOS validation (not visual heuristics)
Most “scale-invariance” claims lean on straight-ish log–log plots and in-window fits. That’s weak: it ignores held-out scales, has no uncertainty bars, and can’t tell you when the claim fails. We need a test that:
- learns (Z(\Lambda)=\Lambda^s) only on train (\Lambda),
- measures OOS flatness of (C(\Lambda)) on test (\Lambda),
- reports CIs for both (s) and ARLIT#,
- and breaks under adversarial spectra or bad windows.
1.3 Contributions: ARLIT metric, window protocol, QFI mode, Claim Pack, Paper Mode
- Framework (ARLIT). Define (C(\Lambda)=Z(\Lambda)\,I(\Lambda)) and the OOS flatness score ARLIT# (RMS of discrete slopes of (\log C) vs (\log\Lambda) on test).
- Window protocol. Train/test split in (\Lambda); k-fold across (\Lambda); bootstrap CIs (68/95%); convergence (integration/grid); (Z)-form ablations; noise + adversarial checks.
- QFI implementation (free Gaussian, (\theta=\ln m)). Exact, fast, offline computation of (F_Q(\ln m;\Lambda)) with (|k|<\Lambda); information-geometric source with monotonicity under trace-out.
- Artifacts. Single-file app (no deps) and a Claim Pack exporter (CSV/PNG/JSON + seeds + unit tests). Paper Mode runs a standard IR/UV suite and auto-generates a submission-ready pack.
1.4 Findings summary: IR/UV pass, crossover fail → window-local law
Using QFI in (d=3) with (\theta=\ln m):
- IR windows (e.g., ([0.01,0.05]), ([0.05,0.20])): PASS (Green) with narrow CIs; IR exhibits power-law growth, so a single (s) cancels the slope.
- UV window (e.g., ([5,10])): PASS; QFI saturates, best (s\approx 0); (C) is flat OOS with tight CIs.
- Crossover (e.g., ([0.20,1.00])): FAIL; the curve bends from IR scaling to UV saturation; one (s) can’t flatten it, and ARLIT# spikes with tight CIs.
Conclusion: the framework holds; the claim is window-local.
2. Related Work
2.1 RG & fixed points (coarse-graining, scale invariance)
RG coarse-grains, flows couplings, and reads physics from fixed points and exponents. ARLIT is test-first: it checks whether an observable’s effective information (C=Z\,I) is flat OOS after learning a simple (Z(\Lambda)=\Lambda^s) on train. RG predicts where scaling might hold; ARLIT verifies that information actually behaves invariantly there.
2.2 Information geometry & QFI/Bures
QFI is the Bures metric’s local form: it quantifies distinguishability and is monotone under CPTP maps (e.g., tracing out modes). We use QFI for (\theta=\ln m) in a free Gaussian field truncated at (|k|<\Lambda), giving an exact (I(\Lambda)) with built-in physical consistency.
2.3 Scale-invariant features (vision), network coarse-graining
SIFT/DoG construct multi-scale features; spectral coarse-graining preserves slow modes on reduced graphs. ARLIT targets invariance of information itself, verified OOS, not features engineered to be invariant.
2.4 Limits (undecidability in RG flows) and why window-local claims are right
There are rigorous constructions where RG-like flows are uncomputable. Global, once-for-all claims across all scales are often too strong. ARLIT therefore keeps claims window-local and empirical.
3. Framework: ARLIT (Window-Local Invariance Test)
3.1 Definitions: (I(\Lambda)), (Z(\Lambda)), (C(\Lambda)=Z,I)
Choose a contiguous window (\Lambda\in[\Lambda_{\min},\Lambda_{\max}]); sample (N) log-spaced points
(x_i=\log\Lambda_i,\; y_i=\log I(\Lambda_i)).
With (Z(\Lambda)=\Lambda^s), define
(\tilde y_i=\log C(\Lambda_i)=y_i+s\,x_i).
Hypothesis: (\tilde y) is flat on test scales.
3.2 ARLIT#: OOS flatness metric
For adjacent test indices (i):
[
\Delta_i=\frac{\tilde y_{i+1}-\tilde y_i}{x_{i+1}-x_i},\qquad
\text{ARLIT#}=\sqrt{\frac{1}{|\mathcal T'|}\sum_{i\in\mathcal T'}\Delta_i^2}.
]
Thresholds: Green < 0.2, Amber 0.2–0.5, Red ≥ 0.5.
This is scale-free and sensitive to curvature, not just bias.
3.3 Learning (Z=\Lambda^{s}) on train (OLS)
Fit (s) on train indices (\mathcal R) by OLS:
[
\hat s=-\,\frac{\sum_{i\in\mathcal R}(x_i-\bar x)(y_i-\bar y)}{\sum_{i\in\mathcal R}(x_i-\bar x)^2}.
]
Form (\tilde y_i=y_i+\hat s\,x_i) for all (i), but score ARLIT# only on test.
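The fit-and-score loop is small enough to state in full. The following is a minimal Python sketch of §§3.1–3.3; the function names and the toy grid are illustrative assumptions, not the shipped artifact’s API.

```python
# Minimal sketch of Secs. 3.1-3.3; names and the toy grid are illustrative.
import numpy as np

def fit_s_train(x, y, train):
    """OLS exponent on train indices: s_hat = -Cov(x, y)/Var(x)."""
    xt, yt = x[train], y[train]
    xc, yc = xt - xt.mean(), yt - yt.mean()
    return -float(np.dot(xc, yc) / np.dot(xc, xc))

def arlit_score(x, y_tilde, test):
    """RMS of discrete slopes of log C vs log Lambda over adjacent test points."""
    xs, ys = x[test], y_tilde[test]
    slopes = np.diff(ys) / np.diff(xs)
    return float(np.sqrt(np.mean(slopes**2)))

# Toy check: I(Lambda) ~ Lambda^3 (IR-like) => s_hat ~ -3 and ARLIT# ~ 0 (Green).
lam = np.geomspace(0.01, 0.05, 60)
x, y = np.log(lam), 3.0 * np.log(lam)        # stand-in for log I(Lambda)
train, test = np.arange(30), np.arange(30, 60)
s_hat = fit_s_train(x, y, train)
y_tilde = y + s_hat * x                      # log C = log I + s_hat * log Lambda
print(s_hat, arlit_score(x, y_tilde, test))  # -> -3.0, 0.0
```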
3.4 Train/test in (\Lambda); k-fold across (\Lambda)
Use contiguous train/test sub-intervals. k-fold splits the window into (k) contiguous folds; train on complement, test on fold; report mean±sd. Bootstrap the train/test grids (with replacement) for 68/95% CIs. Convergence: increase numerical resolution until ARLIT# stabilizes.
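A sketch of the contiguous k-fold, reusing fit_s_train and arlit_score from the §3.3 sketch; the fold boundaries and aggregation choices here are our assumptions, not the shipped protocol.

```python
# Contiguous k-fold in Lambda (Sec. 3.4), reusing the Sec. 3.3 sketch functions.
import numpy as np

def contiguous_kfold(x, y, k=5):
    idx, scores = np.arange(len(x)), []
    for fold in np.array_split(idx, k):      # k contiguous folds in Lambda
        train = np.setdiff1d(idx, fold)      # train on the complement
        s_hat = fit_s_train(x, y, train)
        if len(fold) >= 2:                   # need an adjacent pair for a slope
            scores.append(arlit_score(x, y + s_hat * x, fold))
    return float(np.mean(scores)), float(np.std(scores))  # report mean +/- sd
```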
3.5 Falsifiability: adversarial and stress tests
- Adversarial bump (log-Gaussian) inside test: (\log I \leftarrow \log I + A\exp\left(-(\log\Lambda-\mu)^2/(2\sigma_b^2)\right)) ⇒ ARLIT# must rise.
- Noise robustness: multiply (I) by (\exp(\epsilon)), (\epsilon\sim\mathcal N(0,\sigma^2)); plot ARLIT# vs (\sigma). Both stress tests are sketched after this list.
- (Z)-form ablations: const, (\Lambda^{s}), shallow poly in (\log\Lambda); real invariance prefers the simple power law OOS.
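A sketch of both stress tests; the amplitudes and widths default to the illustrative ranges of Appendix D.

```python
# Stress tests of Sec. 3.5 as a sketch; defaults mirror Appendix D ranges.
import numpy as np

def add_bump(log_I, log_lam, A=0.3, mu=None, sigma_b=0.2):
    """Log-Gaussian bump: log I <- log I + A*exp(-(log Lam - mu)^2/(2 sigma_b^2))."""
    mu = np.median(log_lam) if mu is None else mu
    return log_I + A * np.exp(-(log_lam - mu)**2 / (2.0 * sigma_b**2))

def add_noise(log_I, sigma=0.05, rng=None):
    """Multiplicative noise I <- I*exp(eps), eps ~ N(0, sigma^2), in log space."""
    rng = np.random.default_rng(0) if rng is None else rng
    return log_I + rng.normal(0.0, sigma, size=np.shape(log_I))
```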
4. Information Source (I(\Lambda))
4.1 QFI (free Gaussian, (\theta=\ln m)): exact, monotone, truncated by (|k|<\Lambda)
Zero-mean Gaussian field with modes (k) has covariance
(\Sigma(k)=1/(2\omega_k),\quad \omega_k=\sqrt{k^2+m^2}).
For (\theta=\ln m),
[
F_Q(\theta;\Lambda)=\tfrac12\,\mathrm{Tr}\left[(\Sigma^{-1}\partial_\theta\Sigma)^2\right]_{|k|<\Lambda}
=\tfrac12\,m^4\,\frac{S_d}{(2\pi)^d}\int_0^\Lambda \frac{k^{d-1}}{(k^2+m^2)^2}\,dk.
]
Monotonicity under partial trace ensures physical consistency when changing (\Lambda).
4.2 Closed forms in (d=2,3); IR growth vs UV saturation
Let (S_d=2\pi^{d/2}/\Gamma(d/2)). Below we quote (I(\Lambda)=\frac{S_d}{(2\pi)^d}\int_0^\Lambda \frac{k^{d-1}}{(k^2+m^2)^2}\,dk), i.e., (F_Q) with its (\Lambda)-independent prefactor (\tfrac12 m^4) dropped; constants shift (\log C) but not its slope, so ARLIT# is unaffected.
- (d=3):
[
I(\Lambda)=\frac{1}{2\pi^2}\cdot
\frac{-\Lambda m+(\Lambda^2+m^2)\arctan(\Lambda/m)}{2m(\Lambda^2+m^2)}.
]
IR: (\sim \tfrac{1}{6\pi^2}\,\Lambda^3/m^4). UV: (\to \tfrac{1}{8\pi m}) (saturation).
- (d=2):
[
I(\Lambda)=\frac{\Lambda^2}{4\pi m^2(\Lambda^2+m^2)}.
]
IR: (\sim \tfrac{1}{4\pi}\,\Lambda^2/m^4). UV: (\to \tfrac{1}{4\pi m^2}).
These shapes explain our results: power-law IR (one (s) cancels the slope) and a curved crossover to UV saturation (one (s) fails on wide bands).
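A sketch of the closed forms with a quadrature cross-check; scipy is used only for the check (the artifact itself is dependency-free), and the function names are ours.

```python
# Sec. 4.2 closed forms (constant prefactor 1/2 m^4 dropped, as in the text),
# cross-checked against direct quadrature of the Sec. 4.1 integral.
import numpy as np
from math import gamma, pi
from scipy.integrate import quad  # used only for the cross-check

def I_closed(lam, m, d):
    if d == 3:
        return (-lam * m + (lam**2 + m**2) * np.arctan(lam / m)) \
               / (2.0 * m * (lam**2 + m**2)) / (2.0 * np.pi**2)
    if d == 2:
        return lam**2 / (4.0 * np.pi * m**2 * (lam**2 + m**2))
    raise ValueError("closed forms implemented for d in {2, 3}")

def I_quad(lam, m, d):
    """(S_d/(2 pi)^d) * integral_0^Lam k^(d-1)/(k^2+m^2)^2 dk."""
    S_d = 2.0 * pi**(d / 2.0) / gamma(d / 2.0)
    val, _ = quad(lambda k: k**(d - 1) / (k**2 + m**2)**2, 0.0, lam)
    return S_d / (2.0 * pi)**d * val

for d in (2, 3):
    assert abs(I_closed(1.0, 0.5, d) - I_quad(1.0, 0.5, d)) < 1e-10
```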
4.3 Interacting (\phi^4) (2D) surrogates—what they’re good for (and not)
- Hartree effective mass: (m_\text{eff}^2=m^2+3\lambda\langle\phi^2\rangle); compute (I(\Lambda)) with (m_\text{eff}) (optionally add tiny log-oscillation).
- Tiny lattice (k)-space proxy: on an (N\times N) torus, sum (\sum_{|k|<\Lambda}(k^2+m^2+\alpha\lambda)^{-2}).
These induce curvature/crossovers to exercise validation mechanics in a single file; they are not substitutes for authentic interacting-QFI (future MC/tensor-network work).
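As one concrete illustration, a minimal sketch of the tiny-lattice proxy (the Hartree branch only swaps (m\to m_\text{eff}) inside the closed forms); the function name and defaults are ours, and the (\alpha\lambda) shift is the surrogate’s knob, not interacting-theory QFI.

```python
# Tiny-lattice k-space proxy of Sec. 4.3 on an N x N torus; a surrogate sketch.
import numpy as np

def lattice_proxy(lam_cut, m, coupling=0.5, alpha=0.05, N=48):
    """Sum (k^2 + m^2 + alpha*coupling)^(-2) over torus modes with |k| < lam_cut."""
    k = 2.0 * np.pi * np.fft.fftfreq(N)   # torus momenta in [-pi, pi)
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    mask = np.sqrt(k2) < lam_cut
    return float(np.sum(1.0 / (k2[mask] + m**2 + alpha * coupling)**2))
```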
5. Sufficient Conditions for (Z(\Lambda)=\Lambda^{s})
5.1 Local affine behavior; bounded curvature
Let (x=\log\Lambda), (f(x)=\log I(e^x)). Assume:
- (f\in C^2) with (\sup|f''|\le\kappa) on the window,
- quasi-uniform samples (maximum step (\le h) in (x)),
- additive noise in (\log I) with scale (\sigma).
If test log-width is (W), then
[
\mathrm{ARLIT\#}(\hat s;\text{test}) = O(\kappa\,W) + O(\sigma/h).
]
A finite-difference curvature proxy (\widehat\kappa) diagnoses whether to expect Green or to split the window.
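A sketch of that curvature proxy, assuming a near-uniform grid in (x=\log\Lambda); the rule of thumb in the final comment restates the bound above.

```python
# Finite-difference curvature proxy kappa_hat from Sec. 5.1 (a sketch).
import numpy as np

def curvature_proxy(x, y):
    """Max |f''| estimate from centered second differences of y = log I."""
    h = np.diff(x).mean()                     # near-uniform step in log Lambda
    f2 = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2
    return float(np.max(np.abs(f2)))

# Rule of thumb: on a test band of log-width W, expect ARLIT# of order
# kappa_hat * W plus the noise floor; large kappa_hat says "split the window".
```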
5.2 Interpretation: (s) as a local information dimension
On windows where (f(x)\approx a + p\,x), i.e., (I(\Lambda)\sim \Lambda^{p}), the OLS fit returns (\hat s\approx -p), so (Z=\Lambda^{\hat s}) cancels the slope; the magnitude (|\hat s|) plays the role of a local information dimension. In free QFI: IR (\hat s\approx -d) (since (I\sim\Lambda^{d})); UV (\hat s\approx 0); the crossover shows a drifting (\hat s).
6. Validation Protocol (Acceptance Bar Baked-In)
6.1 Train/test in (\Lambda); thresholds
Contiguous split; fit on train, score on held-out test. Acceptance:
ARLIT#(test) < 0.2, and CI checks: (68% upper < 0.2; 95% upper < 0.5).
6.2 Bootstrap CIs
Resample train/test grids with replacement (B) times; refit (\hat s); recompute ARLIT#; report 68/95% percentile CIs for both (\hat s) and ARLIT#(test).
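A bootstrap sketch reusing the §3.3 functions; we collapse duplicate resampled test indices so discrete slopes stay finite, a detail the prose leaves implicit.

```python
# Bootstrap CIs of Sec. 6.2 as a sketch; B, seed, and percentiles illustrative.
import numpy as np

def bootstrap_ci(x, y, train, test, B=100, seed=0):
    rng = np.random.default_rng(seed)
    s_list, a_list = [], []
    for _ in range(B):
        tr = rng.choice(train, size=len(train), replace=True)
        te = np.unique(rng.choice(test, size=len(test), replace=True))
        if len(te) < 2:
            continue                     # need at least one adjacent pair
        s_b = fit_s_train(x, y, tr)
        s_list.append(s_b)
        a_list.append(arlit_score(x, y + s_b * x, te))
    pct = lambda v: np.percentile(v, [16, 84, 2.5, 97.5])
    return pct(s_list), pct(a_list)      # 68% and 95% percentile bands
```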
6.3 k-fold in (\Lambda); convergence
k-fold with contiguous folds; report mean±sd. Increase numerical resolution until ARLIT# plateaus.
6.4 Window heatmaps; (Z)-form ablations
Sweep ((\Lambda_{\min},\Lambda_{\max})); color by ARLIT#(test). Compare const-(Z), (\Lambda^{s}), shallow poly in (\log\Lambda); real invariance prefers the simple power law OOS.
6.5 Noise robustness; adversarial bumps
Multiply (I) by (\exp(\epsilon)), (\epsilon\sim\mathcal N(0,\sigma^2)); plot ARLIT# vs (\sigma). Add log-Gaussian bumps inside test; ARLIT# must increase.
6.6 Mass generalization (m\to m')
Fit (\hat s) at (m_\text{fit}), evaluate at (m') using the same (\hat s); report ARLIT#(test;(m')) with sparklines and ((m,m')) heatmaps.
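A sketch of the transfer check, reusing I_closed from the §4.2 sketch and the §3.3 fit/score functions.

```python
# Mass-generalization check of Sec. 6.6: fit at m_fit, reuse s_hat at m_prime.
import numpy as np

def mass_transfer_score(lam, train, test, m_fit, m_prime, d=3):
    """Fit s_hat at m_fit, keep it fixed, and rescore OOS at m_prime."""
    x = np.log(lam)
    s_hat = fit_s_train(x, np.log(I_closed(lam, m_fit, d)), train)
    y_prime = np.log(I_closed(lam, m_prime, d))
    return arlit_score(x, y_prime + s_hat * x, test)
```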
7. Results (QFI-Backed)
7.1 IR windows: pass; (\hat s\approx -d)
In (d=3), (\theta=\ln m), (m=0.5):
IR bands ([0.01,0.05]), ([0.05,0.20]) are Green (ARLIT#(test) (\ll 0.2), tight CIs), with (\hat s\approx -3): the fitted (Z=\Lambda^{-3}) cancels the IR growth (I\propto\Lambda^{3}).
7.2 UV window: pass; (\hat s\approx 0)
UV band ([5,10]) is Green with (\hat s\approx 0) (saturation), very tight CIs.
7.3 Crossover: fail; curvature-driven rejection
Mid band ([0.20,1.00]) is Red (ARLIT#(test) (\gtrsim 1)); bending from IR power-law to UV saturation prevents flattening by a single (s).
7.4 Segmenting the crossover: piecewise passes and (s(\text{window})) drift
Split ([0.20,1.00]) into narrower sub-windows (e.g., ([0.20,0.40]), ([0.40,0.70]), ([0.70,1.00])). Each tends to pass with its own (\hat s), with (|\hat s|) drifting from (\approx d) toward (0).
7.5 Summary tables (what we report)
For each window: point & 68/95% CIs for (\hat s) and ARLIT#(test), PASS/FAIL, k-fold mean±sd, convergence status, and (if used) mass-generalization stability.
8. Applications
8.1 Physics: screening fixed-point windows; EFT checks
Use ARLIT to flag Green bands with stable (\hat s) across neighbors (candidate fixed-point regime) and to verify EFT validity (expected (\hat s) in its range). Amber/Red shows “you left the EFT.”
8.2 ML: ARLITNorm (using learned (Z)) to stabilize multi-resolution features
Estimate (\Lambda) (frequency bin, receptive field, wavelet level). Compute batch (I(\Lambda)) (variance/Fisher-like/QFI if Gaussian), fit (Z(\Lambda)=\Lambda^{\hat s}) on train scales, apply (C=Z\,I) (or scale activations by (Z^{1/2})). Target: reduce OOS ARLIT# on held-out scale bands.
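A minimal ARLITNorm sketch, assuming per-scale feature arrays and batch variance as the (I(\Lambda)) stand-in; none of this is a shipped layer, and the scale labels are hypothetical.

```python
# ARLITNorm sketch (Sec. 8.2): learn a power-law Z on train scales, then
# rescale activations by Z^(1/2) so the per-scale variance C = Z * I is flat.
import numpy as np

def arlitnorm_fit(features_by_scale, scales, train_idx):
    """Fit s_hat from per-scale variances; reuses fit_s_train from Sec. 3.3."""
    x = np.log(np.asarray(scales, dtype=float))
    y = np.log(np.array([f.var() for f in features_by_scale]))
    return fit_s_train(x, y, np.asarray(train_idx))

def arlitnorm_apply(features_by_scale, scales, s_hat):
    # Z^(1/2) = Lambda^(s_hat/2) scales the variance by Z = Lambda^(s_hat).
    return [f * s**(s_hat / 2.0) for f, s in zip(features_by_scale, scales)]
```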
8.3 Signals/vision/audio: ARLIT# monitoring
Instrument pipelines with ARLIT monitors: calibrate (Z) on known windows; alert when OOS ARLIT# rises (scanner drift, filter mis-tuning, domain shift).
8.4 Networks: coarse-graining fidelity on target scales
Define (I(\Lambda)) from graph response at diffusion time/scale (\Lambda) (eigenmode energy, susceptibility, effective conductance). Learn (Z) on the original graph; test on the coarse graph OOS. Low ARLIT# ⇒ coarse model preserves flow information on those scales.
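One hypothetical instantiation (our assumption, not a prescription from the text): heat-kernel eigenmode energy at diffusion time (t) as the graph (I).

```python
# Hypothetical graph I(Lambda) for Sec. 8.4: eigenmode energy of the heat
# kernel at diffusion time t, computed from Laplacian eigenvalues.
import numpy as np

def graph_information(laplacian_eigvals, t):
    """Sum of mode energies exp(-2*lambda_i*t) at diffusion scale t."""
    return float(np.sum(np.exp(-2.0 * np.asarray(laplacian_eigvals) * t)))

# Learn Z on the original graph's spectrum over a window of t, then rescore the
# same window on the coarse graph's spectrum; low OOS ARLIT# indicates the
# coarse model preserves flow information on those scales.
```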
9. Ablations & Sensitivity
9.1 Window choices, train fraction, sample density (N)
Narrow windows reduce curvature error but widen CIs; wide windows add statistical power but expose curvature. Train fractions of 0.3–0.7 are sensible. Increase (N) until ARLIT# converges.
9.2 Estimator choices and noise models
Prefer QFI; label surrogates as surrogates. Stress-test with log-Gaussian noise; publish ARLIT# vs (\sigma).
9.3 Runtime vs accuracy (browser single-file vs notebook backends)
Browser single-file suffices for free QFI and surrogates; interacting QFI belongs in a notebook/cluster backend emitting CSV/JSON into the same validation UI.
10. Limitations & Scope
- Claims are window-local, not “all scales.”
- Surrogates exercise the mechanics; authentic interacting QFI is future work.
- Formal undecidability motivates local, empirical, falsifiable statements.
11. Reproducibility & Artifacts
11.1 Single-file app and Paper Mode
Offline, no dependencies; computes free QFI, runs validations, exports artifacts. Paper Mode runs a standardized IR/UV suite with fixed seeds and thresholds.
11.2 Claim Pack schema
JSON meta (mode, (d), (m), thresholds, seeds), per-window CSVs (Lambda, (I), (Z), (C), masks), base64 PNGs (plots), unit-test transcript. Numbers recompute from CSVs with recorded seeds.
11.3 Determinism & regeneration
Recompute (\hat s) and ARLIT#(test) per window from CSVs; re-run bootstrap/k-fold with seeds to match CIs and figures exactly.
12. Discussion
ARLIT complements RG: test-first, local, falsifiable, delivering OOS + CI-backed verdicts on specific windows. OOS in (\Lambda) with CIs is the right falsification culture for multiscale claims. Next: authentic interacting QFI (lattice/MC) and a dilation-QFI variant for the free Gaussian.
13. Conclusion
We demonstrated window-local scale-invariant effective information: learn (Z(\Lambda)=\Lambda^{s}) on one band and verify OOS that (C(\Lambda)=Z(\Lambda)\,I(\Lambda)) is flat with tight CIs, using exact free-theory QFI. IR/UV windows pass; crossovers fail as expected. ARLIT identifies invariant bands, quantifies the local information dimension (s), rejects crossovers, and ships reproducible artifacts—useful across physics, ML, networks, and signal pipelines.
Acknowledgments
Thanks to early readers and colleagues who stress-tested the single-file artifact across platforms. Any remaining bugs are ours.
References
[1] K. G. Wilson, “Renormalization Group and Critical Phenomena. I. Renormalization Group and the Kadanoff Scaling Picture,” Phys. Rev. B 4, 3174 (1971).
[2] L. P. Kadanoff, “Scaling laws for Ising models near (T_c),” Physics 2, 263 (1966).
[3] K. G. Wilson, “The renormalization group and critical phenomena,” Rev. Mod. Phys. 55, 583 (1983).
[4] S. L. Braunstein, C. M. Caves, “Statistical distance and the geometry of quantum states,” Phys. Rev. Lett. 72, 3439 (1994).
[5] D. Petz, “Monotone metrics on matrix spaces,” Linear Algebra Appl. 244, 81–96 (1996).
[6] A. Monras, “Phase space formalism for quantum estimation of Gaussian states,” arXiv:1303.3682 (2013).
[7] L. Banchi, S. L. Braunstein, S. Pirandola, “Quantum Fidelity for Arbitrary Gaussian States,” Phys. Rev. Lett. 115, 260501 (2015).
[8] O. Pinel, P. Jian, N. Treps, C. Fabre, D. Braun, “Quantum parameter estimation using general single-mode Gaussian states,” Phys. Rev. A 88, 040102(R) (2013).
[9] T. S. Cubitt, D. Pérez-García, M. M. Wolf, “Undecidability of the spectral gap,” Nature 528, 207–211 (2015).
[10] J. Bausch, T. S. Cubitt, J. D. Watson, “Uncomputability of phase diagrams,” Nat. Commun. 12, 4528 (2021).
[11] J. D. Watson, T. S. Cubitt, J. Bausch, “Uncomputably complex renormalisation group flows,” Nat. Commun. 13, 7229 (2022).
[12] D. Gfeller, P. De Los Rios, “Spectral Coarse Graining of Complex Networks,” Phys. Rev. Lett. 99, 038701 (2007).
[13] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” IJCV 60, 91–110 (2004).
Appendix A — Derivations for free-theory QFI (d = 2, 3)
Gaussian-state QFI: (F_Q(\theta)=\tfrac12\mathrm{Tr}[(\Sigma^{-1}\partial_\theta\Sigma)^2]). With (\Sigma(k)=1/(2\sqrt{k^2+m^2})) and (\theta=\ln m),
(\partial_\theta m=m), so (\partial_\theta\Sigma=-m^2/[2(k^2+m^2)^{3/2}]) and (\Sigma^{-1}\partial_\theta\Sigma = -m^2/(k^2+m^2)). Squaring and integrating modes up to (|k|<\Lambda) yields
[
F_Q(\theta;\Lambda)=\tfrac12\,m^4\,\frac{S_d}{(2\pi)^d}\int_0^\Lambda \frac{k^{d-1}}{(k^2+m^2)^2}\,dk,
]
with closed forms in §4.2 (quoted there without the constant prefactor (\tfrac12 m^4)). IR/UV asymptotics give (I\sim\Lambda^{d}) (IR) and saturation (UV), hence (\hat s\approx -d) (IR) and (\hat s\approx 0) (UV).
Appendix B — Algorithms & pseudocode (fit (Z), ARLIT#, bootstrap, k-fold)
Fit & score
- (x=\log\Lambda,\; y=\log I).
- (\hat s = -\,\mathrm{Cov}_{\text{train}}(x,y)/\mathrm{Var}_{\text{train}}(x)).
- (\tilde y=y+\hat s\,x).
- On test neighbors, (\Delta_i=(\tilde y_{i+1}-\tilde y_i)/(x_{i+1}-x_i)).
- ARLIT# = (\sqrt{\mathrm{mean}(\Delta_i^2)}).
Bootstrap: resample train/test grids (with replacement) (B) times; refit (\hat s^{(b)}); recompute (\mathrm{ARLIT\#}^{(b)}); percentile CIs (68/95%).
k-fold: contiguous folds; train on complement; test on fold; aggregate mean±sd.
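Putting the pieces together, a driver that reproduces the §7.1 pattern from the earlier sketches (fit_s_train, arlit_score, I_closed, bootstrap_ci); the window and parameters mirror §7.1, everything else is illustrative.

```python
# End-to-end sketch of Appendix B on a free-QFI IR window (d = 3, m = 0.5).
import numpy as np

lam = np.geomspace(0.01, 0.05, 120)             # IR window of Sec. 7.1
x, y = np.log(lam), np.log(I_closed(lam, m=0.5, d=3))
train, test = np.arange(60), np.arange(60, 120)

s_hat = fit_s_train(x, y, train)                # expect ~ -3 since I ~ Lambda^3
score = arlit_score(x, y + s_hat * x, test)     # expect << 0.2 => Green
ci_s, ci_a = bootstrap_ci(x, y, train, test, B=100, seed=42)
print(f"s_hat={s_hat:.3f}  ARLIT#(test)={score:.4f}")
print("s_hat 68% CI:", ci_s[:2], " ARLIT# 95% CI:", ci_a[2:])
```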
Appendix C — Surrogates for (\phi^4) 2D; parameter ranges
Hartree: (m_\text{eff}^2=m^2+3\lambda\langle\phi^2\rangle); compute (I(\Lambda)) with (m_\text{eff}); allow tiny log-oscillation.
Tiny lattice proxy: (\sum_{|k|<\Lambda}(k^2+m^2+\alpha\lambda)^{-2}) on an (N\times N) torus.
Ranges: (\lambda\in[0,1]), (\alpha\in[0,0.1]), (N\in[32,64]).
Appendix D — Hyperparameters & seeds; unit tests
Typical: (N=120) log-spaced (\Lambda); train frac = 0.5; (B=100); (k=5); adversarial (A\in[0.1,0.5],\sigma_b\in[0.1,0.3]); noise (\sigma\in[0,0.2]).
Seeds recorded per figure. Unit tests: (i) monotonicity of free (I(\Lambda)) in (\Lambda); (ii) ARLIT#(train) drops post-fit; (iii) adversarial bump raises ARLIT#(test); (iv) determinism.
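The four unit tests, written as assertions over the earlier sketches (fit_s_train, arlit_score, I_closed, add_bump); a sketch, not the shipped test transcript.

```python
# Appendix D unit tests (i)-(iv) as a sketch on a free-QFI IR window.
import numpy as np

lam = np.geomspace(0.01, 0.05, 60)
x, y = np.log(lam), np.log(I_closed(lam, 0.5, 3))
train, test = np.arange(30), np.arange(30, 60)
s_hat = fit_s_train(x, y, train)

assert np.all(np.diff(I_closed(lam, 0.5, 3)) > 0)           # (i) monotone I
assert arlit_score(x, y + s_hat * x, train) < \
       arlit_score(x, y, train)                              # (ii) fit flattens train
bumped = add_bump(y, x, A=0.3)                               # (iii) bump raises score
assert arlit_score(x, bumped + s_hat * x, test) > \
       arlit_score(x, y + s_hat * x, test)
assert fit_s_train(x, y, train) == fit_s_train(x, y, train)  # (iv) determinism
```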
Appendix E — Extra figures
Full window heatmaps; ARLIT# vs (\sigma); adversarial before/after; k-fold summary; mass generalization heatmaps and per-mass sparklines.
Appendix F — Artifact How-To (reproducing from arlit_paper_pack.json)
For each window: parse CSV → recompute (\hat s) from train; recompute ARLIT#(test) from discrete slopes; regenerate plots; re-run bootstrap and k-fold with saved seeds to match CIs/PNGs bit-for-bit.
The artifact ships interactive, exportable datasets; no figures are promised that are not included in the pack.
[Disclaimer: This was written with AI by Jordon Morgan-Griffiths | Dakari Morgan-Griffiths]
This paper was written with AI assistance, from notes and work by Jordon Morgan-Griffiths. If anything comes across wrong, I ask that you blame OpenAI rather than me; I am not a PhD scientist. You can ask me directly for more, and take the formulae and simulation further.
I hope to make more positive contributions ahead whether right or wrong.
© 2025 Jordon Morgan-Griffiths UISH. All rights reserved. First published 20/10/2025.