Per Ardua

Research Program

Structural Compression Theory — 24 papers across neural network geometry, multi-agent coordination, organizational physics, market mechanisms, and applied analysis

Activation Geometry

Neural network training dynamics and model composition through activation-space analysis

AI-1/2

Training Trajectory Structure: Regime Detection, Cross-Seed Convergence, and Speculative Weight Prediction

Consolidated paper. Three-regime taxonomy via activation fingerprints generalizes across 124M-7B. Cross-seed convergence: CV 0.41-2.43%, phase boundaries synchronized to +/-50 steps. Linear prediction achieves 60-90% strict acceptance at 7B in stable regimes. Universal momentum catastrophe: 100-10,000x loss inflation at all scales. Supersedes Leap+Verify and Ensemble Collapse.

DOI
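
A minimal sketch of the leap step, assuming checkpoints arrive as flat parameter vectors; `predict_next` and `strict_accept` are illustrative names, not the paper's API. Consistent with the momentum-catastrophe result, only weights are extrapolated, never optimizer state, and the leap would only be attempted inside stable regimes, where strict acceptance reaches 60-90%.

```python
import numpy as np

def predict_next(checkpoints: list[np.ndarray]) -> np.ndarray:
    """Linearly extrapolate the next weight vector from a trailing window
    of checkpoints: fit theta_i(t) = a_i*t + b_i per parameter, then
    evaluate one step ahead."""
    window = np.stack(checkpoints)                    # (T, n_params)
    t = np.arange(len(checkpoints), dtype=np.float64)
    t_c = t - t.mean()
    w_mean = window.mean(axis=0)
    slope = (t_c @ (window - w_mean)) / (t_c @ t_c)   # per-parameter slope
    return w_mean + slope * (len(checkpoints) - t.mean())

def strict_accept(loss_predicted: float, loss_trained: float,
                  tol: float = 1e-3) -> bool:
    """Strict acceptance: keep the predicted weights only if they are no
    worse than the real optimizer step by more than tol."""
    return loss_predicted <= loss_trained + tol
```
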
AI-3

Constellation-Indexed Model Composition

Dynamic specialist composition via activation fingerprints. Generalist-space indexing lifts win rate from 20.6% to 98.1%. Stochastic resonance at sigma=0.020.

DOI
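
A sketch of the routing step, assuming a fingerprint is a unit-normalized, mean-pooled hidden state and that specialists are indexed by their fingerprints in the generalist's activation space (function names are illustrative):

```python
import numpy as np

def fingerprint(hidden_states: np.ndarray) -> np.ndarray:
    """Unit-normalized mean-pooled hidden state (a stand-in for the
    paper's activation fingerprint)."""
    v = hidden_states.mean(axis=0)
    return v / np.linalg.norm(v)

def route(query_states: np.ndarray,
          specialist_index: dict[str, np.ndarray],
          sigma: float = 0.020,
          seed: int | None = None) -> str:
    """Return the specialist whose generalist-space fingerprint has the
    highest cosine similarity to the noise-perturbed query fingerprint.
    The sigma=0.020 perturbation plays the stochastic-resonance role."""
    rng = np.random.default_rng(seed)
    q = fingerprint(query_states)
    q = q + rng.normal(0.0, sigma, size=q.shape)
    q /= np.linalg.norm(q)
    return max(specialist_index, key=lambda name: float(q @ specialist_index[name]))
```
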
AI-4

The Shape of the Problem: Domain-Invariant Structural Signatures

INLP (iterative nullspace projection) erases domain signal to near-chance while shape classification holds at 95.6%+. Clean double dissociation of domain and structure.

DOI
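
The erasure half of the dissociation can be sketched with standard INLP for a binary domain label (the paper's setup may differ): fit a linear domain probe, project out its direction, repeat. The shape classifier is then trained on the projected activations and should keep its accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp_projection(X: np.ndarray, y_domain: np.ndarray,
                    n_iters: int = 20) -> np.ndarray:
    """Iterative nullspace projection for a binary domain label:
    repeatedly fit a linear domain probe and remove its direction until
    domain accuracy falls to chance. Returns P; apply as X @ P.T."""
    d = X.shape[1]
    P = np.eye(d)
    Xp = X.copy()
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(Xp, y_domain)
        w = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
        P = (np.eye(d) - np.outer(w, w)) @ P   # project out probe direction
        Xp = X @ P.T
    return P
```
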
AI-5

Capability Manifold Surveillance: Topological Detection of Model Distillation

Activation fingerprint geometry as security primitive. AUC 1.000 on systematic attacks at 1.5B+. Economic deterrence via temporal accumulation and coverage-clustering tradeoff.

DOI
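
One way to operationalize the detector, assuming fingerprints are collected over a fixed probe set; the scoring function here is a stand-in, not the paper's:

```python
import numpy as np

def manifold_similarity(suspect_fp: np.ndarray, reference_fp: np.ndarray) -> float:
    """Mean per-probe cosine similarity between two fingerprint matrices
    (rows = probe prompts). Higher = suspect geometry closer to the
    reference model, i.e. more distillation-like."""
    s = suspect_fp / np.linalg.norm(suspect_fp, axis=1, keepdims=True)
    r = reference_fp / np.linalg.norm(reference_fp, axis=1, keepdims=True)
    return float((s * r).sum(axis=1).mean())

# from sklearn.metrics import roc_auc_score
# scores = [manifold_similarity(fp, reference_fp) for fp in candidate_fps]
# labels = [...]                        # 1 = known distillate, 0 = independent
# roc_auc_score(labels, scores)         # the paper reports AUC 1.000 at 1.5B+
```
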
AI-7

GenAI Is Socially Awkward: RLHF Instruction Tuning Damages Social Cognition at Small Scale by Suppressing Pragmatic Inference

RLHF hurts social cognition at 7B (-18 points) but helps at 72B (+6 points). The deficit is one of processing mode: compliance training suppresses the fuzzy pattern matching that social reasoning requires.

DOI
AI-8/9/10/11/12/13

The Geometry of Failure: Terminal Measurement, Concentration Barriers, and the Limits of Linear Intervention in Neural Networks

Consolidated paper. Shaped noise breaks 100% of repetition loops, but cross-domain selectivity fails. Layer-resolved mapping, spectral analysis, the concentration barrier theorem (selectivity bounded by k/d_eff), and stochastic-resonance channel capacity (~2 bits) converge on one result: classification and intervention operate under different constraints. The forward pass is an isotropic amplifier; no fixed subspace can isolate domain-specific computation.

DOI
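
The concentration barrier admits a short demonstration under the isotropy assumption the paper argues for: a fixed k-dimensional subspace of a d-dimensional isotropic signal captures an energy fraction concentrating at k/d (the paper's d_eff plays the role of d here).

```python
import numpy as np

def subspace_energy_fraction(d: int = 4096, k: int = 64,
                             n_samples: int = 2000, seed: int = 0) -> float:
    """Fraction of isotropic activation energy landing in a fixed random
    k-dimensional subspace; concentrates tightly around k/d."""
    rng = np.random.default_rng(seed)
    basis, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal k-frame
    x = rng.normal(size=(n_samples, d))                # isotropic activations
    frac = ((x @ basis) ** 2).sum(axis=1) / (x ** 2).sum(axis=1)
    return float(frac.mean())

print(subspace_energy_fraction())   # ~0.0156 == 64/4096
```

No fixed subspace beats this baseline on an isotropic carrier, which is the sense in which intervention selectivity is bounded by k/d_eff.
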
AI-14

Causal Basis Discovery for Domain-Selective Noise Injection

INLP is the only basis with positive selectivity (+0.618). All causal bases (patching, contrastive, gradient) are anti-selective. Classification-intervention dissociation established.

DOI
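
A sketch of basis-confined noise injection, with an assumed sign convention for selectivity (positive = the target domain degrades more than off-target domains; the paper's exact metric may differ):

```python
import numpy as np

def inject_noise(h: np.ndarray, basis: np.ndarray, scale: float,
                 seed: int | None = None) -> np.ndarray:
    """Add Gaussian noise confined to span(basis) to activations h.
    basis: (d, k) with orthonormal columns, e.g. derived from INLP,
    patching, contrastive pairs, or gradient attribution."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(h.shape[0], basis.shape[1]))
    return h + scale * (z @ basis.T)

def selectivity(target_degradation: float, offtarget_degradation: float) -> float:
    """Positive when the basis hurts the target domain more than others.
    The paper reports +0.618 for the INLP basis and negative values for
    every causally derived basis."""
    return target_degradation - offtarget_degradation
```
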
AI-15

The Inter-Instance Compression Barrier: Domain-Specific Information Loss at the Natural Language Interface

The natural-language interface is a uniform lossy channel, dropping ~80% of information proportionally across domain-specific and domain-agnostic subspaces. The concentration barrier is the sole constraint.

DOI
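
The proportional-loss claim suggests a direct measurement, sketched here; the subspace construction and the state-to-text-to-state round trip are assumed, not shown:

```python
import numpy as np

def retained_fraction(h_before: np.ndarray, h_after: np.ndarray,
                      subspace: np.ndarray) -> float:
    """Energy in span(subspace) surviving the natural-language round trip.
    subspace: (d, k) with orthonormal columns."""
    e_before = float(((h_before @ subspace) ** 2).sum())
    e_after = float(((h_after @ subspace) ** 2).sum())
    return e_after / e_before

# Uniform-channel finding: this lands near the same ~0.2 whether `subspace`
# is the domain-specific or the domain-agnostic one.
```
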
AI-16/17/18/19

The Dissociation of Geometry and Function: Why Activation-Level State Transfer Cannot Compete with Text

Consolidated paper. Four experiments on activation-level state transfer converge: geometric similarity does not imply functional equivalence. Text preserves 63.7% of reasoning trajectory; activation injection adds 1.2%. Priming selection (activation-guided input) closes 48.9% of the expert gap, outperforming injection by 5.4x. Domain reversal: text preserves legal best, activations preserve science best. Supersedes Projection Transmission, Bandwidth-Fidelity, Continuation Perplexity, and Ensemble Gravity.

DOI
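
A sketch of priming selection, the one activation-guided mechanism that helped: score candidate primer texts by fingerprint similarity to the expert and prepend the best, instead of injecting the expert's activations. The `embed` function (text to fingerprint) is assumed.

```python
import numpy as np
from typing import Callable

def pick_primer(candidates: list[str],
                embed: Callable[[str], np.ndarray],
                expert_fp: np.ndarray) -> str:
    """Activation-guided input selection: choose the primer whose own
    activation fingerprint lies closest (cosine) to the expert's."""
    e = expert_fp / np.linalg.norm(expert_fp)
    def score(text: str) -> float:
        v = embed(text)
        return float(v @ e / np.linalg.norm(v))
    return max(candidates, key=score)
```
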
AI-20/21/23/24

Context Management in Autoregressive Language Models

Consolidated paper. Ritual shape, shepherd agents, context fences, and hop scaling unified. 15 tokens capture 98.8% of the benefit. Adaptive priming is harmful. Mode switching, not garbage collection. Chain degradation saturates logarithmically by hop 5. Cross-domain handoffs outperform within-domain handoffs.

DOI
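
The hop-scaling claim is a curve shape. A minimal fit against illustrative numbers (not the papers' measurements) shows the form:

```python
import numpy as np

hops = np.arange(1, 9)
# Illustrative per-hop quality scores, NOT the papers' data.
quality = np.array([0.92, 0.85, 0.81, 0.79, 0.78, 0.78, 0.77, 0.77])

# Logarithmic saturation: quality(h) ~ a + b*log(h), flat beyond ~hop 5.
A = np.column_stack([np.ones(len(hops)), np.log(hops)])
(a, b), *_ = np.linalg.lstsq(A, quality, rcond=None)
print(f"quality(h) ~ {a:.3f} {b:+.3f}*log(h)")
```
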
AI-25/26

Universal Entanglement in Transformer Activation Space

Consolidated paper. SVD directions can be concept-pure for classification yet carry all concepts in activations. Random Gaussian projections reproduce learned entanglement intensity (EI=1.50). Superlinear amplification confirmed. Entanglement is geometric, not learned.

DOI
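
The random-projection control is easy to state in code; the EI definition below is one plausible reading (off-concept over on-concept mean absolute projection), not necessarily the papers' formula:

```python
import numpy as np

def entanglement_intensity(direction: np.ndarray, X: np.ndarray,
                           labels: np.ndarray, concept: int) -> float:
    """Assumed EI: mean |projection| of off-concept activations onto a
    direction, relative to on-concept activations. EI >= 1 means the
    direction carries other concepts at least as strongly as its own."""
    u = direction / np.linalg.norm(direction)
    proj = np.abs(X @ u)
    return float(proj[labels != concept].mean() / proj[labels == concept].mean())

# The control: compare EI for the top SVD directions of concept-labelled
# activations against EI for plain rng.normal(size=d) directions. The
# papers find both land at the same intensity (~1.5), i.e. entanglement
# is geometric, not learned.
```
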
AI-27

The Entanglement Theorem: Structural Concept Coupling as a Geometric Consequence of High-Dimensional Encoding

Establishes structural entanglement as a mathematical theorem via concentration of measure. Specialist bound: compositional architecture is a geometric necessity. Alignment implication: surgical concept editing is geometrically limited.

DOI
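
The flavor of the argument can be shown with a standard concentration-of-measure bound (illustrative, not the theorem's exact statement):

```latex
% For u uniform on the unit sphere S^{d-1} and any fixed unit vector v:
\[
  \Pr\bigl[\,\lvert\langle u, v\rangle\rvert \ge t\,\bigr]
  \;\le\; 2\, e^{-d t^{2}/2}.
\]
% Random concept directions are nearly, but never exactly, orthogonal.
% An edit of size $\delta$ along concept direction $c$ shifts any other
% readout $w$ by $\langle w, c\rangle\,\delta$: typically $O(\delta/\sqrt{d})$,
% and with $m > d$ concepts packed into $d$ dimensions the overlaps cannot
% all be zero, which is the geometric limit on surgical editing.
```
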
AI-28/29

Entanglement Under Fine-Tuning: Architecture-Dependent Collapse, Scale Thresholds, and Entanglement-Optimal Adaptation

Consolidated paper. B3 drives EI to zero in Qwen-32B (8 seeds, phase transition by step 3500) but not in four other architectures. Scale ladder: -32% at 7B, -93% at 14B, -100% at 32B. Block-diagonal LoRA adapters that respect entanglement geometry outperform concept-isolating adapters (48.2% vs 36.6% HumanEval+). CSR preserves structure.

DOI
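
A sketch of the entanglement-respecting adapter idea in PyTorch: the low-rank update is constrained block-diagonal over feature blocks, so adaptation stays within blocks rather than carving concept-isolating cross-block directions. The class name and blocking scheme are illustrative, not the papers' parameterization.

```python
import torch
import torch.nn as nn

class BlockDiagonalLoRA(nn.Module):
    """LoRA whose update is block-diagonal over feature blocks: each block
    gets its own low-rank delta, and cross-block coupling is exactly zero."""
    def __init__(self, base: nn.Linear, n_blocks: int = 4, rank: int = 4):
        super().__init__()
        assert base.in_features % n_blocks == 0
        assert base.out_features % n_blocks == 0
        self.base, self.n_blocks = base, n_blocks
        d_in = base.in_features // n_blocks
        d_out = base.out_features // n_blocks
        self.A = nn.Parameter(torch.randn(n_blocks, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_blocks, d_out, rank))  # zero init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xb = x.reshape(*x.shape[:-1], self.n_blocks, -1)       # (..., nb, d_in)
        h = torch.einsum('...bi,bri->...br', xb, self.A)       # per-block down
        delta = torch.einsum('...br,bor->...bo', h, self.B)    # per-block up
        return self.base(x) + delta.reshape(*x.shape[:-1], -1)

# layer = BlockDiagonalLoRA(nn.Linear(4096, 4096), n_blocks=8, rank=4)
```
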

Academic Monograph

The unified theoretical framework

Monograph Published

Structural Compression Theory: A Unified Information-Theoretic Account of Organizational Dysfunction, Creativity, and Substrate-Independent Selection Dynamics

Academic monograph unifying 24 papers into a single formal framework. Proves that compression under selection produces systematic drift from reality toward internal fit — and that the mechanism is substrate-independent across cognition, organizations, and AI. Three theorems, one lemma, five sufficient conditions, and the inseparability corollary: dysfunction and creativity share a single channel, distinguished only by selection regime. 17 chapters across five parts, 17 appendices (including the Entanglement Theorem), validated across 500 computational configurations with zero counterexamples. Derives the Hallucination Corollary: you cannot eliminate hallucination without eliminating generalization.

DOI SSRN

In Progress

Active research awaiting formal publication