
Research: Freivalds verification for llama.cpp backends #171

Draft

michaelneale wants to merge 1 commit into main from freivalds-experiment

Conversation

@michaelneale
Collaborator

What

Experiment measuring whether Freivalds' algebraic check can verify that remote mesh peers are running the correct model weights. This is a backend-agnostic alternative to CommitLLM's vLLM-specific kernel instrumentation.

How it works

  1. Run a forward pass through a GGUF model
  2. Capture activation tensors at matmul boundaries via llama.cpp's cb_eval callback
  3. Read the weight tensors from the model
  4. Perform the Freivalds check on CPU: verify v·y ≈ (v·W)·x where v is a secret random vector
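The per-matrix check in step 4 can be sketched as a standalone routine. This is a hypothetical simplification for illustration — the actual measurement tool (freivalds-check.cpp) operates on quantized tensors captured from llama.cpp, and the function name and relative-residual formula here are assumptions of this sketch:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Relative residual |v·y - (v·W)·x| / |v·y| for one claimed matmul y = W·x.
// W is row-major n x m, x has m entries, y has n entries.
// v is a secret random vector drawn by the verifier and never shared.
double freivalds_residual(const std::vector<double>& W,
                          const std::vector<double>& x,
                          const std::vector<double>& y,
                          size_t n, size_t m, std::mt19937& rng) {
    std::normal_distribution<double> dist(0.0, 1.0);
    std::vector<double> v(n);
    for (auto& vi : v) vi = dist(rng);          // secret random vector

    // lhs = v·y  (uses only the peer-reported activations)
    double lhs = 0.0;
    for (size_t i = 0; i < n; ++i) lhs += v[i] * y[i];

    // vW = v·W in one pass over the weights, then rhs = (v·W)·x.
    // This is O(nm) once, instead of re-running the full matmul per query.
    std::vector<double> vW(m, 0.0);
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < m; ++j)
            vW[j] += v[i] * W[i * m + j];
    double rhs = 0.0;
    for (size_t j = 0; j < m; ++j) rhs += vW[j] * x[j];

    return std::fabs(lhs - rhs) / (std::fabs(lhs) + 1e-12);
}
```

A residual near zero means y is consistent with W·x; substituting different weights makes lhs and rhs disagree with high probability over the random choice of v.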

If the peer used different weights, the check fails with high probability. The secret vector v is never shared, so the peer can't pre-compute a fake that passes.

Results (CPU inference, Apple M4 Max)

| Model        | Quant  | Max Residual | Avg Residual |
|--------------|--------|--------------|--------------|
| Qwen2.5-0.5B | Q4_K_M | 6.4%         | 1.6%         |
| SmolLM2-135M | Q8_0   | 2.9%         | 1.2%         |
| Qwen3-8B     | Q4_K_M | 23.5%*       | 4.6%         |

*Last-layer outlier — other layers are <7%

Model substitution (running the wrong model entirely) produces ~100% residual → clear separation for detecting basic cheating.

Key findings

  • ✅ Quantization format barely matters (Q4_K_M ≈ Q8_0 residuals)
  • ✅ Check is fast (~2ms for v·W pre-computation per matrix)
  • ✅ Uses existing llama.cpp eval callback — no kernel patches needed
  • ⚠️ Last layer of larger models shows outlier residuals — needs investigation
  • ❓ GPU residuals (Metal/CUDA) not yet measured — key open question

What's next

  1. Measure GPU residuals (Metal offload, CUDA)
  2. Investigate last-layer outlier on larger models
  3. Test with multiple random vectors for tighter confidence
  4. If viable: design commit-then-reveal protocol for mesh-llm
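Step 3 above (multiple random vectors) could look like the sketch below. Over exact arithmetic with ±1 vectors, a wrong product survives one check with probability at most 1/2, so k independent checks drive false accepts toward 2⁻ᵏ; in floating point the analogue is requiring every trial's residual to stay under a threshold. The function name, the ±1 vectors, and the threshold `tau` are illustrative assumptions, not the tool's actual parameters:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Accept y as W·x only if all k independent random-vector checks pass.
// W is row-major n x m; tau is the per-check relative-residual threshold.
bool freivalds_verify(const std::vector<double>& W,
                      const std::vector<double>& x,
                      const std::vector<double>& y,
                      size_t n, size_t m, int k, double tau,
                      std::mt19937& rng) {
    std::bernoulli_distribution coin(0.5);
    for (int t = 0; t < k; ++t) {
        // Fresh secret +/-1 vector per trial.
        std::vector<double> v(n);
        for (auto& vi : v) vi = coin(rng) ? 1.0 : -1.0;

        double lhs = 0.0;                         // v·y
        for (size_t i = 0; i < n; ++i) lhs += v[i] * y[i];

        std::vector<double> vW(m, 0.0);           // v·W
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < m; ++j)
                vW[j] += v[i] * W[i * m + j];
        double rhs = 0.0;                         // (v·W)·x
        for (size_t j = 0; j < m; ++j) rhs += vW[j] * x[j];

        if (std::fabs(lhs - rhs) > tau * (std::fabs(lhs) + 1e-12))
            return false;                         // one failed trial rejects
    }
    return true;
}
```

For real quantized inference, `tau` would have to sit above the observed honest residuals (the per-model numbers in the table) while staying far below the ~100% residual of a substituted model.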

See research/freivalds/README.md for full writeup and protocol design.

Measures whether Freivalds' algebraic check can verify that remote peers
run the correct model weights, using llama.cpp's eval callback to capture
intermediate activations during inference.

Results on CPU (Apple M4 Max):
- Qwen2.5-0.5B Q4_K_M: max 6.4% relative residual, avg 1.6%
- SmolLM2-135M Q8_0: max 2.9% relative residual
- Qwen3-8B Q4_K_M: max 23.5% (last layer outlier), avg 4.6%

Model substitution (wrong model entirely) would produce ~100% residual,
so there is clear separation for detecting basic cheating. The last-layer
outlier on larger models needs investigation.

GPU residuals (Metal/CUDA) not yet measured — this is the key open question.

Contains:
- freivalds-check.cpp: measurement tool (builds inside llama.cpp tree)
- README.md: full writeup with protocol design and analysis
