You're filing a reproduction. Same model, same hardware. Tell us what you measured. If your numbers match the original within ±15%, the original benchmark's reproduction count goes up and its confidence tier gets a lift.
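For concreteness, here's a minimal sketch of how a ±15% window could be checked. The function name and the relative-difference definition are assumptions for illustration, not the site's published matching code:

```ts
// Hypothetical helper: is a reproduced measurement within ±15% of the original?
// The relative-difference rule below is an assumption, not the site's actual logic.
function withinTolerance(original: number, reproduction: number, tol = 0.15): boolean {
  if (original === 0) return reproduction === 0; // guard against division by zero
  return Math.abs(reproduction - original) / Math.abs(original) <= tol;
}

// Example: original 42.0 tok/s, reproduction 38.5 tok/s
// |38.5 - 42.0| / 42.0 ≈ 0.083, inside the 0.15 window
console.log(withinTolerance(42.0, 38.5)); // true
```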
The form below is pre-filled with the original benchmark's model, hardware, quant format, and context size. Update any field that differs in your run (e.g. runtime version, quant format, OS). Reproductions still go through editorial review; we don't auto-publish, but a successful reproduction lifts the original's confidence tier and appears in its reproduction tree.
1. Submit. Fill in the form below. Required: model + hardware + at least one measurement. Everything else is optional but helps reviewers triage faster (see the payload sketch after these steps).
2. Review. A named RunLocalAI editor reviews your submission within 1-7 days. We check that the numbers are plausible, the model + hardware combo makes sense, and there's nothing dishonest going on. Suspicious submissions are rejected; we don't publish what we can't trust.
3. Publish. Approved submissions render with a clear "Community submitted" badge. They never get mixed in with our editorial measurements. If we later reproduce them on similar hardware, the badge upgrades to "Reproduced".
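To make the required-versus-optional split concrete, here's a hedged sketch of what a submission payload might look like. The field names are illustrative assumptions, not the form's actual schema:

```ts
// Hypothetical shape of a reproduction submission; field names are
// illustrative, not the site's real schema.
interface ReproductionSubmission {
  // Required: model + hardware + at least one measurement
  model: string;                  // e.g. "Llama-3.1-8B-Instruct"
  hardware: string;               // e.g. "RTX 4090, 24 GB"
  measurements: { metric: string; value: number; unit: string }[];

  // Optional, but helps reviewers triage faster
  quantFormat?: string;           // e.g. "Q4_K_M"
  contextSize?: number;           // tokens
  runtimeVersion?: string;
  os?: string;
  submitterName?: string;         // used only to credit you
  submitterEmail?: string;        // used only for follow-up questions
}

// Mirrors the stated minimum: model, hardware, and at least one measurement.
function isComplete(s: ReproductionSubmission): boolean {
  return s.model.trim() !== "" && s.hardware.trim() !== "" && s.measurements.length > 0;
}
```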
Why we moderate and what we won't accept →
Name and email are optional. If you provide them, we'll only use them to credit you on the published benchmark and to ask follow-up questions. We never sell or share submitter data.
We do hash your IP for rate-limiting and duplicate detection (max 3 submissions per hour). We never store the raw IP. The hash includes a daily salt so it can't be used to track you across days.
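Here's a minimal sketch of a daily-salted IP hash of the kind described above; the hash algorithm and key format are assumptions, not our actual implementation:

```ts
// Hypothetical daily-salted IP hash for rate limiting. Only the digest is
// ever stored; the raw IP is discarded immediately.
import { createHash } from "node:crypto";

function rateLimitKey(ip: string, dailySalt: string): string {
  // Same IP + same day's salt yields the same key, so submissions within a
  // day can be counted; a fresh salt each day breaks cross-day linkage.
  return createHash("sha256").update(`${dailySalt}:${ip}`).digest("hex");
}

// Usage sketch: count submissions per key over a rolling 1-hour window
// and reject once the count for this key reaches 3.
const key = rateLimitKey("203.0.113.7", "2025-01-15:random-daily-salt");
```

Because the salt rotates daily, the same IP maps to a different key tomorrow, so stored keys can't be joined across days.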
Got a correction or a workflow report instead? Send feedback →