Generating reliably formatted JSON, XML, YAML, or other schema-constrained output. Grammar-constrained generation libraries (Outlines, Guidance, llama.cpp grammars) are the canonical solution.
Install llama-cpp-python (pip install "llama-cpp-python[server]" if you want the bundled HTTP server). Pull weights with ollama pull llama3.1:8b, or use llama.cpp directly with a GGUF file. Then load the model in Python:

from llama_cpp import Llama, LlamaGrammar
llm = Llama(model_path="llama3.1-8b.Q4_K_M.gguf", n_ctx=4096)
# GBNF grammar: a flat JSON object whose keys are name/age/city
grammar_text = r'''
root ::= object
object ::= "{" ws "\"" field "\"" ws ":" ws value ("," ws "\"" field "\"" ws ":" ws value)* "}"
field ::= "name" | "age" | "city"
value ::= string | number
string ::= "\"" [a-zA-Z0-9 ]* "\""
number ::= [0-9]+
ws ::= [ \t\n]*
'''
grammar = LlamaGrammar.from_string(grammar_text)  # the grammar kwarg takes a LlamaGrammar, not a raw string
output = llm("Generate a person record:", grammar=grammar, max_tokens=200)
print(output["choices"][0]["text"]) # Guaranteed valid according to grammar
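Illustrative output (exact values will vary): {"name": "Alice", "age": 34, "city": "Lisbon"}. Every generation parses with json.loads, because tokens that would break the grammar are masked out of the sampler before they can be chosen.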
Outlines (pip install outlines) converts a JSON Schema to a grammar automatically: outlines.generate.json(model, json_schema)(prompt). A short sketch appears below, after the hardware notes.

Structured generation is identical to text generation in hardware requirements. Llama 3.1 8B with grammar-constrained decoding runs at the same speed as unconstrained generation (50-80 tok/s on an RTX 3060 12 GB, ~$200-250); grammars add under 5% compute overhead. For a production API that must return valid JSON 100% of the time, a ~$400 build handles it, the same hardware as regular text generation. Pair the RTX 3060 with a Ryzen 5 5600 + 16 GB DDR4 + 512 GB NVMe. Total: ~$360-405. Structured generation turns local models from "usually correct JSON" into "provably correct by construction" with the same hardware.
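A minimal sketch of the Outlines route mentioned above, assuming the pre-1.0 outlines.generate API and a Hugging Face-hosted model; the model ID and schema here are illustrative:

import outlines

# Load any local Hugging Face model through Outlines' Transformers wrapper
model = outlines.models.transformers("meta-llama/Llama-3.1-8B-Instruct")

# JSON Schema for the same person record as the GBNF example above
schema = """{
  "type": "object",
  "properties": {
    "name": {"type": "string"},
    "age":  {"type": "integer"},
    "city": {"type": "string"}
  },
  "required": ["name", "age", "city"]
}"""

generator = outlines.generate.json(model, schema)  # compiles the schema into token-level constraints
person = generator("Generate a person record:")    # returns a parsed Python dict, not raw text
print(person["name"], person["age"], person["city"])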
Used RTX 3090 24 GB (~$700-900, see /hardware/rtx-3090). Runs Qwen 2.5 32B, or Llama 3.3 70B at aggressive (2-3-bit) quantization, with grammar-constrained decoding: production-grade structured output at scale. For an API that generates complex nested JSON (multi-level objects, arrays of objects, conditional fields) from natural-language queries, the grammar guarantee eliminates the entire class of "malformed JSON" errors. Serve via the llama.cpp server, which accepts a grammar per request (see the sketch below). Total: ~$1,800-2,200. For enterprise use, grammar-constrained generation is the difference between "prototype" and "production": no amount of retry logic beats "the model literally cannot output invalid JSON."
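A hedged sketch of that server path, assuming a llama-server instance already running locally (llama-server -m model.gguf --port 8080) and the grammar_text string from the example above; the grammar field is part of llama.cpp's /completion request body:

import json
import urllib.request

payload = {
    "prompt": "Generate a person record:",
    "n_predict": 200,
    "grammar": grammar_text,  # same GBNF grammar defined earlier
}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
print(body["content"])  # grammar-valid output on every request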
The mistake: using prompt engineering to request JSON ("Output ONLY valid JSON, no explanation") and building a production pipeline that parses the response with json.loads(), then waking up to 5% of requests failing because the model added a trailing comma, forgot a closing brace, or prepended explanatory text.

Why it fails: "Output ONLY JSON" is a request, not a constraint. The model generates tokens; on 95% of generations those tokens happen to parse as JSON, and on 5% they don't. In production at 10K requests/day, that is 500 failures/day, which means constant monitoring and retry logic.

The fix: use grammar-constrained generation (llama.cpp GBNF, Outlines, Guidance). The grammar constrains the token sampler, so every sampled token must be valid according to the grammar; the model cannot output invalid JSON because the invalid tokens are literally not in the sampling pool. Grammar-constrained generation turns 95% reliability into 100% reliability for structured output. For production systems, this is non-negotiable.
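For contrast, the fragile pattern described above looks like this (illustrative; retry_or_alert is a hypothetical fallback handler, and llm is the model object from earlier):

import json

raw = llm("Output ONLY valid JSON. Person record:", max_tokens=200)["choices"][0]["text"]
try:
    record = json.loads(raw)      # parses ~95% of the time
except json.JSONDecodeError:
    record = retry_or_alert(raw)  # hypothetical handler: the 5% tax this section is about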
Local AI workloads have real hardware constraints that vary by task type. VRAM ceiling decides what model fits; bandwidth decides decode speed; compute decides prefill speed. Pick the GPU tier that fits your actual workload, not the spec sheet.
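A quick sanity check on the decode numbers above, assuming decode is memory-bandwidth-bound: tokens/s ≈ VRAM bandwidth / bytes read per token. An RTX 3060's ~360 GB/s divided by a ~4.9 GB Q4_K_M 8B model gives roughly 73 tok/s, which is why the 50-80 tok/s figure is a bandwidth ceiling, not a compute limit.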
The errors most operators hit when running structured output generation locally. Each links to a diagnose+fix walkthrough.
Verify your specific hardware can handle structured output generation before committing money.