398B parameters · Commercial OK · Reviewed May 2026

Jamba 1.5 Large

Jamba flagship at 398B total / 94B active. Frontier hybrid-architecture model with 256k context.

License: Jamba Open Model License · Released Aug 22, 2024 · Context: 262,144 tokens


How to run it

Jamba 1.5 Large is AI21's SSM-hybrid MoE model: 398B total parameters, 94B active per token. The SSM (Mamba) backbone keeps per-token state at a fixed size, so KV-cache overhead is far lower than pure attention; that is what makes 256K-class context practical at this scale. The catch is that all 398B parameters must be resident even though only 94B are active per token: the Q4_K_M GGUF is ~230 GB on disk and wants roughly 260 GB of aggregate VRAM, which puts it firmly in multi-GPU territory (for example 4x 80 GB datacenter cards, or an 8x 48 GB node). Run it with llama.cpp and -ngl 999 -fa -c 32768, split across GPUs; a launch sketch follows below. Decode is sequential through the SSM layers, so per-token generation speed trails pure-attention models on equivalent hardware; the payoff is that context growth costs far less memory, making 32K+ workloads realistic once the weights fit.
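A minimal launch sketch, assuming a Jamba-capable llama.cpp build and a Q4_K_M GGUF you have quantized yourself; the filename and the 4-way even split are placeholders, not published artifacts:

    # Hypothetical 4-GPU launch; the GGUF is one you produce yourself
    # (see "Get the model" below), not a published download.
    ./llama-cli -m jamba-1.5-large-Q4_K_M.gguf \
      -ngl 999 -fa -c 32768 \
      --split-mode layer --tensor-split 1,1,1,1 \
      -p "Summarize the changelog below:" -n 256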

Hardware guidance

Minimum: ~260 GB of aggregate VRAM for Q4_K_M; no single consumer GPU is in scope for this model at any useful quantization. VRAM math: all 398B parameters must stay resident (MoE routing activates the 94B subset per token, but every expert is loaded), and at Q4_K_M's ~0.58 bytes/param that is roughly 230 GB for weights alone, matching the quantization table below. SSM (Mamba) layers use a fixed state size regardless of context, and only a minority of layers are attention, so the KV cache at 32K context adds comparatively little on top; budget ~260 GB total. This is the key advantage of the hybrid architecture: once the weights fit, context grows cheaply. Practical shapes: 4x A100/H100 80 GB, an 8x 48 GB workstation node with row split, or a 512 GB unified-memory Mac Studio (the weights fit, but expect slow prompt processing at this scale). If the budget is a single 24-48 GB card, run Jamba 1.5 Mini instead.
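The same arithmetic as a back-of-envelope check; the bytes-per-parameter figure is inferred from the quantization table below, and the 30 GB headroom is an assumption:

    # Rough Q4_K_M memory estimate (decimal GB); illustrative, not measured.
    awk 'BEGIN {
      params     = 398e9                 # total parameters; all experts resident
      weights_gb = params * 0.58 / 1e9   # ~231 GB of weights at ~0.58 B/param
      total_gb   = weights_gb + 30       # KV cache + activation headroom (assumed)
      printf "weights: %.0f GB, plan for: %.0f GB\n", weights_gb, total_gb
    }'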

What breaks first

  1. SSM kernel on older GPUs. Mamba kernels require CUDA 11.8+ and SM 7.5+ (Turing or newer). GTX 10-series (Pascal) won't run Jamba. Verify CUDA compute capability before deploying; a one-liner follows this list.
  2. Ollama SSM support is immature. Jamba's hybrid architecture may not be fully wired into Ollama's default llama.cpp backend. Test with raw llama.cpp first.
  3. Per-token latency on SSM layers. SSM decode is sequential, so generation speed at small batch sizes is slower than attention on high-end GPUs. Jamba trades throughput for context efficiency.
  4. Training data cutoff. Jamba 1.5's knowledge stops at its training date. RAG or web grounding is needed for current information.
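The check for failure mode 1; the compute_cap query field needs a reasonably recent NVIDIA driver:

    # Mamba CUDA kernels need compute capability >= 7.5 (Turing or newer)
    nvidia-smi --query-gpu=name,compute_cap --format=csv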

Runtime recommendation

llama.cpp with -ngl 999 is the primary option; it has the most mature Jamba/SSM support in the GGUF ecosystem. vLLM may have experimental Jamba support depending on version, but verify against its supported-models list before committing (a hedged serve sketch follows below). Avoid Ollama unless you confirm Jamba 1.5 Large appears in its supported model list. Avoid MLX-LM; SSM kernels on Apple Silicon are not optimized.
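If your vLLM build does list Jamba, a serve invocation might look like the sketch below; the tensor-parallel degree and context cap are assumptions sized for a 4x 80 GB node, not tested settings:

    # Assumes a vLLM version with Jamba support and 4x 80 GB GPUs; verify first.
    vllm serve ai21labs/AI21-Jamba-1.5-Large \
      --tensor-parallel-size 4 \
      --max-model-len 32768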

Common beginner mistakes

  • Mistake: Assuming Jamba runs like a standard Llama. Fix: Jamba is SSM-hybrid, with different bottlenecks (sequential SSM decode vs parallel attention). Benchmark your workload specifically; see the llama-bench sketch after this list.
  • Mistake: Expecting single-GPU deployment or 100+ tok/s decode. Fix: the full 398B weights exceed any consumer card, and sequential SSM layers cap per-token generation speed even on server hardware. Long-context efficiency is the tradeoff.
  • Mistake: Setting 256K context and expecting it to work. Fix: SSM enables longer contexts than pure attention at the same memory, but 256K still needs real headroom beyond the ~260 GB weight budget. Start at 32K, benchmark, and scale up.
  • Mistake: Using automatic mixed precision (AMP) or untested custom quants. Fix: SSM precision sensitivity differs from attention. Q4_K_M is well-tested; exotic quants may produce numerical instability.
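To benchmark your own stack instead of inheriting attention-model numbers, llama.cpp's bundled llama-bench works on any GGUF; the filename below is the same assumed local quant as in the earlier sketch:

    # Prompt-processing (-p) and decode (-n) throughput on your hardware
    ./llama-bench -m jamba-1.5-large-Q4_K_M.gguf -ngl 999 -p 512 -n 128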

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Parent / base model
  • Jamba 1.5 Mini (52B) · Workstation tier
Family siblings (jamba-1.5)
  • Jamba 1.5 Mini (52B) · Workstation tier
  • Jamba 1.5 Large (398B) · you are here

Strengths

  • 256k context at frontier scale
  • Hybrid architecture

Weaknesses

  • Cluster-only deployment

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          230.0 GB     260 GB

Get the model

HuggingFace

Original weights

huggingface.co/ai21labs/AI21-Jamba-1.5-Large

Source repository; no prequantized GGUFs are linked here, so you quantize it yourself (a sketch follows below).
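A hedged sketch of that conversion path with llama.cpp's stock tooling; it assumes a Jamba-capable checkout and on the order of 800 GB of free disk for the intermediate F16 file:

    # Download original weights, convert to GGUF, then quantize to Q4_K_M.
    # Output names are placeholders.
    huggingface-cli download ai21labs/AI21-Jamba-1.5-Large --local-dir ./jamba-large
    python3 convert_hf_to_gguf.py ./jamba-large --outfile jamba-1.5-large-f16.gguf
    ./llama-quantize jamba-1.5-large-f16.gguf jamba-1.5-large-Q4_K_M.gguf Q4_K_M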

Hardware that runs this

Cards with enough VRAM for at least one quantization of Jamba 1.5 Large.

NVIDIA GB200 NVL72
13,824 GB · nvidia
AMD Instinct MI355X
288 GB · amd

Frequently asked

What's the minimum VRAM to run Jamba 1.5 Large?

260 GB of VRAM is enough to run Jamba 1.5 Large at the Q4_K_M quantization (file size 230.0 GB). Higher-quality quantizations need more.

Can I use Jamba 1.5 Large commercially?

Yes — Jamba 1.5 Large ships under the Jamba Open Model License, which permits commercial use. Always read the license text before deployment.

What's the context length of Jamba 1.5 Large?

Jamba 1.5 Large supports a context window of 262,144 tokens (256K).

Source: huggingface.co/ai21labs/AI21-Jamba-1.5-Large

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Related — keep moving

Compare hardware
  • Dual 3090 vs RTX 5090 (48 GB or 32 GB) →
  • RTX 3090 vs RTX 4090 →
Buyer guides
  • 16 GB vs 24 GB VRAM — what 70B-class models need →
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Recommended hardware
  • NVIDIA GB200 NVL72 →
  • AMD Instinct MI355X →
Alternatives
  • Jamba 1.5 Mini
Before you buy

Verify Jamba 1.5 Large runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →
Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • DeepSeek V4 Pro (1.6T MoE)
    deepseek · 1600B
    unrated
  • Qwen 3.5 235B-A17B (MoE)
    qwen · 397B
    unrated
  • Qwen 3 235B-A22B
    qwen · 235B
    unrated
  • DeepSeek V4 Flash (284B MoE)
    deepseek · 284B
    unrated
Step up
More capable — bigger memory footprint
No verdicted models in the next tier up yet.
Step down
Smaller — faster, runs on weaker hardware
  • Llama 3.3 70B Instruct
    llama · 70B
    9.1/10
  • DeepSeek R1 Distill Llama 70B
    deepseek · 70B
    9.0/10