NV-Embed v2

other · 7.85B parameters · Restricted · Reviewed May 2026

NVIDIA's research-grade embedding model. Mistral-7B base. Top of MTEB at release.

License: CC-BY-NC 4.0 · Released Sep 9, 2024 · Context: 32,768 tokens

Overview

NV-Embed v2 is NVIDIA's research-grade text-embedding model, built on a Mistral-7B base (7.85B parameters). It topped the MTEB leaderboard at release; the trade-off is a CC-BY-NC 4.0 license that restricts commercial use.

Strengths

  • MTEB leader at release

Weaknesses

  • Non-commercial license

Quantization variants

Each quantization trades model quality for file size and VRAM. For NV-Embed v2, only the original FP16 weights are published; no pre-made GGUF quantizations are listed, so lower-precision variants such as Q4_K_M would have to be produced from the source weights yourself (see "Get the model" below).

Quantization   File size   VRAM required
FP16           15.0 GB     18 GB
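
Sanity check on those numbers: 7.85B parameters at 2 bytes each is roughly 15.7 GB of weights, and runtime overhead pushes the working footprint toward the 18 GB figure above. The sketch below is a back-of-the-envelope estimate only; the byte widths for the GGUF quants and the 20% overhead factor are assumptions, not measured values.

```python
# Back-of-the-envelope VRAM estimate for NV-Embed v2 (7.85B parameters).
# Bytes-per-parameter values and the ~20% runtime overhead factor are
# illustrative assumptions, not figures measured on this site.

PARAMS = 7.85e9  # parameter count from the model page

BYTES_PER_PARAM = {
    "FP16": 2.0,       # original published precision
    "Q8_0": 1.0625,    # typical 8-bit GGUF width (assumption)
    "Q4_K_M": 0.5625,  # typical 4-bit GGUF width (assumption)
}

OVERHEAD = 1.20  # rough allowance for activations and framework buffers

for name, bpp in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * bpp / 1e9  # weight file size in decimal GB
    vram_gb = weights_gb * OVERHEAD  # working VRAM estimate
    print(f"{name:>7}: ~{weights_gb:.1f} GB weights, ~{vram_gb:.0f} GB VRAM")

# FP16: ~15.7 GB weights, ~19 GB VRAM -- in line with the 15.0 GB file
# and 18 GB VRAM figures quoted above.
```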

Get the model

Hugging Face (original weights)

huggingface.co/nvidia/NV-Embed-v2

Source repository; you must quantize the weights yourself.
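
To pull the original weights locally, a minimal sketch using huggingface_hub; the target directory is just an example, and you may need to log in first if the repository gates downloads.

```python
# Minimal sketch: download the original NV-Embed v2 weights from Hugging Face.
# Assumes `pip install huggingface_hub`; run `huggingface-cli login` first if
# the repository requires authentication or license acceptance.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="nvidia/NV-Embed-v2",
    local_dir="./models/nv-embed-v2",  # example path, change to taste
)
print(f"Weights downloaded to: {local_path}")
```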

Hardware that runs this

Cards with enough VRAM for at least one quantization of NV-Embed v2.

  • NVIDIA GB200 NVL72 · 13,824 GB · nvidia
  • AMD Instinct MI355X · 288 GB · amd
  • AMD Instinct MI325X · 256 GB · amd
  • AMD Instinct MI300X · 192 GB · amd
  • NVIDIA B200 · 192 GB · nvidia
  • NVIDIA H100 NVL · 188 GB · nvidia
  • NVIDIA H200 · 141 GB · nvidia
  • Intel Gaudi 3 · 128 GB · intel

Frequently asked

What's the minimum VRAM to run NV-Embed v2?

18 GB of VRAM is enough to run NV-Embed v2 at FP16 (15.0 GB of weights), the only published precision. Lower-precision quantizations, if you produce them yourself, need less.

Can I use NV-Embed v2 commercially?

NV-Embed v2 is released under CC-BY-NC 4.0, a non-commercial license. Review the license terms before using it in a product.

What's the context length of NV-Embed v2?

NV-Embed v2 supports a context window of 32,768 tokens (32K).
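
If you want to exercise that window, here is a hedged loading sketch via sentence-transformers with remote code enabled. The exact recommended usage, including instruction prefixes for retrieval queries, is defined by the model card, so treat this as a starting point rather than the canonical recipe.

```python
# Sketch: embedding documents with NV-Embed v2 via sentence-transformers.
# Loading through SentenceTransformer with trust_remote_code=True is an
# assumption -- check huggingface.co/nvidia/NV-Embed-v2 for the exact
# recommended usage, including query instruction prefixes.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nvidia/NV-Embed-v2", trust_remote_code=True)
model.max_seq_length = 32768  # use the full 32K context window

docs = ["A long passage to embed ..."]
embeddings = model.encode(docs, normalize_embeddings=True)
print(embeddings.shape)  # (num_docs, embedding_dim)
```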

Source: huggingface.co/nvidia/NV-Embed-v2

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Related — keep moving

Compare hardware
  • 4060 Ti 16 GB vs 4070 Ti Super →
  • Arc B580 vs 4060 Ti 16 GB →
Buyer guides
  • Best budget GPU — for 7B-13B models →
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Recommended hardware
  • NVIDIA GB200 NVL72 →
  • AMD Instinct MI355X →
  • AMD Instinct MI325X →
  • AMD Instinct MI300X →
  • NVIDIA B200 →
Before you buy

Verify NV-Embed v2 runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →
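
Quickest first step: check what you already have. A minimal sketch, assuming PyTorch with CUDA support is installed, that compares each visible GPU against the 18 GB FP16 threshold from the quantization table above.

```python
# Quick check: does the local GPU have enough VRAM for NV-Embed v2 at FP16?
# Assumes PyTorch with CUDA support; the 18 GB threshold comes from the
# quantization table above.
import torch

REQUIRED_GB = 18

if not torch.cuda.is_available():
    print("No CUDA-capable GPU detected.")
else:
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        total_gb = props.total_memory / 1e9
        verdict = "enough" if total_gb >= REQUIRED_GB else "too small"
        print(f"GPU {i}: {props.name}, {total_gb:.1f} GB -> {verdict}")
```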
Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • DeepSeek R1 Distill Qwen 7B
    deepseek · 7B
    unrated
  • DeepSeek R1 Distill Llama 8B
    deepseek · 8B
    unrated
  • Codestral Mamba 7B
    mistral · 7B
    unrated
  • Llama 3.1 8B Instruct
    llama · 8B
    8.7/10
Step up
More capable — bigger memory footprint
  • Qwen 3 14B
    qwen · 14B
    8.8/10
  • Phi-4 14B
    phi · 14B
    8.6/10
Step down
Smaller — faster, runs on weaker hardware
  • Gemma 3 4B
    gemma · 4B
    7.5/10
  • Llama 3.2 3B Instruct
    llama · 3B
    7.4/10