Dolphin 3 Llama 3.3 70B
Eric Hartford's Dolphin 3 on a Llama 3.3 70B base. A less-restricted alternative for creative / unconstrained workflows.
Overview
Dolphin 3 Llama 3.3 70B is Cognitive Computations' (Eric Hartford's) uncensored fine-tune of Llama 3.3 70B, designed to remove alignment guardrails and respond to prompts that base Llama would refuse. License: Dolphin releases are typically Apache 2.0, but verify on the Hugging Face repo. Context: inherits Llama 3.3's 128K window (practically 4-8K on 48 GB hardware).
How to run it
Run it at Q4_K_M via Ollama (ollama pull dolphin3:70b) or llama.cpp with -ngl 999 -fa -c 8192. The Q4_K_M file is ~40 GB on disk.
- Minimum VRAM: 48 GB for Q4_K_M. An RTX A6000 (48 GB) handles 4K context; an RTX 4090 (24 GB) needs Q3_K_M with KV offload.
- Recommended: A100 80GB at AWQ-INT4.
- Throughput: ~15-25 tok/s on an A6000 at Q4_K_M.
- Compatibility: standard Llama 3.3 architecture, so existing tooling works unchanged.
Dolphin's key characteristic is removed refusal training: it will not say "as an AI I cannot...". Use it for content generation without alignment filters, adversarial testing, or creative writing with few constraints. The tradeoff is that it is less guardrailed than standard Llama and may produce harmful content if prompted.
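If you serve it through Ollama, any client can hit the local REST endpoint. A minimal sketch in Python, assuming you've already pulled the model; the dolphin3:70b tag follows the pull command above and may differ on your install:

```python
# Minimal sketch: query Dolphin through Ollama's local REST API.
# Assumes `ollama pull dolphin3:70b` has completed; the tag name follows
# this guide and may differ on your install.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "dolphin3:70b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,               # one JSON object instead of a stream
        "options": {"num_ctx": 8192},  # match the -c 8192 suggested above
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Explain KV cache offload in two sentences."))
```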
Hardware guidance
Minimum: RTX 3090 24GB at Q3_K_M (4K context, with partial offload). Recommended: RTX A6000 48GB at Q4_K_M (8K). Optimal: A100 80GB at AWQ-INT4, which frees enough headroom for 32K context. The VRAM math is identical to base Llama 3.3 70B: a 70B dense model at Q4 is ≈ 40 GB of weights plus ~10 GB of KV cache at 8K context, for ~50 GB total, which makes a 48 GB A6000 borderline. Q3 is ≈ 30 GB, so a 24 GB RTX 4090 needs offload. Dual RTX 4090s (48 GB combined) handle Q4 at 8K. A Mac Studio M4 Max 64GB runs Q4 at 5-10 tok/s. In the cloud, an A100 80GB costs roughly $5-10/hr. Hardware requirements match base Llama 3.3 70B exactly: the fine-tune changes neither the architecture nor the parameter count.
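The same arithmetic as a quick sanity check. The bits-per-weight constants below are rough values picked to reproduce this guide's file sizes, and the flat ~10 GB KV figure is the one used above; real usage also varies with runtime overhead, GQA layout, and cache dtype.

```python
# Back-of-envelope VRAM estimator matching the math above. Constants are
# approximations chosen to reproduce this guide's figures, not exact
# bits-per-weight for each format.
BITS_PER_WEIGHT = {"Q3_K_M": 3.5, "Q4_K_M": 4.6, "AWQ-INT4": 4.6}

def weights_gb(params_billions: float, quant: str) -> float:
    # params are in billions, so billions * bits / 8 gives gigabytes
    return params_billions * BITS_PER_WEIGHT[quant] / 8

def total_gb(params_billions: float, quant: str, kv_cache_gb: float = 10.0) -> float:
    # kv_cache_gb defaults to the guide's ~10 GB estimate at 8K context
    return weights_gb(params_billions, quant) + kv_cache_gb

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{total_gb(70, quant):.0f} GB total at 8K context")
# Q4_K_M: ~40 GB weights + ~10 GB KV cache ≈ 50 GB, so a 48 GB A6000 is borderline.
```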
What breaks first
1. Unfiltered outputs. Dolphin will comply with harmful, toxic, or illegal requests that base Llama refuses. That is the point of "uncensored", but it means you are responsible for output filtering in your application.
2. Quality regression on refused topics. Removing refusals can accidentally degrade quality on the topics base Llama was aligned to refuse: the model may produce lower-quality responses instead of refusing.
3. System prompt bypass. Standard system-prompt safety instructions are less effective on Dolphin. If you want guardrails, implement them in your application layer, not the system prompt (see the sketch after this list).
4. Abliteration artifacts. The uncensoring process (abliteration / RLHF removal) may introduce artifacts: repetition, logical inconsistencies, or degraded coherence on specific prompt types. Test your use case.
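A hypothetical application-layer guardrail, building on the Ollama helper sketched earlier. is_allowed() is a stand-in: in production you'd call a real moderation model or API rather than a keyword list.

```python
# Hypothetical application-layer filter: Dolphin ignores system-prompt
# safety instructions, so screen outputs *after* generation instead.
# `generate()` is the Ollama helper from the earlier sketch.
BLOCKLIST = ("example banned phrase",)  # placeholder; use a real moderation model

def is_allowed(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def safe_generate(prompt: str) -> str:
    reply = generate(prompt)  # defined in the earlier Ollama sketch
    if not is_allowed(reply):
        return "[response withheld by application-layer filter]"
    return reply
```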
Runtime recommendation
Ollama is the simplest path (ollama pull dolphin3:70b); llama.cpp gives finer control over offload and context via -ngl 999 -fa -c 8192. Both consume the same GGUF quants described above.
Common beginner mistakes
- Mistake: deploying Dolphin in a customer-facing chatbot without output filtering. Fix: Dolphin has no refusal training and will comply with harmful prompts; implement content filtering in your application layer.
- Mistake: assuming Dolphin is "better" at all tasks because it is uncensored. Fix: uncensored ≠ higher quality. Dolphin may score lower on standard benchmarks due to abliteration artifacts; use base Llama 3.3 for most production tasks.
- Mistake: using system-prompt guardrails and expecting them to work. Fix: Dolphin's refusal mechanisms are removed and system-prompt safety instructions are largely ignored; don't rely on them.
- Mistake: mixing Dolphin quant files with standard Llama 3.3 70B quant files. Fix: Dolphin is a fine-tune, so the weights differ. Use Dolphin-specific GGUF files (see the sketch below for locating them).
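Since the source repo ships original weights only, Dolphin GGUFs come from community quant repos. A sketch for locating them with huggingface_hub; the search string is an assumption, so verify any repo you pick actually derives from the Dolphin fine-tune:

```python
# Sketch: search Hugging Face for community GGUF quants of Dolphin 3 so
# you don't accidentally grab base Llama 3.3 files. The search string is
# an assumption; confirm the repo's lineage before downloading.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(search="Dolphin3 Llama 3.3 GGUF", limit=10):
    print(model.id)
```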
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Less censored than base Llama
Weaknesses
- Smaller community than base Llama 3.3
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q3_K_M | ~30 GB | 24 GB (with KV offload) |
| Q4_K_M | ~40 GB | 48 GB |
| AWQ-INT4 | 40.0 GB | 48 GB |
Get the model
HuggingFace
Original weights
Source repository with the original weights; there are no official quants, so direct quantization is required.
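A minimal download sketch with huggingface_hub, assuming the repo id from the source link below; you'd then convert and quantize yourself (for example with llama.cpp's conversion script):

```python
# Minimal sketch: fetch the original safetensors weights, then quantize
# locally (e.g. with llama.cpp's convert script). Repo id is taken from
# the source link in this guide; verify it before a ~140 GB download.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="cognitivecomputations/Dolphin3-Llama3.3-70B",
    allow_patterns=["*.safetensors", "*.json", "tokenizer*"],  # skip extras
)
print("weights downloaded to", local_dir)
```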
Hardware that runs this
Cards with enough VRAM for at least one quantization of Dolphin 3 Llama 3.3 70B.
Frequently asked
What's the minimum VRAM to run Dolphin 3 Llama 3.3 70B?
48 GB (e.g. an RTX A6000) for Q4_K_M at 4K context. 24 GB cards like the RTX 4090 can run Q3_K_M with KV offload.
Can I use Dolphin 3 Llama 3.3 70B commercially?
Dolphin releases are typically Apache 2.0, which permits commercial use, but verify the license on the Hugging Face repo before committing.
What's the context length of Dolphin 3 Llama 3.3 70B?
128K, inherited from Llama 3.3. Practically 4-8K on a 48 GB card; AWQ-INT4 on an A100 80GB enables 32K.
Source: huggingface.co/cognitivecomputations/Dolphin3-Llama3.3-70B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Verify Dolphin 3 Llama 3.3 70B runs on your specific hardware before committing money.