RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.



Verba

Hybrid (offline or cloud)

Weaviate's open-source RAG demo turned production app. Strong defaults, opinionated stack.

Editorial verdict: “Best for 'don't make me choose chunking strategy' teams. Opinionated stack works.”

RAG app
Free
BSD-3-Clause
★ 4.2 / 5
GitHub ★ 7,500
↗ GitHub

Compatibility at a glance

Which runtime and OS combos this app works with. The source of truth for "will it run on my setup?"

§ Runtimes supported
ollama · openai-compat · anthropic
§ OS / platform
linux · macos · windows
§ Hardware + model hint
Minimum VRAM
8 GB
Recommended starter model
Llama 3.1 8B + Weaviate embeddings
→ Build the rest of the stack with /stack-builder
→ Pick a GPU for this app
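
Before settling the rest of the stack, a 30-second preflight against the runtime is worth it. Here is a minimal sketch in Python, assuming Verba will talk to an Ollama server on its default port (11434) and that llama3.1:8b is the tag you intend to use; this is our illustration, not Verba's own tooling:

```python
# "Will it run?" preflight for the Ollama path, assuming Ollama's default
# port. Checks the server is reachable and the starter model is pulled.
import requests

OLLAMA = "http://localhost:11434"
WANT = "llama3.1:8b"  # assumption: the Ollama tag for the recommended starter model

try:
    # GET /api/tags lists the models available on the local Ollama server
    models = requests.get(f"{OLLAMA}/api/tags", timeout=5).json()["models"]
except requests.ConnectionError:
    raise SystemExit("Ollama is not reachable -- start it with `ollama serve`.")

names = [m["name"] for m in models]
if not any(n.startswith(WANT) for n in names):
    raise SystemExit(f"{WANT} missing -- run `ollama pull {WANT}`. Found: {names}")
print(f"OK: Ollama is up and {WANT} is available.")
```

If both checks pass, the runtime side of the table above is satisfied; the 8 GB VRAM floor is the remaining constraint.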

What it is

Verba is Weaviate's reference RAG app: ingest → chunk → embed → store → retrieve → generate, with a clean React UI. The selling point is the opinionated stack: you don't pick a vector store, embedder, or chunking strategy; you point it at your data. Talks to Ollama, OpenAI, Cohere, Anthropic. Good for teams that want to ship internal-doc chat fast.
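
If you want to see what that pipeline actually does, here is a deliberately naive sketch of the same loop, assuming a local Ollama server with nomic-embed-text and llama3.1:8b pulled. The embedder choice, the chunk width, the handbook.txt input, and the in-memory store are all our stand-ins; Verba's real stack embeds and stores in Weaviate:

```python
# Toy ingest -> chunk -> embed -> store -> retrieve -> generate loop.
# Illustrative only -- not Verba's code, just the shape of what it automates.
import requests

OLLAMA = "http://localhost:11434"
EMBED_MODEL = "nomic-embed-text"  # assumption: any Ollama embedding model works here
GEN_MODEL = "llama3.1:8b"

def embed(text: str) -> list[float]:
    # Ollama embeddings endpoint: {"model", "prompt"} -> {"embedding": [...]}
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

# ingest -> chunk: naive fixed-width chunks (Verba's defaults are smarter)
doc = open("handbook.txt").read()
chunks = [doc[i:i + 500] for i in range(0, len(doc), 500)]

# embed -> store: an in-memory list stands in for Weaviate
store = [(chunk, embed(chunk)) for chunk in chunks]

# retrieve: top-3 chunks by cosine similarity to the question
question = "What is our PTO policy?"
q_vec = embed(question)
top = sorted(store, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:3]

# generate: answer grounded only in the retrieved context
context = "\n---\n".join(chunk for chunk, _ in top)
r = requests.post(f"{OLLAMA}/api/generate", json={
    "model": GEN_MODEL,
    "stream": False,
    "prompt": f"Answer using only this context:\n{context}\n\nQ: {question}",
})
print(r.json()["response"])
```

Every decision buried in those comments (chunk width, embedder, top-k, prompt template) is exactly what Verba's opinionated defaults make for you.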

✓ Strengths

  • + Opinionated stack: fewer decisions
  • + Clean React UI with citation tracing
  • + Excellent default chunking and retrieval params

△ Caveats

  • − Tied to Weaviate (or you do the swap yourself)
  • − Less flexible than PrivateGPT if you want to swap components

About the RAG app category

Document retrieval + chat, fully offline-capable.

§ Other RAG apps
PrivateGPT

Best when air-gap compliance is the requirement. Less polished than AnythingLLM, more configurable.

Khoj

Best 'AI second brain' app. Self-hosted, local-first, works with Obsidian.

Where to go from here

Stack Builder →

Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.

Back to /apps →

The full directory — filter by category, runtime, OS, privacy posture, or VRAM.

Runtimes (/tools) →

What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.

Community benchmarks →

Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.