Weaviate's open-source RAG demo turned production app. Strong defaults, opinionated stack.
Editorial verdict: “Best for 'don't make me choose a chunking strategy' teams. Opinionated stack works.”
Which runtime + OS combos this app works with. Source of truth for "will it run on my setup?"
Verba is Weaviate's reference RAG app: ingest → chunk → embed → store → retrieve → generate, with a clean React UI. The selling point is the opinionated stack: you don't assemble a vector store, embedder, or chunking strategy yourself, you just point it at your data. It talks to Ollama, OpenAI, Cohere, and Anthropic. Good for teams that want to ship internal-doc chat fast.
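If you want to see what that opinionated stack is doing for you, here is a minimal, self-contained sketch of the ingest → chunk → embed → store → retrieve → generate loop. It is illustrative only: the toy hashing embedder and in-memory list stand in for a real embedding model and Weaviate, and all function names are hypothetical, not Verba's API.

```python
# Hypothetical sketch of the pipeline Verba automates.
# Ingest -> chunk -> embed -> store -> retrieve -> generate.
import hashlib
import math


def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Fixed-size character chunking with overlap (one of many possible strategies)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def embed(text: str, dims: int = 64) -> list[float]:
    """Toy hashed bag-of-words vector; a real deployment calls an embedding model."""
    vec = [0.0] * dims
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Dot product of already-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))


# Ingest + store: embed each chunk and keep (vector, text) pairs in memory.
corpus = (
    "Verba ships an opinionated RAG stack. "
    "It chunks documents, embeds them, stores vectors in Weaviate, "
    "retrieves relevant chunks, and generates grounded answers."
)
store = [(embed(c), c) for c in chunk(corpus, size=80, overlap=20)]

# Retrieve: rank stored chunks by similarity to the query.
query = "How does Verba answer questions?"
qvec = embed(query)
top = sorted(store, key=lambda pair: cosine(qvec, pair[0]), reverse=True)[:2]

# Generate: assemble the grounded prompt that would go to an LLM (Ollama, OpenAI, ...).
prompt = (
    "Answer using only this context:\n"
    + "\n".join(text for _, text in top)
    + f"\n\nQuestion: {query}"
)
print(prompt)
```

Verba's value is that each of these stages ships preconfigured; the sketch just makes the moving parts visible.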
Document retrieval + chat, fully offline-capable.
Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.