NVIDIA GeForce GTX 1650
Turing entry without RT/Tensor cores. 4 GB VRAM keeps it at the practical floor — 1-3B Q4 only. The 'I built a budget gaming PC' audience runs into VRAM walls almost immediately. Still works for tiny model experiments.
Estimated 15.4 tok/s, extrapolated from the card's 128 GB/s memory bandwidth. No measured benchmarks yet.
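A bandwidth extrapolation like this usually comes from the rule of thumb that token generation is memory-bound: each decoded token reads (roughly) every weight once, so tok/s is about effective bandwidth divided by model size in bytes. The sketch below illustrates that rule; the 0.3 efficiency fraction and the 4.5 bits/weight figure for Q4 (GGUF quantization carries some overhead above 4 bits) are assumptions for illustration, not the site's actual formula.

```python
def estimate_tok_per_s(bandwidth_gb_s, params_billions, bits_per_weight,
                       efficiency=0.3):
    """Rule-of-thumb decode speed for a memory-bound LLM:
    every generated token streams all weights through memory once.
    `efficiency` is an ASSUMED fraction of peak bandwidth actually
    achieved in practice; real values vary by card and runtime."""
    weights_gb = params_billions * bits_per_weight / 8  # model size in GB
    return bandwidth_gb_s * efficiency / weights_gb

# GTX 1650: 128 GB/s peak; a 3B model at ~4.5 bits/weight (Q4-ish)
print(round(estimate_tok_per_s(128, 3, 4.5), 1))  # → 22.8
```

With these assumptions the estimate lands inside the 15-25 tok/s range quoted in the verdict below; a different efficiency assumption shifts the single-point figure, which is why the headline 15.4 tok/s should be read as an order-of-magnitude estimate rather than a benchmark.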
Plain-English: too small to run modern chat models usefully, and vision models won't fit at all.
Verdicts are extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator who already owns it and wants to see whether local AI can run at all on a shoestring budget. It is not a purchase recommendation for anyone building a dedicated inference rig. With 4 GB VRAM and 128 GB/s bandwidth, the GTX 1650 handles 1-3B parameter models at Q4 (e.g., Phi-2, TinyLlama) at roughly 15-25 tok/s. That is usable for simple chat or code-completion experiments, but the experience is cramped. The VRAM ceiling is the hard stop: any model above 3B at Q4, or a 7B at any quantization, spills into system RAM and drops throughput to single digits. Without Tensor cores there is no mixed-precision matrix acceleration, so inference runs entirely on the card's CUDA shader cores. Pass on this card if the workload includes 7B models, RAG pipelines, or any multi-model setup. The used-market price of around $130 is fair only for a curiosity rig; a used RTX 3060 12 GB at $200 is the real entry point for practical local AI.
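The "3B at Q4 fits, 7B does not" claim can be sanity-checked with a rough VRAM budget: weights plus KV cache plus runtime overhead must stay under 4 GB. The KV-cache and overhead constants below are assumed ballpark values for illustration, not measurements.

```python
def fits_in_vram(params_billions, bits_per_weight, vram_gb=4.0,
                 ctx_tokens=4096, kv_gb_per_k_tokens=0.125, overhead_gb=0.6):
    """Rough VRAM check: weights + KV cache + runtime overhead.
    kv_gb_per_k_tokens and overhead_gb are ASSUMED ballpark values;
    real usage depends on model architecture and runtime."""
    weights_gb = params_billions * bits_per_weight / 8
    kv_gb = ctx_tokens / 1000 * kv_gb_per_k_tokens
    return weights_gb + kv_gb + overhead_gb <= vram_gb

print(fits_in_vram(3, 4.5))  # 3B at ~4.5 bits/weight → True
print(fits_in_vram(7, 4.5))  # 7B at ~4.5 bits/weight → False
```

Under these assumptions a 3B Q4 model needs roughly 2.8 GB all-in, while a 7B Q4 model needs over 5 GB, which is why it spills into system RAM on this card.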
Why this rating
The GTX 1650 is technically capable of running small models but is severely constrained by VRAM and lacks Tensor cores. It scores low because it cannot handle the 7B models that define the local AI baseline, making it a dead end for serious use.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 4 GB |
| Power draw | 75 W |
| Released | 2019 |
| MSRP | $149 |
| Backends | CUDA, Vulkan |
Models that fit
Open-weight models small enough to run on NVIDIA GeForce GTX 1650 with usable context.
Frequently asked
What models can NVIDIA GeForce GTX 1650 run?
Does NVIDIA GeForce GTX 1650 support CUDA?
How much does NVIDIA GeForce GTX 1650 cost?
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.