NVIDIA GeForce GTX 1050 Ti
Extrapolated from 112 GB/s bandwidth — 13.4 tok/s estimated. No measured benchmarks yet.
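How that estimate is derived isn't published; below is a minimal sketch of the standard bandwidth-bound back-of-envelope, where decode speed is effective memory bandwidth divided by bytes read per token. The efficiency factor and model size here are illustrative assumptions, not RunLocalAI's actual inputs.

```python
# Back-of-envelope decode speed for a memory-bound GPU: each generated
# token streams the full quantized weights once, at some fraction of
# the spec-sheet bandwidth. All inputs below are illustrative.

def estimate_tok_per_s(bandwidth_gb_s: float, weights_gb: float,
                       efficiency: float = 0.35) -> float:
    """Estimated decode tokens/sec = effective bandwidth / bytes per token."""
    return bandwidth_gb_s * efficiency / weights_gb

# GTX 1050 Ti: 112 GB/s spec bandwidth; a ~3B model at Q4 is roughly 2 GB.
print(f"{estimate_tok_per_s(112.0, 2.0):.1f} tok/s")  # ~19.6 tok/s
```

With these assumed inputs the formula lands inside the 15-25 tok/s range quoted below for 1-3B models; the 13.4 tok/s headline presumably uses different inputs.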
Plain-English: mainstream 7B chat models don't fit; only sub-3B models are usable, and vision models won't fit at all.
Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator who already owns one and wants to know if it can run anything, or the budget buyer who needs a display adapter and might as well try a tiny model. It is not for anyone building a serious local AI rig.
On a 1-3B parameter model at Q4 (e.g., Phi-2, TinyLlama), expect 15-25 tok/s — usable for chat but not fast. A 7B Q4 model (4.5 GB weights) will not fit in 4 GB VRAM with any usable context; it will spill to system RAM and drop to <5 tok/s. The card is strictly for sub-3B models.
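To see why a 7B spills, here is a rough fit check under simple assumptions; the KV-cache and overhead figures are illustrative, not measured.

```python
# Rough VRAM fit check: quantized weights + KV cache + runtime overhead
# must stay under the 4 GB budget. All sizes below are illustrative.

VRAM_GB = 4.0      # GTX 1050 Ti
OVERHEAD_GB = 0.6  # assumed CUDA context + framework buffers

def fits(weights_gb: float, kv_cache_gb: float) -> bool:
    return weights_gb + kv_cache_gb + OVERHEAD_GB <= VRAM_GB

# (model, approx Q4 weights in GB, assumed KV cache for ~2k context in GB)
for name, weights, kv in [
    ("TinyLlama 1.1B Q4", 0.7, 0.2),
    ("Phi-2 2.7B Q4",     1.7, 0.3),
    ("7B Q4",             4.5, 1.0),
]:
    verdict = "fits" if fits(weights, kv) else "spills to system RAM"
    print(f"{name}: {verdict}")
```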
4 GB VRAM is the absolute floor. No 7B model fits at Q4. No FP16 acceleration on Pascal means only quantized inference is viable. Software support is limited; modern frameworks may lack optimized kernels for this architecture.
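One quick way to detect this class of hardware at runtime is to read the CUDA compute capability: consumer Pascal reports 6.1. A hedged sketch using PyTorch follows; the cutoff is a simplification, and the datacenter GP100 (sm_60) is an exception with fast FP16.

```python
# Detect slow-FP16 GPUs at runtime via CUDA compute capability.
# Consumer Pascal reports sm_61 and runs FP16 at a small fraction of
# FP32 rate, so quantized (or FP32) inference is the practical path.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    if (major, minor) <= (6, 1):  # consumer Pascal and older
        print(f"{name} (sm_{major}{minor}): no fast FP16 -> prefer quantized")
    else:
        print(f"{name} (sm_{major}{minor}): FP16 inference is viable")
else:
    print("No CUDA device detected")
```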
Pass on this card if you want to run any model larger than 3B parameters, or need more than ~5 tok/s on a 7B. A used GTX 1060 6GB or an RTX 3050 8GB costs slightly more and opens up 7B models.
At ~$90 used, this is a cheap way to experiment with tiny models, but the VRAM ceiling makes it a dead end for serious local AI work.
Why this rating
The GTX 1050 Ti's 4 GB VRAM is the bare minimum for local AI, limiting it to sub-3B models at Q4. Lack of FP16 acceleration and low bandwidth further reduce its usability. It scores low because it cannot run the most common 7B models, which are the entry point for meaningful local AI workloads.
Overview
Pascal-era entry GPU. 4 GB of VRAM is the practical floor for any local model: it fits 1-3B parameter models at Q4 with room for short context. CUDA-compatible, but consumer Pascal has no FP16 acceleration, so quantized inference is the only viable path. The card anchors the second-hand floor for the 'do I need a new GPU?' audience.
Search-fallback links: editorial hasn't yet curated retailer URLs for this card. Approximate used price: $90.
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 4 GB |
| Power draw | 75 W |
| Released | 2016 |
| MSRP | $139 |
| Backends | CUDA, Vulkan |
Models that fit
Open-weight models small enough to run on NVIDIA GeForce GTX 1050 Ti with usable context.
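A sketch of how such a list can be derived from a catalog, given the 4 GB budget. The catalog entries, sizes, and headroom figure are illustrative assumptions, not RunLocalAI's actual pipeline.

```python
# Filter a model catalog down to entries that leave headroom for KV
# cache and runtime buffers within the VRAM budget. Entries and the
# 0.8 GB headroom figure are illustrative assumptions.

CATALOG = [
    {"name": "TinyLlama 1.1B Q4", "size_gb": 0.7},
    {"name": "Phi-2 2.7B Q4", "size_gb": 1.7},
    {"name": "Mistral 7B Q4", "size_gb": 4.4},
]

def models_that_fit(vram_gb: float, headroom_gb: float = 0.8) -> list[str]:
    """Keep models whose weights leave headroom for context and runtime."""
    return [m["name"] for m in CATALOG if m["size_gb"] + headroom_gb <= vram_gb]

print(models_that_fit(4.0))  # ['TinyLlama 1.1B Q4', 'Phi-2 2.7B Q4']
```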
Hardware worth comparing
The same VRAM tier plus the step above and below, so you can frame the buying decision against real options.
Frequently asked
What models can NVIDIA GeForce GTX 1050 Ti run?
Sub-3B models at Q4, such as Phi-2 or TinyLlama, at roughly 15-25 tok/s. 7B models do not fit in 4 GB VRAM with usable context.
Does NVIDIA GeForce GTX 1050 Ti support CUDA?
Yes. The card is CUDA-compatible, but consumer Pascal lacks FP16 acceleration, so quantized inference is the only viable path.
How much does NVIDIA GeForce GTX 1050 Ti cost?
MSRP was $139 at its 2016 launch; used cards sell for around $90.
Where next?
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.