
Local LLMs vs APIs: A Quick Cost Reality Check
Super short blog post today, just something I was talking about with a buddy a couple of weeks ago that I wanted to mention here. The ability to run large language models, like some of the distilled DeepSeek models, on local hardware using tools like Ollama is really cool. The tech is fun to play around with, and solutions like LM Studio make