Running language models, image generators, and other AI systems on your own hardware: llama.cpp, Ollama, Stable Diffusion, and the broader ecosystem of tools for private, offline inference.