InfoWorld: “Chatbots like ChatGPT, Claude.ai, and Phind can be quite helpful, but you might not always want your questions or sensitive data handled by an external application. That’s especially true on platforms where your interactions may be reviewed by humans and otherwise used to help train future models. One solution is to download a large language model (LLM) and run it on your own machine. That way, an outside company never has access to your data. This is also a quick way to try new specialty models such as Meta’s recently announced Code Llama family of models, which are tuned for coding, and SeamlessM4T, aimed at text-to-speech and language translations. Running your own LLM might sound complicated, but with the right tools, it’s surprisingly easy. And the hardware requirements for many models aren’t crazy. I’ve tested the options presented in this article on two systems: a Dell PC with an Intel i9 processor, 64GB of RAM, and an Nvidia GeForce GPU with 12GB of VRAM (which likely wasn’t engaged while running much of this software), and a Mac with an M1 chip but just 16GB of RAM…”
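
The excerpt doesn’t name a specific tool, but as a rough illustration of how little code local inference can take, here is a minimal sketch using the llama-cpp-python bindings (my choice, not one named in the quote). It assumes you have already downloaded a quantized GGUF checkpoint; the model path and generation parameters below are placeholders.

```python
# Minimal sketch: run a local model with llama-cpp-python
# (pip install llama-cpp-python). The model file is a placeholder --
# point model_path at any GGUF checkpoint you've downloaded,
# e.g. a quantized Code Llama build.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/codellama-7b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # 0 = CPU only; raise this if GPU/Metal offload is available
)

response = llm(
    "Write a Python function that reverses a string.",
    max_tokens=256,
    temperature=0.2,
)

# llama-cpp-python returns an OpenAI-style completion dict
print(response["choices"][0]["text"])
```

On a machine like the 16GB M1 Mac mentioned above, a 4-bit quantized 7B model of this sort typically fits comfortably in memory and runs at a usable speed on the CPU.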