A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.

In this article:
- The GPT-4 barrier was comprehensively broken
- Some of those GPT-4 models run on my laptop
- LLM prices crashed, thanks to competition and increased efficiency
- Multimodal vision is common, audio and video are starting to emerge
- Voice and live camera mode are science fiction come to life
- Prompt driven app generation is a commodity already
- Universal access to the best models lasted for just a few short months
- “Agents” still haven’t really happened yet
- Evals really matter
- Apple Intelligence is bad, Apple’s MLX library is excellent
- The rise of inference-scaling “reasoning” models
- Was the best currently available LLM trained in China for less than $6m?
- The environmental impact got better
- The environmental impact got much, much worse
- The year of slop
- Synthetic training data works great
- LLMs somehow got even harder to use
- Knowledge is incredibly unevenly distributed
- LLMs need better criticism
- Everything tagged “llms” on my blog in 2024