Some thoughts on AI
2 min read · Apr 25, 2024
Let’s cut through the hype
I’ve been writing software professionally for nearly 40 years. Here are my thoughts:
1. Large language models (LLMs) like ChatGPT are basically very large predictive text boxes. Instead of just guessing the next word, they can guess whole documents, while burning enough energy to keep a family warm for a year. (A toy sketch of that next-word guessing follows this list.)
2. As a consequence of 1, and this cannot be stressed enough, they are not databases. You can ask questions, sure, but there are no facts as such. They will make things up. For example, they’ve been known to cite academic papers that perhaps should exist, but don’t.
3. Machine learning more broadly now has things like discriminator functions, which can say whether a piece of data is likely to fall in a particular mathematical space. We now have the computing power to make millions of those calculations very quickly. This is where the self-driving car comes from: a very mechanical sifting of data, done very fast. There’s nothing intelligent about it except the algorithms, and those are an entirely human creation. These machines can drive very successfully in dry conditions in California, and as far as I know nowhere else. (A minimal sketch of such a discriminator check also follows this list.)
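To make point 1 concrete, here is a deliberately tiny sketch of next-word guessing built from a bigram count table. This is my own toy illustration, not how any production LLM works: real models use neural networks trained on enormous amounts of text, but the underlying objective is the same one your phone keyboard has, predict what comes next.

```python
from collections import Counter, defaultdict

# Toy corpus for illustration only; real models train on far more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- the most frequent follower of 'the'
```

Chain the predictions and you get whole sentences, then whole documents; scale it up by many orders of magnitude and you get something that sounds plausible without ever consulting a fact.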
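And for point 3, a discriminator at its simplest is just a function that reports which side of a boundary a data point falls on. The weights below are made up for illustration; in a real system they would be fitted to labelled data, and the “intelligence” amounts to doing this arithmetic millions of times per second.

```python
import numpy as np

# Illustrative weights and bias for a linear discriminator.
# In practice these would be learned from labelled training data.
w = np.array([0.8, -0.5])
b = 0.1

def discriminate(x):
    """Return True if the point x falls on the 'positive' side of the boundary."""
    return float(np.dot(w, x) + b) > 0.0

# Fast, mechanical sifting: one dot product and a comparison per data point.
print(discriminate(np.array([1.0, 0.2])))  # True
print(discriminate(np.array([0.1, 1.0])))  # False
```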
To illustrate point 2, the Dilbert creator, Scott Adams, recently had a “conversation” with ChatGPT where it convinced him of various things…