LLMs are lossy. So what?
LLMs are never going to perfectly simulate human consciousness. That's not the end goal. Like any earth-shattering technology, we will see just how far it can be developed.
In a wide-ranging interview with Ted Chiang in the Los Angeles Review of Books (I love that their acronym is LARB), the acclaimed sci-fi author pushes back against the notion that mathematics could serve as a universal language: "Math can describe physical phenomena with incredible precision, but it's terrible at describing human experiences."
This made me stop for a moment. Let's say this is true: that we can't describe human experiences with math. And if we agree that what LLMs and the whole neural network project are trying to do is exactly that (i.e., simulate or approximate or generate human experiences with math — debate me), then we have to conclude that this push towards AGI will fail. It could very well be that the ultimate goal (of OpenAI, Google, et al.) is not about humanity at all. We can generate experiences and create this thing that can do fantastic things, and yes, we might be able to convince it to do things that benefit humanity, but ultimately the goal seems to me to be (let's be honest here): "Isn't it cool that we can do this with math?" Isn't it cool that we can fool people into having relationships with our neural network? Isn't it cool that we can make Excel spreadsheets and write papers and summarize all these documents (oh, btw, we need more money to buy more compute and electricity)?

It's all math. All the text documents we're shoveling into the corpus in pre-training get translated down to binary, and the connections in the neural network are represented as matrix multiplication. Even the reinforcement learning and the synthetic data are ultimately numbers too.
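If you want to see just how literally "it's all matrix multiplication" is meant, here's a toy sketch. The sizes and variable names are mine, not from any real model, but the shape of the computation is the real thing: text becomes integer token IDs, IDs pick rows out of a matrix of numbers, and everything downstream is matrix multiplication plus a simple elementwise function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language model" layer. Sizes are illustrative, not any real model's.
vocab, d_model, d_hidden = 100, 8, 16
embed = rng.normal(size=(vocab, d_model))   # one row of numbers per token
w1 = rng.normal(size=(d_model, d_hidden))   # learned weights...
w2 = rng.normal(size=(d_hidden, vocab))     # ...are just matrices too

token_ids = np.array([3, 17, 42])  # "text", already reduced to numbers
x = embed[token_ids]               # embedding lookup = selecting rows
h = np.maximum(0, x @ w1)          # matrix multiply + ReLU nonlinearity
logits = h @ w2                    # matrix multiply back to a score per word

print(logits.shape)  # (3, 100): one score per vocabulary word, per position
```

That's the whole trick, repeated across many layers and scaled up by orders of magnitude.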
IMHO, the whole point that these neural networks can't perfectly approximate human interaction with math is moot. They just have to be good enough for most people. Pointing out that LLMs are lossy is like saying MP3s are lossy. The inferior sound quality didn't doom the mass adoption of MP3s. Why? Because most people aren't audiophiles. And now, many years later, what has happened? With more compute and bandwidth, we're listening to higher and higher quality formats. That's where we're headed with AI, like it or not.