Sam Altman Lies About AGI
OpenAI CEO Sam Altman was recently quoted as saying that “[the] biggest bottleneck for AGI is compute”.
Amazon defines the noun ‘compute’ as “a generic term used to reference processing power, memory, networking, storage, and other resources required for the computational success of any program.” In other words, Altman claims that what’s standing between us and AGI is a lack of processing power, memory, etc.
Altman’s claim is false. An AGI – artificial general intelligence – is the same software that runs on a human brain. It’s what makes humans human. Each human is a general intelligence (GI). Your GI is your mind. The reason we don’t call humans artificial general intelligences is that their GI runs on a computer made of wetware (the brain) as opposed to metal and silicon like a laptop. There’s no qualitative difference between a natural GI and an artificial GI – it’s the same algorithm.
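To make that substrate-independence point concrete, here’s a minimal sketch in Python (my own illustration, not Altman’s or Deutsch’s; the toy machine and all names are invented): the same program produces the same output whether it runs directly or on a simulated machine, just as a GI would behave the same on wetware or on silicon.

```python
# A toy illustration of substrate independence: a program's behavior
# is fixed by the program itself, not by the machine that runs it.

def run(program, x):
    """Interpret a tiny instruction list ('add'/'mul' ops) on input x."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

double_then_increment = [("mul", 2), ("add", 1)]

# Run the program "natively" on the interpreter above...
print(run(double_then_increment, 20))  # => 41

# ...and run it on a second, simulated "machine": an interpreter
# driving the first interpreter one instruction at a time. Different
# substrate, identical results -- only the speed differs.
def run_simulated(program, x):
    state = x
    for instruction in program:
        state = run([instruction], state)
    return state

print(run_simulated(double_then_increment, 20))  # => 41, same answer
```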
Compute can’t possibly be the bottleneck since the brain has enough computing power to run a GI. You and I are living proof. However much compute your brain has is all that’s required to run a GI.
More compute may well improve chatbots such as the ones OpenAI is developing, but chatbots aren’t GIs, and increased compute alone doesn’t turn them into GIs.
As I tweeted in response to Altman’s claim, the real bottleneck for AGI is a missing philosophical breakthrough explaining how minds create knowledge. Physicist David Deutsch, who has made by far the most sense out of anyone on the topic of AGI, explains:[1]
What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.
Even if compute were the bottleneck, one could only know that from such an epistemological theory. The theory has to come first. Without it, how could merely throwing more compute at the problem possibly help? How could one evaluate the performance characteristics of an algorithm one doesn’t even know? Deutsch explains this problem, too:
Others maintain that […] during most of the history of the field, computers had absurdly little speed and memory capacity [ie, compute] compared with today’s. Hence they continue to expect the breakthrough in the next few years.
This will not do either. It is not as though someone has written [an AGI that] would currently take a year to compute each reply. People would gladly wait. And in any case, if anyone knew how to write such a program, there would be no need to wait – for reasons that I shall get to shortly.
Further down, Deutsch continues:
How could we know that [a program had created knowledge on its own]? Only from a good explanation. For instance, we might know it because we ourselves wrote the program. Another way would be for the author of the program to explain to us how it works – how it creates knowledge […]. If the explanation was good, we should know that the program was an A[G]I. In fact, if we had only such an explanation but had not yet seen any output from the program – and even if it had not been written yet – we should still conclude that it was a genuine A[G]I program. […] That is why I said that if lack of computer power were the only thing preventing the achievement of AI, there would be no need to wait.
Sam “we just need moar compute” Altman has read at least these last two passages. He can’t claim ignorance. We have words for people who say things they know to be false. Liars, frauds, charlatans… The AI industry is already fraught with false advertising, and I have previously written about how Altman isn’t trustworthy. So the question is: why lie? Is it to get more money from investors to improve mediocre chatbots? To fool investors into believing that they will help create an AGI as long as they invest more?
Don’t be a sucker.
---

[1] Minor quibble: I’d put it in terms of minds, not brains, to emphasize that the brain isn’t some special kind of computer and that the underlying hardware is irrelevant as long as it’s computationally universal and meets (presumably minor) performance and memory requirements. Creating AGI is a question of software, which you could then run on any MacBook, say. Deutsch knows this, of course – I’m just clarifying.