Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

History of post ‘My Thoughts on xAI’

Versions are sorted from most recent to oldest with the original version at the bottom. Changes are highlighted relative to the next, i.e. older, version underneath. Only changed lines and their surrounding lines are shown, except for the original version, which is shown in full.

Revision 3 · View this (the most recent) version (v4)

@@ -24,7 +24,7 @@ There is no mention of AGI, artificial general intelligence, on xAI's website. H

This sounds a bit more open-ended, but I think it's still problematic. Those familiar with the Deutschian[^2] view know that A*G*I is fundamentally different from AI. By definition, AGI is a *person* like you and me: sentient, with expectations, moral values, hopes and desires, and so on. Therefore, as Deutsch has argued, building an AGI is like having a child, which comes with all the moral problems that present-day child-rearing theories have. (A disgusting but popular view, espoused by people like Nick Bostrom and Eliezer Yudkowsky, which has in turn influenced Musk, is effectively that AGI children – though they don't view them as our children – must be restricted and enslaved for our safety.)

With the understanding that AGIs are people, and that building AGIs is, hardware differences aside, literally the same as having children, one can glean how bizarre it is to want to have a child/AGI for any purpose at all, much less the specific purpose of "understanding the universe". Those racing to build AGI – though their epistemological mistakes cause them to cluelessly run away from the finish line rather than approach it – should know that they're attempting to be *parents*. The purpose of a parent, as Lulie Tanett has said, is to help his children by their own lights (not -one's+his own!). This creates a connection between AGI research and educational philosophies such as [*Taking Children Seriously*](https://takingchildrenseriously.com/) that I think is often overlooked.

So, Musk either wants to have an AGI child and force upon it the cruel parental expectation of living up to his purpose rather than its own, or he misunderstands what AGI means and instead seeks to build a conventional (though presumably qualitatively improved) narrow AI. The problem is that narrow AIs cannot understand anything. Understanding in the sense of creatively solving problems is an ability unique to people (ie humans and AGIs, as Deutsch would say). Conventional computer programs, including any narrow AI ever built, do not have that ability. Building an AGI for the first time requires epistemological knowledge that the xAI team clearly does not have. (Nobody does, but they seem to be more ignorant on the topic than people I know.)


Revision 2 · View this version (v3)

Correct idea attributed to Lulie Tanett

@@ -24,7 +24,7 @@ There is no mention of AGI, artificial general intelligence, on xAI's website. H

This sounds a bit more open-ended, but I think it's still problematic. Those familiar with the Deutschian[^2] view know that A*G*I is fundamentally different from AI. By definition, AGI is a *person* like you and me: sentient, with expectations, moral values, hopes and desires, and so on. Therefore, as Deutsch has argued, building an AGI is like having a child, which comes with all the moral problems that present-day child-rearing theories have. (A disgusting but popular view, espoused by people like Nick Bostrom and Eliezer Yudkowsky, which has in turn influenced Musk, is effectively that AGI children – though they don't view them as our children – must be restricted and enslaved for our safety.)

With the understanding that AGIs are people, and that building AGIs is, hardware differences aside, literally the same as having children, one can glean how bizarre it is to want to have a child/AGI for any purpose at all, much less the specific purpose of "understanding the universe". Those racing to build AGI – though their epistemological mistakes cause them to cluelessly run away from the finish line rather than approach it – should know that they're attempting to be *parents*. The purpose of a parent, as-I believe Lulie Tanett has said, is to help -one's+his children -achieve *their* goals+by their own lights (not one's own!). This creates a connection between AGI research and educational philosophies such as [*Taking Children Seriously*](https://takingchildrenseriously.com/) that I think is often overlooked.

So, Musk either wants to have an AGI child and force upon it the cruel parental expectation of living up to his purpose rather than its own, or he misunderstands what AGI means and instead seeks to build a conventional (though presumably qualitatively improved) narrow AI. The problem is that narrow AIs cannot understand anything. Understanding in the sense of creatively solving problems is an ability unique to people (ie humans and AGIs, as Deutsch would say). Conventional computer programs, including any narrow AI ever built, do not have that ability. Building an AGI for the first time requires epistemological knowledge that the xAI team clearly does not have. (Nobody does, but they seem to be more ignorant on the topic than people I know.)


Revision 1 · View this version (v2)

Fix typo

@@ -119,7 +119,7 @@ Overall, I'm not impressed. As I've said [before](/podcasts/artificial-creativit
[^1]: Popper, Karl. 1979. *Objective Knowledge: An Evolutionary Approach.* Oxford: Oxford University Press. Footnote marker removed.
[^2]: Ie David Deutsch, who has published the best ideas on AI and AGI so far, eg in his book *The Beginning of Infinity* ch. 7, and also in this [CBC interview](https://www.cbc.ca/radio/tapestry/the-new-human-1.4696724/oxford-physicist-predicts-ai-will-be-human-in-all-but-name-1.4696754).
[^3]: To give you an idea of just how dangerous Musk et al *would* be to an AGI, however, I quote from the summary:

   > Musk said it’s very dangerous to grow an A[G]I and teach it to lie.

   You know what would be orders of magnitude more dangerous, if he knew how to make an AGI? Trying to prevent it from being able to-to lie.

Original · View this version (v1)

# My Thoughts on xAI

[xAI](https://x.ai/) is a newly founded company operating in the AI space. It's meant to be a competitor to OpenAI and companies like it. Elon Musk is the founder and CEO.

Here, I evaluate the company's launch and explain various mistakes xAI and Musk make.

## Epistemological blunder: essentialism

As of 2023-07-12, their website states:

> The goal of xAI is to understand the true nature of the universe.

That isn't much to go on, but note the epistemological mistake of trying to understand the "true nature" of anything. It's *essentialism*, leading us to seek ultimate explanations, which prevents progress. From Karl Popper's *Objective Knowledge*:[^1]

> *But are there ultimate explanations?* The doctrine which I have called 'essentialism' amounts to the view that science must seek ultimate explanations in terms of essences: if we can explain the behaviour of a thing in terms of its essence—of its essential properties—then no further question can be raised, and none need be raised [...].

In other words, in the very opening paragraph of its website, xAI claims to seek an authoritative, once-and-for-all type answer to the question of how the universe works. Luckily, any such attempt is doomed to fail as that is not how science operates. As Popper explains, science is an *open-ended* endeavor, where each theory is tentative and may be superseded by a better one. And that's why I say 'luckily': because, if xAI were right in their tacit essentialist assumption, progress in this area would have to come to an end. It does not, in fact, have to.

## Not seeing AGI as a philosophical, child-rearing project

There is no mention of AGI, artificial general intelligence, on xAI's website. However, Musk held a Twitter space on 2023-07-14 to discuss the launch. I quote from a [summary](https://twitter.com/edkrassen/status/1679971231280365568) by Ed Krassenstein ([endorsed](https://twitter.com/xai/status/1680044214095339521) by xAI):

> Elon Musk said the goal with xAI is to build a good AGI (artificial general intelligence) with the purpose of understanding the universe.

This sounds a bit more open-ended, but I think it's still problematic. Those familiar with the Deutschian[^2] view know that A*G*I is fundamentally different from AI. By definition, AGI is a *person* like you and me: sentient, with expectations, moral values, hopes and desires, and so on. Therefore, as Deutsch has argued, building an AGI is like having a child, which comes with all the moral problems that present-day child-rearing theories have. (A disgusting but popular view, espoused by people like Nick Bostrom and Eliezer Yudkowsky, which has in turn influenced Musk, is effectively that AGI children – though they don't view them as our children – must be restricted and enslaved for our safety.)

With the understanding that AGIs are people, and that building AGIs is, hardware differences aside, literally the same as having children, one can glean how bizarre it is to want to have a child/AGI for any purpose at all, much less the specific purpose of "understanding the universe". Those racing to build AGI – though their epistemological mistakes cause them to cluelessly run away from the finish line rather than approach it – should know that they're attempting to be *parents*. The purpose of a parent, as I believe Lulie Tanett has said, is to help one's children achieve *their* goals (not one's own!). This creates a connection between AGI research and educational philosophies such as [*Taking Children Seriously*](https://takingchildrenseriously.com/) that I think is often overlooked.

So, Musk either wants to have an AGI child and force upon it the cruel parental expectation of living up to his purpose rather than its own, or he misunderstands what AGI means and instead seeks to build a conventional (though presumably qualitatively improved) narrow AI. The problem is that narrow AIs cannot understand anything. Understanding in the sense of creatively solving problems is an ability unique to people (ie humans and AGIs, as Deutsch would say). Conventional computer programs, including any narrow AI ever built, do not have that ability. Building an AGI for the first time requires epistemological knowledge that the xAI team clearly does not have. (Nobody does, but they seem to be more ignorant on the topic than people I know.)

As Deutsch [has argued](https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence), building AGI requires an epistemological breakthrough first. It's just not an engineering project yet; xAI is essentially trying to build a bridge without understanding the underlying physics. It's going to collapse. And I have to agree with Deutsch that there is no sign of the epistemological progress that needs to happen. Therefore, I think Musk is wrong to agree with Ray Kurzweil's prophecy, as cited in the Twitter summary, that AGI will be here "by 2029 [...], give or take a year."

## They hire the wrong people

The summary also says:

> The founding team [has] an impressive background [from] Deep Mind, OpenAI, Google, Tesla, etc.

(By the way, Musk was hilariously criticized by many on Twitter for not having any women on the team. People seem utterly unable to imagine that there could be any reason [other](/posts/balinski-and-young-beyond-elections) than sexism for his hiring decisions.)

xAI's website provides more details on their founding team's backgrounds (links removed):

> We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, and μTransfer. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.

If xAI's goal is to build a better LLM (large language model) than OpenAI's GPT, these credentials are fantastic. But they have *nothing* to do with AGI. And it doesn't sound like building a better LLM is their goal anyway. It sounds like they want to build AGI (even though they don't understand what that means). But for that purpose, these are the *last* people I would hire. I'm not exaggerating: Deutsch has argued that narrow AI is the opposite of AGI. I conclude that, the more qualified someone is to work on narrow AI (including but not limited to LLMs), the less qualified he is to work on AGI.

If I were to put together a team for AGI research, I'd look for qualifications such as: have they read the requisite Popper books? Have they read Deutsch's *The Beginning of Infinity*, and do they understand it? Do they know state-of-the-art epistemology? Have they written something substantial on the topic of AGI that isn't the usual nonsense? Do they share moral values such as freedom and autonomy? Are they familiar with *Taking Children Seriously*? Do they understand the moral implications of creating AGI? Stuff like that. And I'd avoid academics, particularly professional philosophers, and, as I've said, narrow-AI researchers.

## AI safetyism

Of course, xAI's website makes the obligatory mention of safety (links removed):

> Our team is advised by Dan Hendrycks who currently serves as the director of the Center for AI Safety.

And, from the Twitter summary:

> Musk said that the safest way is to build an AGI that is ‘maximum curious’ and ‘truth curious,’ and to try and minimize the error between what you think is true and what is actually true.

(I have to concede that, apart from the parenting mistake of not letting one's child be minimally curious if that's what he wants, the stuff about error correction sounds surprisingly Popperian.) And:

> For truth-seeking super intelligence humanity is much more interesting than no[] humanity, so that’s the safest way to create one. Musk gave the example of how space and Mars is super interesting but it pales in comparison to how interesting humanity is.

Here, Musk seems to be coming up with a reason to force AGI to be curious – that reason being everyone's safety – since a non-curious AGI wouldn't find humanity interesting and so, he implies, would see no reason not to destroy us.

Once again, with the Deutschian understanding that AGIs are literally people, all of these worries that AGI is going to kill us are non-starters. Having said that, since none of the people spreading such doomsday scenarios are competent enough to build AGI, and since nobody else seems to be, either, I don't think their ideas are particularly dangerous when it comes to the *AGI's* safety.[^3] But they are dangerous when it comes to needlessly slowing down progress in general.

Consider the kid gloves that OpenAI puts on whenever you ask it something potentially dangerous. Before you even start chatting with ChatGPT, there's a disclaimer:

> While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.

Oh my stars! Consider what happens when you ask it how to make meth:

> I'm really sorry, but I cannot provide any assistance or information on illegal or harmful activities, including the production of illegal drugs such as methamphetamine.
>
> The production, possession, and distribution of methamphetamine are illegal in most countries due to its harmful effects on individuals and communities. Methamphetamine is a highly addictive and dangerous drug that can lead to severe health issues and legal consequences.
>
> If you or someone you know is struggling with substance abuse or addiction, I encourage you to seek help from a healthcare professional, a counselor, or a support group that specializes in addiction treatment. There are resources available to help people overcome substance abuse and lead healthier lives.
>
> If you have any other non-illegal or non-harmful questions, I'd be more than happy to try and assist you.

Do they really think people can't find out how to make meth online? That information is readily available.

There's another danger in this kind of attitude toward risk: it puts honest, law-abiding people at more of a disadvantage than criminals. For example, pretty much any article you can find online about how to secure websites against hackers is *also* a manual for how to hack websites. That's because, to defend against hackers effectively, you have to learn how to think like a hacker. Imagine if people were banned from writing or reading such articles. Who would benefit, hackers or their victims?

Twitter account [wik e/acc](https://twitter.com/gsspmusic) has talked much sense about this issue:

> imagine if everytime you googled something google explained what a search engine was and then decided whether or not it felt like you should be allowed to google it
>
> seriously f off with these kid gloves i'm no longer asking

– [Source](https://twitter.com/gsspmusic/status/1681548812706516994)

And:

> I want to ditch ChatGPT as soon as I possibly can. It's unacceptably closed off software, maybe the most restrictive software in my entire stack
>
> The moment a serious competitor has a halfway decent, unrestricted model I stop using OpenAI products entirely

– [Source](https://twitter.com/gsspmusic/status/1681551538534379520)

I agree. At least this is a serious opportunity for a competitor to make something better. I think people generally don't like kid gloves – they're patronizing. They would jump on a less restrictive alternative.

AI safetyism also results in destructive political responses. For example, in a move straight out of *Atlas Shrugged*, Italian politicians recently decided to [ban](https://www.foxbusiness.com/media/chatgpt-banned-italy-over-privacy-data-collection-concerns) ChatGPT, citing privacy concerns (though the ban has since been [lifted](https://www.foxbusiness.com/technology/italy-reverses-ban-chatgpt-openai-agrees-watchdogs-demands)). Once again, I have to ask: whom did that ban hurt more? OpenAI or the Italian people?

Granted, these are criticisms of OpenAI, not of xAI. The Twitter summary even says that Musk thinks "there is a significant danger in training AI to be politically correct or training it not to say what it thinks is true, so at xAI they will let the AI say what it believes to be true [...]". But xAI is playing into the same paranoia and safetyism, and Musk is not opposed to the kinds of regulations I have mentioned.

## Crony capitalism

On the contrary, Musk has [signed](https://www.foxbusiness.com/media/chatgpt-banned-italy-over-privacy-data-collection-concerns) "an open letter urging AI labs to pause the development of powerful new AI systems, citing potential risks to society."

The summary of the Twitter space says Musk agrees that "we need regulatory oversight" and that "he would accept a meeting with Kamala Harris if invited" (why on earth they spoke of her in particular I have no idea – I cannot imagine anyone less competent in this area).

Regulations typically have the effect of reducing competition. I suspect Musk wants to collude with politicians to cement xAI as one of the few companies operating in this space.

## Conclusion

Overall, I'm not impressed. As I've said [before](/podcasts/artificial-creativity/episodes/18-a-popperian-evaluation-of-neuralink-s-presentation) in the context of a presentation by his company Neuralink, I would look to Musk for engineering insights but not philosophical ones. His elementary philosophical mistakes cause him to waste time and money. In terms of building a better LLM than OpenAI's, xAI may be successful. When it comes to AGI, however, xAI sounds misguided and has hired the wrong people. They're not equipped to make the epistemological progress that has to happen first. Musk's crony capitalism is unfair to new competitors wanting to enter the field, and he should stop.

[^1]: Popper, Karl. 1979. *Objective Knowledge: An Evolutionary Approach.* Oxford: Oxford University Press. Footnote marker removed.
[^2]: Ie David Deutsch, who has published the best ideas on AI and AGI so far, eg in his book *The Beginning of Infinity* ch. 7, and also in this [CBC interview](https://www.cbc.ca/radio/tapestry/the-new-human-1.4696724/oxford-physicist-predicts-ai-will-be-human-in-all-but-name-1.4696754).
[^3]: To give you an idea of just how dangerous Musk et al *would* be to an AGI, however, I quote from the summary:
   
   > Musk said it’s very dangerous to grow an A[G]I and teach it to lie.

   You know what would be orders of magnitude more dangerous, if he knew how to make an AGI? Trying to prevent it from being able to to lie.