Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

History of post ‘Sam Altman’s Hidden Motive’

Versions are sorted from most recent to oldest with the original version at the bottom. Changes are highlighted relative to the next, i.e. older, version underneath. Only changed lines and their surrounding lines are shown, except for the original version, which is shown in full.

Revision 2 · View this (the most recent) version (v3)

@@ -6,7 +6,7 @@

Sam Altman, CEO of OpenAI, has been using the current pessimistic cultural background to peddle fears around AI safety for years. In a congressional hearing in 2023, he [proposed](https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee) the formation of a political agency to regulate AI systems. He typically phrases his ‘concerns’ in terms of responsibility and safety. I think he’s lying – safety isn’t his true motivation.

When companies ask to be regulated, you should be extremely cautious and skeptical. It’s what crony capitalists ([including Musk](/posts/my-thoughts-on-xai#crony-capitalism)) do to prevent competition. In particular, open-source projects in this field, which are inherently non-profit, present a significant challenge to companies like OpenAI, which, despite its name, isn’t open at all. Open-source projects would enable anyone to run an AI assistant similar to OpenAI’s ChatGPT on their own devices for free. These systems are currently too slow on personal hardware to match the performance of ChatGPT, but they’re bound to improve in the near future.

There’s a similarly dishonest incentive structure for politicians. They can pretend to regulate AI in the name of their fight against ‘misinformation’ and ‘hateful content’ when what they really want is to censor speech to gain political power. Although Altman previously [distanced himself](https://twitter.com/sama/status/1489740774673584131) from worries about ‘misinformation’, OpenAI has since jumped on the misinformation bandwagon, presumably to align with government incentives, citing their fight against “misinformation” and “hateful content” in their obligatory [section on safety](https://web.archive.org/web/20240308105538/https://openai.com/sora#safety) accompanying the launch of their new text-to-video product Sora.


Revision 1 · View this version (v2)

@@ -6,7 +6,7 @@

Sam Altman, CEO of OpenAI, has been using the current pessimistic cultural background to peddle fears around AI safety for years. In a congressional hearing in 2023, he [proposed](https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee) the formation of a political agency to regulate AI systems. He typically phrases his ‘concerns’ in terms of responsibility and safety. I think he’s lying – safety isn’t his true motivation.

When companies ask to be regulated, you should be extremely cautious and skeptical. It’s what crony capitalists ([including Musk](posts/my-thoughts-on-xai#crony-capitalism)) do to prevent competition. In particular, open-source projects in this field, which are inherently non-profit, present a significant challenge to companies like OpenAI, which, despite its name, isn’t open at all. Open-source projects would enable anyone to run an AI assistant similar to OpenAI’s ChatGPT on their own devices for free. These systems are currently too slow on personal hardware to match the performance of ChatGPT, but they’re bound to improve in the near future.

There’s a similarly dishonest incentive structure for politicians. They can pretend to regulate AI in the name of their fight against ‘misinformation’ and ‘hateful content’ when what they really want is to censor speech to gain political power. Although Altman previously [distanced himself](https://twitter.com/sama/status/1489740774673584131) from worries about ‘misinformation’, OpenAI has since jumped on the misinformation bandwagon, presumably to align with government incentives, citing their fight against “misinformation” and “hateful content” in their obligatory [section on safety](https://web.archive.org/web/20240308105538/https://openai.com/sora#safety) accompanying the launch of their new text-to-video product Sora.


Original · View this version (v1)

# Sam Altman’s Hidden Motive

> % source: Elon Musk on OpenAI
> % link: https://youtu.be/bWr-DA5Wjfw?t=220
> It does seem weird that something can be [...] non-profit, open-source and somehow transform itself into [...] for-profit, closed-source. [T]his would be like [funding] an organization to save the Amazon rain forest, and instead they became a lumber company, [...] chopped down the forest, and sold it for money [...].

Sam Altman, CEO of OpenAI, has been using the current pessimistic cultural background to peddle fears around AI safety for years. In a congressional hearing in 2023, he [proposed](https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee) the formation of a political agency to regulate AI systems. He typically phrases his ‘concerns’ in terms of responsibility and safety. I think he’s lying – safety isn’t his true motivation.

When companies ask to be regulated, you should be extremely cautious and skeptical. It’s what crony capitalists do to prevent competition. In particular, open-source projects in this field, which are inherently non-profit, present a significant challenge to companies like OpenAI, which, despite its name, isn’t open at all. Open-source projects would enable anyone to run an AI assistant similar to OpenAI’s ChatGPT on their own devices for free. These systems are currently too slow on personal hardware to match the performance of ChatGPT, but they’re bound to improve in the near future.

There’s a similarly dishonest incentive structure for politicians. They can pretend to regulate AI in the name of their fight against ‘misinformation’ and ‘hateful content’ when what they really want is to censor speech to gain political power. Although Altman previously [distanced himself](https://twitter.com/sama/status/1489740774673584131) from worries about ‘misinformation’, OpenAI has since jumped on the misinformation bandwagon, presumably to align with government incentives, citing their fight against “misinformation” and “hateful content” in their obligatory [section on safety](https://web.archive.org/web/20240308105538/https://openai.com/sora#safety) accompanying the launch of their new text-to-video product Sora.

The government’s and OpenAI’s stated motivations share the same underlying mistake: just as it is increased, not decreased, competition that improves safety, the solution to misinformation and hateful speech is not censorship but more speech. Both are clearly acting counter to their stated intentions. *That’s dishonest.*

Don’t take my word for it, though. Listen to what long-time AI expert Andrew Ng, cofounder and former head of Google Brain, has to say on the issue [brackets mine]:

> % source: Andrew Ng
> % link: https://www.deeplearning.ai/the-batch/ai-on-the-agenda-at-the-world-economic-forum/
> [B]ig companies, especially ones that would rather not have to compete with open source, are still pushing for stifling, anti-competitive AI regulations in the name of safety. For example, some are still using the argument, “don't we want to know if your open-source LLMs [large language models] are safe?” to promote potentially onerous testing, reporting, and perhaps even licensing requirements on open-source software.

Such regulation would destroy any meaningful competition from open-source projects, which are often run by, or at least depend on contributions from, hobbyists and regular people who have neither the time nor the money to ensure compliance. Citing safety concerns is ironic in this regard: open-source software is often safer than its closed-source, commercial counterparts. One reason is that, since open-source code, by definition, can be inspected by anyone, security holes can be found quickly. Thanks to this transparency, and unstifled by corporate bureaucracy, literally anyone in the world with a computer can then fix security issues as soon as they are reported. And again, those really concerned with safety would want *more* competition, not less, as competition increases quality and disincentivizes bad actors such as Altman, who, having paid millions to develop their technology, have a vested interest in snuffing out competition that could provide the same service for free.

To address such criticisms, Altman has [stated](https://twitter.com/sama/status/1659341540580261888) that “regulation should take effect [only] above a capability threshold.” In other words, regulation should only prevent competition that could hold a candle to OpenAI. But why punish competitors for their success? Altman is just digging himself into a deeper hole here, not to mention that regulation is typically a slippery slope: it grows over time. And who would get to define this threshold? The government isn’t competent to do it – officials have to rely on experts like Altman to do it for them.

Reliance by clueless government officials on lobbyists to draft legislation is neither new nor limited to tech: for example, an investigation by *USA TODAY*, [cited](https://publicintegrity.org/politics/state-politics/copy-paste-legislate/you-elected-them-to-write-new-laws-theyre-letting-corporations-do-it-instead/) by the Center for Public Integrity, found that “[t]he Asbestos Transparency Act didn’t help people exposed to asbestos” and was instead “written by corporations who wanted to make it harder for victims to recoup money.” They mention one of the lawmakers “who introduced it in Colorado”; he “said he didn’t write the bill and relied on ‘my experts’ to explain it [...].” One of those experts was a lawyer working to reduce litigation. Expect Altman or one of his lawyers to exploit this single point of failure that is government and dupe yet another dunce politician in a similar fashion.

Another reason I think Altman is lying is that [he’s read](https://twitter.com/sama/status/1602119635373105154) David Deutsch’s books. Deutsch is *the* preeminent philosopher making any sense on A(G)I (artificial *general* intelligence). Having the best ideas in the field, he is light-years ahead of ‘experts’ working for big players such as OpenAI and Google DeepMind. The details of Deutsch’s position are out of scope for this article – read his book *The Beginning of Infinity* to learn more. Suffice it to say that his position is essentially that humans have already reached a sort of ‘ultimate’ universality he calls *explanatory universality*. AI may reach it one day, thereby achieving the ‘G’ in AGI, but it could not possibly surpass humans.

Therefore, there’s no need to be particularly worried about A(G)I, at least not any more than you would be about any other technology. Accordingly, the views of Nick Bostrom, one of the original fearmongers about what he calls ‘superintelligence’, are completely blown out of proportion, and Altman is wrong to be impressed by them (as is Musk, I should add). *Bostrom’s arguments have been addressed;* fears around A(G)I are rooted in fundamental misunderstandings about what it *is*.

Deutsch has also written ample criticism of the *precautionary principle*, which says to avoid anything not known to be safe. Altman’s stance is effectively the precautionary principle applied to AI. I’m not aware of Altman advancing any refutations of Deutsch’s views, but evidently this hasn’t stopped Altman from holding on to his own regardless. Alan Forrester explains why this is dishonest [here](https://philosophy.stackexchange.com/a/47802/36371).

Don’t trust companies that ask to be regulated, especially ones whose very name is a lie. And don’t trust Altman.