
Sam Altman’s Hidden Motive

It does seem weird that something can be […] non-profit, open-source and somehow transform itself into […] for-profit, closed-source. [T]his would be like [funding] an organization to save the Amazon rain forest, and instead they became a lumber company, […] chopped down the forest, and sold it for money […].

– Elon Musk

Sam Altman, CEO of OpenAI, has been exploiting the current pessimistic cultural background to peddle fears around AI safety for years. In a congressional hearing in 2023, he proposed creating a government agency to license and regulate AI systems. He typically phrases his ‘concerns’ in terms of responsibility and safety. I think he’s lying – safety isn’t his true motivation.

When companies ask to be regulated, you should be extremely cautious and skeptical. It’s what crony capitalists (including Musk) do to prevent competition. In particular, open-source projects in this field, which are inherently non-profit, present a significant challenge to companies like OpenAI, which, despite its name, isn’t open at all. Open-source projects would enable anyone to run an AI assistant similar to OpenAI’s ChatGPT on their own devices for free. These systems are currently too slow on personal hardware to match the performance of ChatGPT, but they’re bound to improve in the near future.
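To make that concrete, here’s a minimal sketch of what ‘running an AI assistant on your own device’ can look like today, using the open-source transformers library in Python. The model named below is merely one freely available open-weights example; any comparable open-source chat model would do:

```python
# Minimal sketch: a local, free 'ChatGPT-like' assistant using the
# open-source Hugging Face transformers library.
# Requires: pip install transformers torch
from transformers import pipeline

# TinyLlama is one example of a freely downloadable open-weights model;
# substitute any comparable open-source chat model.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompt = "Question: Why does open-source software matter?\nAnswer:"
result = generator(prompt, max_new_tokens=80)

# The pipeline returns the prompt followed by the model's continuation.
print(result[0]["generated_text"])
```

No account, no fees, no usage policy – everything runs locally and costs nothing beyond the hardware you already own.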

There’s a similarly dishonest incentive structure for politicians. They can pretend to regulate AI in the name of their fight against ‘misinformation’ and ‘hateful content’ when what they really want is to censor speech to gain political power. Although Altman previously distanced himself from worries about ‘misinformation’, OpenAI has since jumped on the misinformation bandwagon, presumably to align with government incentives: the obligatory safety section accompanying the launch of its new text-to-video product Sora cites the company’s fight against “misinformation” and “hateful content”.

The government’s and OpenAI’s stated motivations share the same underlying mistake: just as it is more competition, not less, that improves safety, the remedy for misinformation and hateful speech is not censorship but more speech. Both are clearly acting counter to their stated intentions. That’s dishonest.

Don’t take my word for it, though. Listen to what long-time AI expert Andrew Ng, cofounder and former head of Google Brain, has to say on the issue [brackets mine]:

[B]ig companies, especially ones that would rather not have to compete with open source, are still pushing for stifling, anti-competitive AI regulations in the name of safety. For example, some are still using the argument, “don’t we want to know if your open-source LLMs [large language models] are safe?” to promote potentially onerous testing, reporting, and perhaps even licensing requirements on open-source software.

Such regulation would destroy any meaningful competition from open-source projects, which are often run by, or at least depend on contributions from, hobbyists and regular people who have neither the time nor the money to ensure compliance. Citing safety concerns is ironic in this regard: open source is often safer than its closed-source, commercial counterparts. One reason is that, since open-source code can by definition be inspected by anyone, security holes can be found quickly. Thanks to this transparency, and unstifled by corporate bureaucracy, literally anyone in the world with a computer can then fix security issues as soon as they are reported. And again, those really concerned with safety would want more competition, not less, as competition increases quality and disincentivizes bad actors such as Altman, who, having paid millions to develop their technology, have a vested interest in snuffing out competition that could provide the same service for free.

To address such criticisms, Altman has stated that “regulation should take effect [only] above a capability threshold.” In other words, regulation should only prevent competition that could hold a candle to OpenAI. But why punish competitors for their success? Altman is just digging himself into a deeper hole here, not to mention that regulation is typically a slippery slope: it grows over time. And who would get to define this threshold? The government isn’t competent to do it – it has to rely on experts like Altman to do it on its behalf.

Reliance by clueless government officials on lobbyists to draft legislation is neither new nor limited to tech: for example, an investigation by USA TODAY, cited by the Center for Public Integrity, found that “[t]he Asbestos Transparency Act didn’t help people exposed to asbestos” and was instead “written by corporations who wanted to make it harder for victims to recoup money.” They mention one of the lawmakers “who introduced it in Colorado”; he “said he didn’t write the bill and relied on ‘my experts’ to explain it […].” One of those experts was a lawyer working to reduce litigation. Expect Altman or one of his lawyers to exploit this single point of failure that is government and dupe yet another dunce politician in a similar fashion.

Another reason I think Altman is lying is that he’s read David Deutsch’s books. Deutsch is the preeminent philosopher making any sense of A(G)I (artificial general intelligence). Having the best ideas in the field, he is light-years ahead of ‘experts’ working for big players such as OpenAI and Google DeepMind. The details of Deutsch’s position are out of scope for this article – read his book The Beginning of Infinity to learn more. Suffice it to say that his position is essentially that humans have already reached a sort of ‘ultimate’ universality he calls explanatory universality. AI may reach it one day, thereby achieving the ‘G’ in AGI, but it could not possibly surpass humans.

Therefore, there’s no need to be particularly worried about A(G)I – at least no more than about any other technology. Accordingly, the views of Nick Bostrom, one of the original fearmongers about what he calls ‘superintelligence’, are completely overblown, and Altman is wrong to be impressed by them (as is Musk, I should add). Bostrom’s arguments have been addressed; fears around A(G)I are rooted in fundamental misunderstandings about what it is.

Deutsch has also written ample criticism of the precautionary principle, which says to avoid anything not known to be safe. Altman’s stance is effectively the precautionary principle applied to AI. I’m not aware of Altman advancing any refutations of Deutsch’s views, but evidently that hasn’t stopped him from holding on to his own regardless. Alan Forrester explains why this is dishonest here.

Don’t trust companies that ask to be regulated, especially ones whose very name is a lie. And don’t trust Altman.



