Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

History of post ‘I Don’t Get the Hype around GPT-4’

Versions are sorted from most recent to oldest with the original version at the bottom. Changes are highlighted relative to the next, i.e. older, version underneath. Only changed lines and their surrounding lines are shown, except for the original version, which is shown in full.

Revision 1 (v2, most recent)

Link to article about Fibonacci sequence

@@ -10,7 +10,7 @@ As you add requirements in the same chat, it often ‘forgets’ previous requir

ChatGPT clearly lacks understanding and merely strings words together.

It’s pretty good at generating boilerplate code or something like a -Fibonacci algorithm,+[Fibonacci algorithm](/posts/a-better-way-to-generate-the-fibonacci-sequence), but that’s either too basic for real-world use cases or stuff programmers can just look up. Like, it seems decent at solving problems that are already well understood, but it’s stumped easily by new problems.

It’s sometimes useful for creating logos and other images. For example, I used it to create the [Quote Checker](https://www.quote-checker.com) logo. But once you ask it to make changes to an image it generated, all hell breaks loose. It seems unable to modify only the aspect you want modified – in all my attempts, it modified other parts of the image, too, and that usually reduces the quality because the previous version was already a local optimum, or close to it.


Original (v1)

# I Don’t Get the Hype around GPT-4

I’ve been using GPT-4 through the chat interface for a few months and I don’t get the hype.

I routinely run into programming problems it can either only solve with great difficulty after lots of handholding or not at all.

When it does find a solution, figuring it out myself might have taken about the same amount of time, at worst. And I usually have to make manual improvements at the end anyway.

As you add requirements in the same chat, it often ‘forgets’ previous requirements. You then have to remind it.

ChatGPT clearly lacks understanding and merely strings words together.

It’s pretty good at generating boilerplate code or something like a Fibonacci algorithm, but that’s either too basic for real-world use cases or stuff programmers can just look up. Like, it seems decent at solving problems that are already well understood, but it’s stumped easily by new problems.
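For illustration, the kind of well-understood snippet meant here might look like the following iterative Fibonacci generator in Python (a sketch for context, not taken from the post or from ChatGPT's output):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from 0."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(10))  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Exactly the sort of thing a programmer could also just look up.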

It’s sometimes useful for creating logos and other images. For example, I used it to create the [Quote Checker](https://www.quote-checker.com) logo. But once you ask it to make changes to an image it generated, all hell breaks loose. It seems unable to modify only the aspect you want modified – in all my attempts, it modified other parts of the image, too, and that usually reduces the quality because the previous version was already a local optimum, or close to it.

Also, if you tell it NOT to do something, like not include an elephant in an image, say, the resulting image almost always includes an elephant. (There’s a somewhat interesting similarity here to how people can’t help but think of elephants when you tell them not to, but given that GPT works completely differently from how the mind works, I hesitate to ascribe any deep meaning to that similarity.)

My prediction is that ChatGPT will not get *qualitatively* better simply by throwing more training data at it. I think it’s asymptotically approaching the cap on how much a program can achieve without genuine understanding.

GPT-4 is useful; it’s a net positive and certainly better than tools like Siri, which hasn’t improved ~at all since its release. But GPT-4 is not deserving of the hype. I don’t turn to it for help often. There are also ethical issues around spending money on OpenAI products given their CEO Sam Altman’s [dishonesty](/posts/sam-altman-s-hidden-motive) and his spreading of immoral philosophical doctrines such as the precautionary principle.