Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Tweets

An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.

But in case I do, you can subscribe via RSS – without a Twitter account. Rationale

@DoqxaScott @Plinz @GoodNPlenty333 @aifdn

I'm basically arguing against historicism applied to the future.

And the further we try to look into the future, the less reliable (and meaningful) our predictions become.

That's not to say that making predictions isn't useful!

@dchackethal · · Show · Open on Twitter

@david_perell

Yes—I just recently discovered William Paley's "Natural Theology" to be a treasure chest of good ideas (albeit with mistaken conclusions!).

Can be read for free here: google.com/books/edition/…

@dchackethal · · Show · Open on Twitter

@Plinz @nburn42 @GoodNPlenty333 @aifdn

"And solving problems itself depends on knowing how; so, external factors aside, unhappiness is caused by not knowing how."

@dchackethal · · Show · Open on Twitter

@Plinz @nburn42 @GoodNPlenty333 @aifdn

David Deutsch offers an interesting conjecture in his book "The Beginning of Infinity": "Happiness is a state of continually solving one's problems [...] Unhappiness is caused by being chronically baulked in one's attempts to do that."

@dchackethal · · Show · Open on Twitter

@J__Hein

In any case, the primary reason is something we can know from meme theory: those inconsistencies above exist simply because their respective words managed to spread.

@dchackethal · · Show · Open on Twitter

@J__Hein

Ah yes, my mistake.

The problem remains, however, as oftentimes the opposite argument is made: that simplifications are made to words people use often (e.g., so that spelling gets easier/shorter, etc.).

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

A solution to a moral problem either works or it doesn’t. And the more moral problems one solves the more one has progressed morally. That’s what I mean when I say morals are objective.

@dchackethal · · Show · Open on Twitter

@Plinz @GoodNPlenty333 @aifdn

I don't think so. Morals are objective, and solutions to moral problems either work or they don't.

Besides, ethics as "the negotiation of conflicts of interest under conditions of shared purpose" sounds impressive (maybe) but it's vacuous.

@dchackethal · · Show · Open on Twitter

When I was working on my last ebook, I found unsplash.com to be immensely helpful in finding beautiful, free-to-use, high-resolution images of all sorts. May even save you $$ if you can find a photo good enough for your cover so you don't need to hire a cover designer.

@dchackethal · · Show · Open on Twitter

@Plinz @GoodNPlenty333 @aifdn

Simpler: ethics is the study of moral problems.

@dchackethal · · Show · Open on Twitter

@shl

Better yet, it solves some of your own problems. If you’re both the creator and consumer, you can find and fix its flaws more easily, and your product will be so much better for it 💪

@dchackethal · · Show · Open on Twitter

@dvassallo

Did you use any paid advertising at all?

@dchackethal · · Show · Open on Twitter

Custom implementation of 3 * 4 in JavaScript:

Array.from(Array(3).keys()).map(() => 4).reduce((acc, curr) => acc + curr);
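// evaluates to 12: Array(3).keys() yields 0, 1, 2; map turns each into 4; reduce sums 4 + 4 + 4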

Same in Berlin:

reduce(+ repeat(3 4))

Which do you prefer?

🙏

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

That's not to mention that wellbeing cannot be maximized because it can always get better.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

I don't think slavery was abolished to maximize the well-being of society. It was abolished because it is abhorrent. It was an instance of error correction—in this case a (grave) moral error—not an instance of optimization.

@dchackethal · · Show · Open on Twitter

@Plinz @GoodNPlenty333 @aifdn

I'm afraid such a simulation is impossible because the main influencing factor—knowledge creation—is unpredictable in principle.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

(It isn't really a problem anyhow!)

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

Well, it's like I said: through conjecture and criticism we can solve problems of all kinds—be they scientific, moral, or otherwise. This interplay between conjecture and criticism is at the heart of Popperian epistemology, and it has enough reach to solve the is-ought problem.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

Ah, I agree that science won't solve ethical problems, at least not most of the time. Moral problems are philosophical in nature, not scientific, yes.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

Then presumably you think that abolishing slavery was not moral progress—because there is no such thing as moral progress?

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

I know it. I favor Popperian epistemology. The famous "you can't derive an ought from an is" is not a problem after all: as David Deutsch once said, we're not after deriving, we're after explaining! And explaining is done via conjecture, both for morals and otherwise.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

I'm familiar with the work on "AI safety." Applying that stuff to narrow AI is morally ok, but applying it to AGI—who, by definition, are people (conscious, creative, etc.)—is very sinister. Turns people into slaveholders.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

Intelligence is the ability to solve problems. Including moral ones. These things are not orthogonal.

@dchackethal · · Show · Open on Twitter

@HeuristicAndy

Boredom is your mind's way of telling you that something isn't for you. If someone thinks coding is dreadfully boring, they shouldn't do it.

Re GPT-3: Probably not.

@dchackethal · · Show · Open on Twitter

@techreview

An interesting read. Would be good to explore.

@dchackethal · · Show · Open on Twitter

When you put all of these things together: low bar to entry, good pay, beginner-friendliness, low cost, ability to make $$$ fast—how could you not recommend someone enduring hardship learn to code? It's their ticket out!

@dchackethal · · Show · Open on Twitter

Seventh, the pay is good. Really good, even for beginners. I've never heard of a junior developer struggling to make ends meet. Your skills are too valuable for that to ever happen.

@dchackethal · · Show · Open on Twitter

Related to that and sixth, because programming skills are incredibly valuable, you will rarely find yourself jobless, and if you do, it won't be for long. You can always find some job as a programmer to pay the bills.

@dchackethal · · Show · Open on Twitter

Fifth, there are waaay more jobs than developers out there. Writing halfway decent JavaScript code will make you $$$ within a few months of writing your first line. Getting that first paycheck thanks to a self-taught skill is one of the most empowering things you can experience.

@dchackethal · · Show · Open on Twitter

Fourth, progress is rapid and transparent. Odds are whatever platform you're interested in working on gets improved at least every couple of months. That makes your life easier and increases your productivity. What other industry can say that?

@dchackethal · · Show · Open on Twitter

Third, the industry is mostly open. You can find millions of lines of code for free, ask questions about it, tweak it, run it, tweak it again, contribute. The support is unlike any other and especially friendly to beginners who show they want to contribute and offer value.

@dchackethal · · Show · Open on Twitter

That's part of what's so great about this industry: what matters is skills, not degrees. Talkers don't get far. Only qualified doers do.

@dchackethal · · Show · Open on Twitter

I have been a software engineer for just short of ten years and never once has anyone asked me for any degrees. All my clients/employers ever wanted to know was whether I could solve a problem. Whenever I demonstrated I could, they hired me.

@dchackethal · · Show · Open on Twitter

Second, you do not need a degree. In fact, I suggest you don't get one. You can learn everything you need online for free. And you can learn it in a matter of months. Compare that to a four-year degree and thousands of dollars of debt. It's a no-brainer.

@dchackethal · · Show · Open on Twitter

First of all, coding is relatively easy. There, I said it. Granted, it's not the easiest thing in the world, but it's easier than many other career paths, esp. those involving manual labor, long hours at night, or work outside. The notion that coding requires genius is misleading folklore.

@dchackethal · · Show · Open on Twitter

Detractors allege that telling someone who's in a bad spot to "learn to code" trivializes their hardship. But that's not at all the case: learning to code really is the best way to escape hardship. Why?

@dchackethal · · Show · Open on Twitter

Actually, suggesting someone learn to code is great career advice.

A thread 🧵

@dchackethal · · Show · Open on Twitter

What the let's-worry-about-AI folks don't know is: moral knowledge grows by the same logic as all other knowledge, and increased processing speeds make possible increased error correction of moral knowledge, too. So, we should give AGIs as much processing speed as possible!

@dchackethal · · Show · Open on Twitter

@tweetycami @PsychToday

Fairly smart, but not intelligent at all.

@dchackethal · · Show · Open on Twitter

@PrometheusAM

One of the greatest philosophers of all time.

@dchackethal · · Show · Open on Twitter

@gmaniatis @OpenSociety

Yes, highly relevant for today's discussions with social-justice warriors.

@dchackethal · · Show · Open on Twitter

@Julez_Norton

Yes, automation frees people up to be more creative. By not having to execute tasks mindlessly, they can solve problems creatively.

@dchackethal · · Show · Open on Twitter

@FLIxrisk

"Machine speeds rather than human speeds"—the important constraining factor is going to be the performance characteristics of software, not hardware, as an ungainly and slow piece of software runs slowly even on a fast computer.

@dchackethal · · Show · Open on Twitter

Saw this interesting question on Reddit: if Corona hadn't happened, how would the past ~4 months have played out differently for you?

I would have been able to go to the gym. And I probably would have eaten out more.

How would things have been different for you?

@dchackethal · · Show · Open on Twitter

@rosalbavp @fuedicho

Exactly. Maybe you'll also like what Popper said: that we are all equal in our infinite ignorance.

@dchackethal · · Show · Open on Twitter

@FranklinAmoo @JulienSLauret @TDataScience

Unless GPT-3 brings us closer in our understanding of how the human mind works, it's not a step toward AGI. Nor can AGI be achieved through incremental steps—it requires something wholly new and qualitatively different.

@dchackethal · · Show · Open on Twitter

@MicropoleBeLux

By definition AGI and human intelligence are equivalent.

@dchackethal · · Show · Open on Twitter

@techreview

Agreed, because AGI cannot be achieved in steps—it requires something wholly new and qualitatively different. We won't be able to build it unless we understand how the human mind works.

@dchackethal · · Show · Open on Twitter

@carlosmara

Yes, the principle of optimism is probably one of the most important principles we know. Also his conjecture that problems are soluble and inevitable.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @Plinz @aifdn

Increased intelligence goes along with increased error correction, including the correction of moral errors and stability-related errors.

Regulation employs coercion and subdues creativity (which is powered by error correction) and therefore makes society less stable.

@dchackethal · · Show · Open on Twitter

@Ronald_vanLoon @DeepCaked @demishassabis @goodfellow_ian

Good stuff. Would be cool if they could make the voice sound younger, too, to go along with it.

@dchackethal · · Show · Open on Twitter

@J__Hein

Re the first one: I would guess the word "colleague" is used more frequently than the word "monolog."

@dchackethal · · Show · Open on Twitter

@J__Hein

Actually I don't know that you're a sir, maybe you're a ma'am, but you know what I mean.

@dchackethal · · Show · Open on Twitter

@J__Hein

You're too kind, sir!

@dchackethal · · Show · Open on Twitter

@J__Hein

In my head a quiet voice went, "huh, that could sound like 'fish'," but thinking I had to give the right answer, I pronounced it "go-tea."

@dchackethal · · Show · Open on Twitter

@J__Hein

It's funny you mention that one. My high-school English teacher showed it to us: she simply wrote "Ghoti" on the blackboard and asked us how we would pronounce it.

@dchackethal · · Show · Open on Twitter

The differences between British and American English are inconsistent.

It's monologue/monolog, dialogue/dialog—but why not colleague/colleag?

Similarly, it's metre/meter, centre/center—but why not table/tabel?

@dchackethal · · Show · Open on Twitter

“You can travel from anywhere on Earth to every other place on Earth in less than an hour if you go through space.”

Hopefully very, very soon! twitter.com/HumanProgress/…

@dchackethal · · Show · Open on Twitter

@bnielson01

Also interesting how some of the very people who are fascinated by its capabilities are automatically worried about its implications, too.

Instead of saying, “wow, this is neat. I can use this!” they say, “wow, this is neat. But most people are going to lose their jobs!”

@dchackethal · · Show · Open on Twitter

@DavidSatzinger

I like the self-delete. What scenes do you think that one and the dying one would have been used for?

@dchackethal · · Show · Open on Twitter

@thenumber8008 @RichardDawkins

I like “idiomemes” and “idemes.” Not bad ideas!

@dchackethal · · Show · Open on Twitter

@_Islamicat

You is back from holidayses?

@dchackethal · · Show · Open on Twitter

Wokid-19! Brilliant! twitter.com/Ayaan/status/1…

@dchackethal · · Show · Open on Twitter

@hunchandblunder

A good example of how well-meant positive rights turn into hellish nightmares.

@dchackethal · · Show · Open on Twitter

@ks445599

They don't derive knowledge. They create it afresh.

@dchackethal · · Show · Open on Twitter

RT @PessimistsArc:
1897: Guy asks Washington D.C. for permission to use his new fangled horseless carriage.

Their response? Banning all ho…

@dchackethal · · Show · Open on Twitter

@lostintaut @RichardDawkins

Oh cool, wasn't familiar with "protologism."

"Intrameme" could work, but ideally the word would be as simple as "meme." Something single-syllable, maybe rhyming with meme...

@dchackethal · · Show · Open on Twitter

@lostintaut @RichardDawkins

One last thing: "inter" means "between" or "across," so "inter" may confuse people into thinking that we're talking about ideas that spread between minds.

"Intra" means "within" which fits better.

So there's a difference in meaning. E.g. intranational != international.

@dchackethal · · Show · Open on Twitter

@lostintaut @RichardDawkins

Hmm not bad. Intra maybe better. But a mouthful in any case.

@dchackethal · · Show · Open on Twitter

@pmaymin

  1. The Communication of Authority
  2. The Disambiguation of Hilarity
  3. The Artificiality of Anxiety
  4. The Machiavellianism of Uncertainty
  5. The Latitudinarianism of Diversity
  6. The Schizosaccharomycetaceae of Disparity

Your algorithm has reach.

@dchackethal · · Show · Open on Twitter

@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner

Interesting. I've had similar thoughts about humor (though not in terms of rewards).

@dchackethal · · Show · Open on Twitter

RT @realchrisrufo:
Seattle is quickly moving forward with its plan to "abolish prisons."

I've received a trove of leaked documents from w…

@dchackethal · · Show · Open on Twitter

@FallingIntoFilm

Yes. I think this tells us something new about meme theory: not only does a meme have to be good at spreading between people and getting its holders to enact certain behaviors, it must first spread within minds as a meon. That's how it causes behavior in the first place.

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

I believe computational universality follows from the laws of physics as a conclusion (see Deutsch's work IIRC), but even if it were purely conjectural, so what? All theories are conjectural (Popper), so that in itself is not a criticism.

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

The abstractions I wrote about are not at all arbitrary. They are deliberately placed in the context of a larger theory that's hard to vary, and their place in it is hard to vary, too.

@dchackethal · · Show · Open on Twitter

@dvassallo

Unfortunately way too widespread a belief.

@dchackethal · · Show · Open on Twitter

@Crit_Rat @ChipkinLogan

BTW, are you by chance referring to a specific theory of ideas that replicate within a mind/have you heard of that concept before? I've been trying to find evidence that maybe the neo-Darwinian theory of the mind isn't new.

@dchackethal · · Show · Open on Twitter

@Crit_Rat @ChipkinLogan

Ok but sometimes I want to speak only of the former not the latter, so it would be good to disambiguate them.

@dchackethal · · Show · Open on Twitter

@Crit_Rat @ChipkinLogan

Maybe!

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

If we understand the mind tomorrow, and build AGI on a computer, that AGI will not have anything to do with brain hardware—nor could it possibly, because it won't run on a brain, nor would the computer it runs on have to imitate the brain in any way because both are universal already.

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

That our progression in computer science was bottom-up does not refute computational universality or the substrate-independence of software.

And because computation is universal, there is no different kind of computation going on in a mind.

@dchackethal · · Show · Open on Twitter

@FallingIntoFilm

Maybe “meons”...

To be clear, they are already part of the neo-Darwinian evolution that occurs in a mind even if they never become memes. Both meme evolution and meon evolution are neo-Darwinian.

@dchackethal · · Show · Open on Twitter

Actually scratch that, “neon” is already a name for a chemical element.

@dchackethal · · Show · Open on Twitter

@ChipkinLogan

What do you think? As in the “neo” in “neo-Darwinian theory of the mind.”

@dchackethal · · Show · Open on Twitter

Maybe “neons”?

@dchackethal · · Show · Open on Twitter

Title suggestion for David’s next book: “The Beginning of Infinity 2: The Middle of Infinity.”

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

We don’t explain word processors in terms of hardware, do we? Why should we when it comes to the mind?

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

Why? The mind, like all software, is substrate-independent.

@dchackethal · · Show · Open on Twitter

@dino_rosati @RichardDawkins

If only the word weren’t already taken!

@dchackethal · · Show · Open on Twitter

@ChipkinLogan @RichardDawkins

Those are one’s gym-related ideas only. ;)

@dchackethal · · Show · Open on Twitter

@tomhyde_ @RichardDawkins

Hehe. Maybe.

@dchackethal · · Show · Open on Twitter

@lostintaut @RichardDawkins

Looks as though that distinction is already in use:

twitter.com/micahtredding/…

And I’d rather not have the word “meme” in the word. A rhyme of it would be cool maybe. Or something completely new. twitter.com/micahtredding/…

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

I would just focus on ideas replicating as software, independent of their substrate.

After all, the whole point of AGI research is to run the program on a computer that isn’t the brain.

@dchackethal · · Show · Open on Twitter

@fwmm @micahtredding @RichardDawkins

I would ignore the brain entirely and focus on the mind. Yes, all information processing is physical, but we have figured out how to translate abstractions into physical movements (by building and instructing computers), and the brain is a computer.

@dchackethal · · Show · Open on Twitter

@dino_rosati @RichardDawkins

As in “thought-memes”? :)

@dchackethal · · Show · Open on Twitter

@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner

Glad you read the article and appreciate the feedback.

@dchackethal · · Show · Open on Twitter

@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner

There are many “theories of consciousness” along those lines but I’m not particularly impressed with them. Self-referentiality has a woo-woo status somehow. May have to do with recursion being intimidating. Don’t see how that explains anything.

@dchackethal · · Show · Open on Twitter

@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner

Yes, I’ve had similar thoughts.

Also interesting how we forget most dreams quickly after waking up. Must be short-lived replicators. Adapted to the dream state, overwhelmed in the waking state.

@dchackethal · · Show · Open on Twitter
