Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Tweets

An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.

But in case I do, you can subscribe via RSS – without a Twitter account. Rationale

@sapepens @krazyander

In any case, it sounds like your mind is made up. Why keep discussing? How could one change your mind?

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

I used to think about slave holders the same way, but I recently learned from a podcast that many slave holders thought they were giving their slaves a better life than in Africa. Slave holders did not consider themselves evil, yet they were doing great evil.

@dchackethal · · Show · Open on Twitter

In other words, these algorithms are coerced into optimizing some predetermined criterion. That's why they couldn't possibly be AGIs: that requires freedom from coercion.

@dchackethal · · Show · Open on Twitter

One of the driving forces of evolution is replication, and selection is a phenomenon that emerges from differences in replication. Existing evolutionary algorithms force a fitness function onto their population of replicators—which is not how evolution works in reality.
👇

@dchackethal · · Show · Open on Twitter
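The fitness-function pattern criticized in the tweet above can be sketched in a few lines. This is a minimal illustration of my own (not from the tweet); the "OneMax" criterion and all names are assumptions chosen for brevity:

```python
# Minimal sketch of a standard evolutionary algorithm in which selection
# is driven by a fixed, externally imposed fitness function.
import random

def fitness(bits):
    # Predetermined criterion: count of 1-bits ("OneMax").
    return sum(bits)

def evolve(pop_size=20, genome_len=16, generations=50, seed=0):
    rng = random.Random(seed)
    # Random initial population of bit-string "replicators".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: replication chances are dictated by the fitness function.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # single point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The point of contrast: here the criterion of success is hard-coded in advance, whereas in biological evolution selection emerges from differences in replication rather than from an external score.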

@Plinz

Yup. Sad things happened to them.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

...and that his book is a slaveholder's manual instructing people how to keep "their" AGIs in check.

People look back in horror at slavery in the US and ask, "How could this happen?" Today it's Bostrom's book. That's how.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

I read Bostrom's book. He mentions Deutsch in the acknowledgments but he clearly didn't take Deutsch's (superior) ideas seriously or he wouldn't have written it. He would have known that the very concept of superintelligence is an appeal to the supernatural...

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

The question is not whether some AGIs will be psychopaths (some might). The question is whether that warrants shackling all AGIs ("aligning" is just a euphemism for coercion/shackling).

It takes a shift in perspective to recognize how disgusting "alignment" really is.

@dchackethal · · Show · Open on Twitter

@EmpressBashAura

Exactly, an AGI would be capable of love, relationships, humor, etc.

@dchackethal · · Show · Open on Twitter

@ReachChristofer @david_hurn

Yup that’s where I fall, too, both seem true at different times.

@dchackethal · · Show · Open on Twitter

@david_perell @sivers

Yes, one should put in the time. But it shouldn’t be uncomfortable. It should be fun—and once it’s fun, there won’t be any distractions. If it’s fun, you’ll put in the time happily and automatically.

@dchackethal · · Show · Open on Twitter

@itsDanielSuarez @Plinz @NASA @SpaceX @AstroBehnken @Astro_Doug

Yeah pretty nuts! People are awesome. Onward!

@dchackethal · · Show · Open on Twitter

@thatGuy57039455 @SpaceX

Right, so if it’s prior, and we need ten more minutes, wouldn’t + make more sense?

@dchackethal · · Show · Open on Twitter

@SpaceX

Awesome following this live!!

@dchackethal · · Show · Open on Twitter

@SpaceX

Always wondered why it’s T-x and not T+x...

@dchackethal · · Show · Open on Twitter

@krazyander

Yes, people can harm each other. But should we shackle them in advance because they might harm each other?

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Well, again, AGIs are people by definition, so they can feel love and be altruistic (let’s table for the moment whether altruism is a good thing). And they will be a product of the culture they’re born into, like all children.

@dchackethal · · Show · Open on Twitter

@Plinz

Why speak Latin? :)

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

AGIs are literally children. Read your tweets again while imagining you’re talking about children and maybe you’ll see how sinister those tweets are.

@dchackethal · · Show · Open on Twitter


@david_perell

Indeed. Once something is automated, it’s time to move on to the next problem and solve it creatively.

@dchackethal · · Show · Open on Twitter

AGIs cannot work under regulations or bondage. They can only work in spite of them. Like economies and memes, the minds of all people, including AGIs, are evolutionary processes that self-regulate. Impose force, and they cease being people.

@dchackethal · · Show · Open on Twitter

@krazyander

That's not to mention that just because an AGI's interests may not align with ours doesn't mean they won't, and especially doesn't mean it wants to hurt us. If it actually does want to hurt us, we can defend ourselves. Until then, assume it's a potential friend, like all people.

@dchackethal · · Show · Open on Twitter

@krazyander

Also note that an AGI won't have an "end goal." It's a person. People don't have end goals. They follow their interests and want to learn/solve problems. After they find a solution they move on to the next problem.

@dchackethal · · Show · Open on Twitter

@krazyander

Processing power, yes. As long as we don't know for a fact that it wants to hurt us, yes, give it all the tools it needs to correct errors. Help it learn (if it wants the help). If it learns about morals it won't hurt us.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @chophshiy @ks445599

I don't think we touched on free will on the podcast. In my book, I say that having free will means being the author and enactor of one's choices.

@dchackethal · · Show · Open on Twitter

@JulienSLauret @pmoinier @TDataScience

There could be. But AGI, by definition, simulates the human mind—how could one hope to write a program that simulates the mind without understanding it first?

@dchackethal · · Show · Open on Twitter

@chophshiy @SurviveThrive2 @ks445599

In other words, you admit you don't know either and are making up excuses.

In any case, would you be equally offended if I said "we don't know how to time travel"?

PS You don't need to put two spaces after each period, we don't write on typewriters anymore.

@dchackethal · · Show · Open on Twitter

Anyone can become a developer—here's how I did it:

medium.com/@hcd/anyone-ca…

@dchackethal · · Show · Open on Twitter

@chophshiy @SurviveThrive2 @ks445599

Ok, are you saying we know how the mind works? If so, can you please tell me how it works—I'd sincerely love to know.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

You keep dodging questions, doubting my understanding of BoI (first in the context of explainers, now suddenly in the context of instrumentalism), and expecting me to magically understand your made-up terminology without explaining where it comes from. Our convo is over :)

@dchackethal · · Show · Open on Twitter

@chophshiy @SurviveThrive2 @ks445599

Neither @ks445599 nor I ever appealed to any authorities. You're mischaracterizing us.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I take your unwillingness to summarize my view as evidence that you haven't actually understood it, despite your claims that it's "entirely off."

@dchackethal · · Show · Open on Twitter

@IntuitMachine

In a rational discussion it's good practice to summarize the other person's view so well the other person has nothing to add. It also provides an opportunity to prevent talking past each other. You may be arguing against points I didn't make.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

"Creative explainer" is redundant btw.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I can't know what made-up terminology means if it isn't explained to me. I offered the explanation that perhaps you read the book in a different language, but you are dodging my questions.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

Yes, absolutely.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @ks445599

You've become hostile. I'm not interested in discussing with you further.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Can you summarize what you think my interpretation of Deutsch's book is so we know we're on the same page?

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Either way, I'm not interested in a competition over who knows BoI better. I've offered friendly criticism of your work in an effort to help—you're welcome to ignore it.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I'm familiar with it. But again, no mention of "good" explainers. Did you read the book in another language perhaps?

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

There is an approach called Whole-Brain Emulation which would instantiate AGI without programming it by simulating in sufficient detail the movements of a brain.

I go back and forth on which approach I find more promising—either way, I am more interested in understanding the mind.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

I refer to Deutsch's yardstick for having understood a computational task: "If you can't program it, you haven't understood it." One can't program AGI if one hasn't understood it first.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I'm guessing something is getting lost in translation because you use terms like "good explainer" and "hierarchical structures of societies" in reference to Deutsch's work, even though he doesn't use those terms.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I have read it several times in great detail. I like to think I know a thing or two about Deutsch's work on creativity, and I've had the opportunity to ask him about it, too, on several occasions.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @ks445599

One or two. Drop empiricism. Study Popperian epistemology. Read Deutsch's "The Beginning of Infinity." This is a good start, too: aeon.co/essays/how-clo…

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @ks445599

@ks445599 predicted this response 18 seconds before you posted it:

twitter.com/ks445599/statu…

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Not why it's "selected by societies," but why it evolved and how complex memes spread and why our species exists.

Nor does Deutsch ever speak of "good" explainers, IIRC—only universal ones.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

Yes to both.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I don't know what you're talking about or how it relates to the topic of AGI.

@dchackethal · · Show · Open on Twitter

RT @Ayaan:
Dear all,
Please, please read this essay by Julian Christopher. We need to take this Woke stuff seriously.
https://t.co/FlIGDBCU…

@dchackethal · · Show · Open on Twitter

@SurviveThrive2

Ok, if you understand it all, why haven't you built AGI yet?

@dchackethal · · Show · Open on Twitter

@ks445599 @SurviveThrive2

I like to think I solved the problem of free will and choices in my book.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I don't think he claimed that civilization selected for good explainers. And I didn't write my book "just on that."

@dchackethal · · Show · Open on Twitter

@IntuitMachine

What's the additional layer?

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Induction is impossible (see Hume). We've known this for ~250 years, but almost everyone ignores it!

@dchackethal · · Show · Open on Twitter

@saljyns

David Deutsch's "Beginning of Infinity" and his article "Creative Blocks: How Close Are We to Creating Artificial Intelligence?"

There's also my "A Window on Intelligence" if you want to count that.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2

Consciousness, for one.

@dchackethal · · Show · Open on Twitter

@saljyns

Me saying "widespread misconception" already implied that my definition was not common.

The other, common approaches to AGI have been refuted. So my definition of AGI isn't just "idiosyncratic." I can supply the necessary sources if interested.

@dchackethal · · Show · Open on Twitter

@saljyns

It makes all the difference because AGI is the project of explaining how the human mind works. That's an epistemological question.

It having to do with "learning tasks" is a widespread misconception.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Yes, I think that would be a good idea.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2

I don't see how any of that tells us anything about how the human mind works.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

What who said? You or Deutsch?

@dchackethal · · Show · Open on Twitter

@IntuitMachine

He did cover that.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I suggest reading it again. Maybe more than once. Especially chapters 4, 5, and 7.

Judging by your book's outline on Gumroad, it may help you correct some errors so you don't head down blind alleys.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I wasn't talking about the brain, I was talking about the mind.

And, because intelligence is a universal phenomenon—recall Deutsch's concept of the universal explainer—any simulation of it is qualitatively the same (modulo implementation details).

@dchackethal · · Show · Open on Twitter

@IntuitMachine

So that's a "yes," you have read his book?

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Your prediction is self-contradictory because a "discovery of AGI" is an explanation of how the mind works.

@dchackethal · · Show · Open on Twitter

@IntuitMachine @tangled_zans

This relates to what you wrote here: twitter.com/IntuitMachine/…

No need to refer to any "properties of living things" because intelligence is an emergent phenomenon. We know this from computational universality.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Cool. I also see you mentioning "Jumps to Universality" and "Constructor Theory." So... you've read his book, just not chapter 5?

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Having glanced at the outline you present on Gumroad, you may benefit from reading aeon.co/essays/how-clo… before you publish your book.

@dchackethal · · Show · Open on Twitter

@saljyns

Not the brain: the mind. There's a difference.

@dchackethal · · Show · Open on Twitter

@IntuitMachine @tangled_zans

E.g. a boiling pot of water. We can explain what happens there without calculating each position of each molecule at every second. The behavior of the pot is simpler on a higher level of emergence and is best explained on that higher level.

@dchackethal · · Show · Open on Twitter


@tangled_zans @IntuitMachine

A phenomenon is emergent when it is best explained without referring to its lower-level components. Compare David Deutsch’s “The Beginning of Infinity” chapter 5.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

No human can yet explain how the mind works. But one day, we shall. Problems are soluble.

@dchackethal · · Show · Open on Twitter

@PessimistsArc

“A woman had gone insane from excessive riding of the bicycle.” 😂

@dchackethal · · Show · Open on Twitter

@NASA @SpaceX

Amazing to follow this online.

@dchackethal · · Show · Open on Twitter

Though cool, GPT-3 is not a step toward AGI.

AGI is the project of explaining how the human mind works, and then implementing it on a computer.

I'm not aware of any insight GPT-3 gives us into how the mind works, nor is OpenAI after that (sadly).

@dchackethal · · Show · Open on Twitter

@SpaceX

Amazing what humans can build. Onward!

@dchackethal · · Show · Open on Twitter

@IAM__Network

By definition, AGI and human intelligence are the same.

@dchackethal · · Show · Open on Twitter

@ErickGalinkin

The project of AGI is to explain how the human mind works, and then implement it on a computer.

Since nature achieved human minds somehow, and since computers are universal, AGI must be possible.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

More generally: any good theory of AGI explains how the human mind works.

@dchackethal · · Show · Open on Twitter

@EmpressBashAura

Pretty far, I'm afraid. We need to explain how the human mind works. Only then can we build AGI. That's what most researchers fail to realize.

@dchackethal · · Show · Open on Twitter

@pmoinier @JulienSLauret @TDataScience

GPT-3 is very cool, but I'm afraid it's not AGI, because it doesn't explain how the mind works.

@dchackethal · · Show · Open on Twitter

@BantamCityGames

I'm a minute in and he mentions induction, which refers to an impossible process of knowledge creation. He won't build AGI if he goes down that road.

The primary question in building AGI is: how does the mind work?

@dchackethal · · Show · Open on Twitter

@seanmcbride

Yes. The primary task in building AGI is understanding the mind.

@dchackethal · · Show · Open on Twitter

@tjaulow

And yours truly is working to create a course for beginners, so stay tuned for that!

@dchackethal · · Show · Open on Twitter

@tjaulow

You bet. I started with a book called “HTML For Dummies.” Though not a fully-fledged programming language, HTML will feel like one, and the visual element will help you correct errors quickly. After that I learned CSS and JavaScript.

@dchackethal · · Show · Open on Twitter

@krazyander

I think what’s good or bad is objective.

Regardless—yes, we can’t predict its preferences and goals either way.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

Not at all! I’ve seen people successfully switch careers to coding in their forties and fifties (e.g., from law). I think people at any age and any career stage can do it.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @nburn42 @Plinz @aifdn

I had already given a metric: whether moral knowledge solves a problem, and how many. But no, there’s no metric that just solves all moral problems.

This is just another surface issue though. Like I said, we will keep having disagreements if we don’t agree on epistemology.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @nburn42 @Plinz @aifdn

Not much hinges on this as we disagree on a thousand surface issues but maybe just one underlying issue: epistemology. Not much point in discussing unless we agree on epistemology.

@dchackethal · · Show · Open on Twitter

@J__Hein

We can then ask: but why were those genes better able to spread? Which seems to me analogous to what you're looking for (correct me if I'm wrong).

@dchackethal · · Show · Open on Twitter

@J__Hein

Well, it's a bit like asking: why do male peacocks have such elaborate tails?

One answer is: because female peacocks like them. The deeper answer is: because genes that happened to code for slightly more elaborate tails spread better.

I think it's the same with memes.

@dchackethal · · Show · Open on Twitter

@campeters4 @david_perell

*watch on a heath ;-) But yes! A great thought experiment.

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @nburn42 @Plinz @aifdn

Guys don't get laid by fighting other guys, either.

@dchackethal · · Show · Open on Twitter

@nburn42 @Plinz @GoodNPlenty333 @aifdn

Ok which is?

@dchackethal · · Show · Open on Twitter

@GoodNPlenty333 @nburn42 @Plinz @aifdn

I'm happy when I play video games. Video games don't get me laid nor do they make me rich.

@dchackethal · · Show · Open on Twitter

@nburn42 @Plinz @GoodNPlenty333 @aifdn

Again, mental phenomena are emergent. We should explain them without referring to any underlying hardware. We know from computational universality that you can run the same phenomena as software on a laptop, even though laptops are not made of neurons.

@dchackethal · · Show · Open on Twitter
