Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Tweets

An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since become prohibitively expensive – I don't know whether or when I'll be able to update this archive.

But in case I do, you can subscribe via RSS – without a Twitter account. Rationale

@krazyander

... to a more positive, open, people-friendly (and, thereby, AGI-friendly) worldview, in which rapid progress and error correction are things not to be feared and avoided, but celebrated.

@dchackethal · · Show · Open on Twitter

@krazyander

...technology ultimately requires a shift in attitude from the kind of cynical, pessimistic and AGI-skeptical view I explained in this tweet: twitter.com/dchackethal/st…

@dchackethal · · Show · Open on Twitter

@krazyander

And, as I have argued in another thread with someone else, most of these surface issues are really only addressed properly by getting on the same page about epistemology.

Lastly, at the end of the day, embracing AGI and seeing it as a fascinating and ultimately positive...

@dchackethal · · Show · Open on Twitter

@krazyander

That software is the decisive limiting factor does not make for a trivial case—indeed, it follows from our best explanations about computation.

But yes, as I have argued before, more processing speed can help with faster error correction, including the correction of moral errors

@dchackethal · · Show · Open on Twitter

Most of the anti-AGI worries translate into this: sentencing AGIs to death for crimes against humanity they didn’t commit, nor could have committed because they haven’t been born yet, but might possibly commit.

Basically pre-natal thought crimes.

@dchackethal · · Show · Open on Twitter

I get VERY excited watching videos like this one: youtube.com/watch?v=y6VlzF…

@dchackethal · · Show · Open on Twitter

@MRMV84 @paulg

I've considered that, and it might be an interesting approach. But again, the tech was easy—a browser extension would also involve getting people to install it, use it frequently, etc.

Unless I could somehow integrate the browser extension with an existing social network...

@dchackethal · · Show · Open on Twitter

@G_Langenderfer @paulg

No.

@dchackethal · · Show · Open on Twitter

@paulg

I built a social network once that had the ability to compute the quality of a post. The tech was easy; getting people to join more difficult.

@dchackethal · · Show · Open on Twitter

@paulg

Reminds me I should spend more time reading and less time on Twitter.

@dchackethal · · Show · Open on Twitter

@nburn42 @Plinz

Well, the brain is a computer, but the mind is software running on the brain. The mind attempts to explain the world by repeatedly conjecturing and criticizing solutions to problems (Popper).

@dchackethal · · Show · Open on Twitter
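
The tweet above describes Popper's schema in algorithmic terms. As a toy illustration only (a schematic sketch, not a claim about how minds are actually implemented; all names here are hypothetical):

```python
def solve(problem, conjecture, criticize, max_attempts=1000):
    """Schematic Popperian loop: guess boldly, then weed out errors."""
    theory = conjecture(problem, criticism=None)  # initial bold guess
    for _ in range(max_attempts):
        criticism = criticize(theory, problem)
        if criticism is None:
            return theory  # no known criticism: tentatively accept
        # conjecture anew, informed by what was wrong with the old guess
        theory = conjecture(problem, criticism)
    return None  # problem remains open
```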

@krazyander

(and no, faster hardware doesn't make an AGI child even significantly different in the relevant ways)

@dchackethal · · Show · Open on Twitter

@krazyander

In other words, faster hardware by itself doesn't make an AGI child not a child.

If this still doesn't convince you, then I suspect that your condition to convince you wasn't sufficient in the first place.

@dchackethal · · Show · Open on Twitter

@krazyander

If you're like most AI-safety worriers, you may reply "but this is different! This time, it's orders of magnitude!" So were computers when they first came around, and they have been gaining orders of magnitude every decade since. Which has again been overwhelmingly a good thing

@dchackethal · · Show · Open on Twitter

@krazyander

That's not to mention Deutsch's point that people have been increasing their hardware over the centuries through pen, paper, computers, etc., and that's overwhelmingly been a good thing.

@dchackethal · · Show · Open on Twitter

@krazyander

Hardware performance is overrated for this particular issue. It needs to be accompanied by improved software performance, the AGI child needs to learn how to use fast hardware, etc...

@dchackethal · · Show · Open on Twitter

@Plinz

No, "a human mind is a function that takes in observations and yields behavior and policy updates" is an empiricist and behaviorist misrepresentation of the mind. Human minds are creative, and continue do be so without sense input.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

I've enjoyed videos from Bucky Roberts' channel: youtube.com/user/thenewbos…

And Treehouse has good tutorials as well (paid though!): teamtreehouse.com/courses

@dchackethal · · Show · Open on Twitter

@krazyander

The same question remains: but why? Where does “greater potential for rapidly gaining power and influence over the world” come from? And where do the bad inborn ideas come from?

@dchackethal · · Show · Open on Twitter

@david_perell

Or better yet, no school at all, just voluntary learning and fun.

@dchackethal · · Show · Open on Twitter

@krazyander

Btw I already explained how and why an AGI child is just like a human child and you ignored that and made an unargued assertion to the contrary when, if I’m not mistaken, I had met your condition to change your mind.

@dchackethal · · Show · Open on Twitter

@krazyander

Right, an AGI child might not be bright at all. Or mediocre. Or very smart.

To your second point: but why?!

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Ok. To learn about Popperian epistemology, I recommend reading "The Beginning of Infinity" by David Deutsch, or Karl Popper's "Objective Knowledge."

@dchackethal · · Show · Open on Twitter

@krazyander

That's not to mention that no parent of a human child ever goes "thank god my child doesn't develop quickly enough to harm the human race."

A good parent will want to help their child develop as quickly as possible.

@dchackethal · · Show · Open on Twitter

@krazyander

Yes, an AGI is literally a child. It will have a few inborn ideas and needs to figure out everything else for itself. Learn to perceive, learn to communicate, learn to move, etc.

How quickly that happens depends entirely on the child, just like with human children.

@dchackethal · · Show · Open on Twitter

@bnielson01 @ChipkinLogan @HeuristicAndy @RealtimeAI @mizroba

I understand it was tongue in cheek, just wanted to clarify. Didn’t mean to take away the humor. :)

@dchackethal · · Show · Open on Twitter

@krazyander

AGI is different in that regard from any other technology in that one wouldn’t instantiate an AGI to benefit from it, but because one wants to raise it and help it learn. Same reason people (should) have behind raising children.

But again: any other way to change your mind?

@dchackethal · · Show · Open on Twitter

@SpaceX

Holy shit

@dchackethal · · Show · Open on Twitter

@joerogan

Lovely doge

@dchackethal · · Show · Open on Twitter

@bnielson01 @ChipkinLogan @HeuristicAndy @RealtimeAI @mizroba

I think Logan was talking about the creation of wealth, not its redistribution.

@dchackethal · · Show · Open on Twitter

@shl

Yes, and are fun to do.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

I think the fact that ideas in favor of authority conflict with Popperian epistemology is merely interesting and worthy of exploration. It points to a problem. There lies a fascinating truth to be found. There is no need to call those who want to find that truth "idiotarians."

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

But they would also contend that privately-owned security companies would do a better job at it.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

And it doesn't seem that people try to understand the libertarian position. For example, Andy claims that they "sloganeer that anything the government does by definition is evil."

But they don't. I think they would say, if a cop prevents a murder, that's a good thing.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

Those skeptical of regulations are considered somehow flawed, or dumb. "Idiotarians." The thinking runs along the lines of "only an idiot could think that way." Which raises the related question of why that happens.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

What's telling is that these "disturbing idiotarians" happily grant that people in favor of regulations have reasons for thinking that and are not idiots. That good will is not reciprocated.

@dchackethal · · Show · Open on Twitter

@bnielson01

Please elaborate?

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

We will have a hard time agreeing on the surface issue of AI safety if we don't resolve the underlying disagreement about epistemology.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

The problem is that Bayesian epistemology is completely false, and the only good epistemology we have found is Popper's. With Popper's, it is much easier to see that AI-safety concerns apply to AGIs no more than they do to any other people.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Your bio says you're interested in effective altruism, which makes me think that you quite possibly subscribe to Bayesian epistemology and other related rationalist ideas about epistemology.

Those who worry about AI safety usually come from that direction.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

AGIs could not be created as psychopaths, because psychopathic ideas cannot be "induced"—there is no instruction from without (Popper).

Which brings me to my main point. I doubt any of this will convince you, because the underlying disagreement is an epistemological one.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

You cannot prevent someone from becoming a psychopath by coercing them, or through "education," or any other restrictive measures.

To think it is our concern to "manage its development" is rather sinister. An AGI is a child, free to develop in any way it wants.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Since AGIs, by definition, are people, they will automatically be capable of feeling love and of being psychopaths, and everything else/in between.

Whether an AGI—like any person—becomes a psychopath depends on the development of ideas within that person's mind.

@dchackethal · · Show · Open on Twitter

When the conflict is solved, that leads to happiness, both in society and within the mind. Coercion, in turn, can only lead to misery: something's gotta give.

Source on coercion (highly recommended): takingchildrenseriously.com/node/50#Coerci…

@dchackethal · · Show · Open on Twitter

Solving the conflict would mean that both sides within oneself are happy. That, too, is always possible, but takes creativity and is, therefore, hard.

@dchackethal · · Show · Open on Twitter

That's bound to make oneself unhappy, just like the lockdown is leading to unrest. And any gym routine based on coercion won't last long.

@dchackethal · · Show · Open on Twitter

An example of coercion within a mind is when the idea of going to the gym arbitrarily wins over the idea of staying in bed. In other words, one forces oneself to go to the gym even though a conflicting idea is still present in one's mind.

@dchackethal · · Show · Open on Twitter

Truly resolving the conflict would mean both sides are happy. That is always possible, but takes creativity, so it can be hard.

@dchackethal · · Show · Open on Twitter

A timely example of coercion across minds is the lockdown some societies have implemented. There is coercion because the idea of locking down wins arbitrarily over the idea of not locking down without solving the conflict.

@dchackethal · · Show · Open on Twitter

The evolution of ideas in a mind is analogous to the evolution of ideas across minds. We can see this when we study coercion.

A thread 🧵👇

@dchackethal · · Show · Open on Twitter

@krazyander

That way won’t work because it’s not on AGIs to benefit humanity.

Any other way?

@dchackethal · · Show · Open on Twitter

@julepparadox @alvarlagerlof @iamdevloper

You can do a whole-word search on “i”.

@dchackethal · · Show · Open on Twitter
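
For anyone trying the tip above: "whole-word" search is just a word-boundary match. A minimal sketch in Python (the tweet names no tool, so the regex flavor here is an assumption; most editors expose the same idea as a "match whole word" toggle):

```python
import re

code = "for i in items: total += i"

# \b is a word boundary, so this matches the identifier `i`
# but not the `i` inside `items`.
matches = re.findall(r"\bi\b", code)
print(matches)  # ['i', 'i']
```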

More buggy animal programming twitter.com/_Islamicat/sta…

@dchackethal · · Show · Open on Twitter

@iamdevloper

In exchange for Corona, all the null pointer exceptions will fix themselves on Christmas.

@dchackethal · · Show · Open on Twitter

@_Islamicat

They is did not expect this.

@dchackethal · · Show · Open on Twitter

@krazyander

Okay how could I change your mind?

@dchackethal · · Show · Open on Twitter

@EmpressBashAura

No, that's one of the things that differentiates AGI from narrow AI—it doesn't require any training data.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

That doesn't answer my question about how one could change your mind. To be clear, I didn't ask for a concession.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

In any case, it sounds like your mind is made up. Why keep discussing? How could one change your mind?

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

I used to think about slave holders the same way, but I recently learned from a podcast that many slave holders thought they were giving their slaves a better life than in Africa. Slave holders did not consider themselves evil, yet they were doing great evil.

@dchackethal · · Show · Open on Twitter

In other words, these algorithms are coerced into optimizing some predetermined criterion. That's why they couldn't possibly be AGIs: that requires freedom from coercion.

@dchackethal · · Show · Open on Twitter

One of the driving forces of evolution is replication, and selection is a phenomenon that emerges from differences in replication. Existing evolutionary algorithms force a fitness function onto their population of replicators—which is not how evolution works in reality.
👇

@dchackethal · · Show · Open on Twitter
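
To make the criticism concrete, here is a minimal sketch of a textbook evolutionary algorithm (Python; the specific fitness function and parameters are made up for illustration). Selection is driven entirely by a fixed, externally imposed criterion—the "coercion" the thread objects to—whereas biological evolution has no such global fitness function:

```python
import random

# A predetermined criterion, imposed on the population from outside.
def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)  # peaks when every gene is 0.5

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

# A random population of "replicators", each a list of 8 genes.
population = [[random.random() for _ in range(8)] for _ in range(50)]

for generation in range(100):
    # Selection: rank everyone by the fixed fitness function...
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # ...and let only the top half replicate, with variation.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print(round(fitness(best), 4))
```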

@Plinz

Yup. Sad things happened to them.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

...and that his book is a slaveholder's manual instructing people how to keep "their" AGIs in check.

People look back in horror at slavery in the US and ask, "how could this happen." Today it's Bostrom's book. That's how.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

I read Bostrom's book. He mentions Deutsch in the acknowledgments but he clearly didn't take Deutsch's (superior) ideas seriously or he wouldn't have written it. He would have known that the very concept of superintelligence is an appeal to the supernatural...

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

The question is not whether some AGIs will be psychopaths (some might). The question is whether that warrants shackling all AGIs ("aligning" is just a euphemism for coercion/shackling).

It takes a shift in perspective to recognize how disgusting "alignment" really is.

@dchackethal · · Show · Open on Twitter

@EmpressBashAura

Exactly, an AGI would be capable of love, relationships, humor, etc.

@dchackethal · · Show · Open on Twitter

@ReachChristofer @david_hurn

Yup, that’s where I fall too; both seem true at different times.

@dchackethal · · Show · Open on Twitter

@david_perell @sivers

Yes, one should put in the time. But it shouldn’t be uncomfortable. It should be fun—and once it’s fun, there won’t be any distractions. If it’s fun, you’ll put in the time happily and automatically.

@dchackethal · · Show · Open on Twitter

@itsDanielSuarez @Plinz @NASA @SpaceX @AstroBehnken @Astro_Doug

Yeah pretty nuts! People are awesome. Onward!

@dchackethal · · Show · Open on Twitter

@thatGuy57039455 @SpaceX

Right, so if it’s prior, and we need ten more minutes, wouldn’t + make more sense?

@dchackethal · · Show · Open on Twitter

@SpaceX

Awesome following this live!!

@dchackethal · · Show · Open on Twitter

@SpaceX

Always wondered why it’s T-x and not T+x...

@dchackethal · · Show · Open on Twitter

@krazyander

Yes, people can harm each other. But should we shackle them in advance because they might harm each other?

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Well, again, AGIs are people by definition, so they can feel love and be altruistic (let’s table for the moment whether altruism is a good thing). And they will be a product of the culture they’re born into, like all children.

@dchackethal · · Show · Open on Twitter

@Plinz

Why speak Latin? :)

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

AGIs are literally children. Read your tweets again while imagining you’re talking about children and maybe you’ll see how sinister those tweets are.

@dchackethal · · Show · Open on Twitter

@david_perell

Indeed. Once something is automated, it’s time to move on to the next problem and solve it creatively.

@dchackethal · · Show · Open on Twitter

AGIs cannot work under regulations or bondage. They can only work in spite of them. Like economies and memes, the minds of all people, including AGIs, are evolutionary processes that self-regulate. Impose force, and they cease being people.

@dchackethal · · Show · Open on Twitter

@krazyander

That's not to mention that just because an AGI's interests may not align with ours doesn't mean they won't, and especially doesn't mean it wants to hurt us. If it actually does want to hurt us, we can defend ourselves. Until then, assume it's a potential friend, like all people.

@dchackethal · · Show · Open on Twitter

@krazyander

Also note that an AGI won't have an "end goal." It's a person. People don't have end goals. They follow their interests and want to learn/solve problems. After they find a solution they move on to the next problem.

@dchackethal · · Show · Open on Twitter

@krazyander

Processing power, yes. As long as we don't know for a fact that it wants to hurt us, yes, give it all the tools it needs to correct errors. Help it learn (if it wants the help). If it learns about morals it won't hurt us.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @chophshiy @ks445599

I don't think we touched on free will on the podcast. In my book, I say that having free will means being the author and enactor of one's choices.

@dchackethal · · Show · Open on Twitter

@JulienSLauret @pmoinier @TDataScience

There could be. But AGI, by definition, simulates the human mind—how could one hope to write a program that simulates the mind without understanding it first?

@dchackethal · · Show · Open on Twitter

@chophshiy @SurviveThrive2 @ks445599

In other words, you admit you don't know either and are making up excuses.

In any case, would you be equally offended if I said "we don't know how to time travel"?

PS You don't need to put two spaces after each period, we don't write on typewriters anymore.

@dchackethal · · Show · Open on Twitter

Anyone can become a developer—here's how I did it:

medium.com/@hcd/anyone-ca…

@dchackethal · · Show · Open on Twitter

@chophshiy @SurviveThrive2 @ks445599

Ok, are you saying we know how the mind works? If so, can you please tell me how it works—I'd sincerely love to know.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

You keep dodging questions, doubting my understanding of BoI (first in the context of explainers, now suddenly in the context of instrumentalism), expecting me to magically understand your made-up terminology, and not explaining where that terminology comes from. Our convo is over :)

@dchackethal · · Show · Open on Twitter

@chophshiy @SurviveThrive2 @ks445599

Neither @ks445599 nor I ever appealed to any authorities. You're mischaracterizing us.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I take your unwillingness to summarize my view as evidence that you haven't actually understood it, despite your claims that it's "entirely off."

@dchackethal · · Show · Open on Twitter

@IntuitMachine

In a rational discussion it's good practice to summarize the other person's view so well the other person has nothing to add. It also provides an opportunity to prevent talking past each other. You may be arguing against points I didn't make.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

"Creative explainer" is redundant btw.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I can't know what made-up terminology means if it isn't explained to me. I offered the explanation that perhaps you read the book in a different language, but you are dodging my questions.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

Yes, absolutely.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @ks445599

You've become hostile. I'm not interested in discussing with you further.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Can you summarize what you think my interpretation of Deutsch's book is so we know we're on the same page?

@dchackethal · · Show · Open on Twitter

@IntuitMachine

Either way, I'm not interested in a competition over who knows BoI better. I've offered friendly criticism of your work in an effort to help—you're welcome to ignore it.

@dchackethal · · Show · Open on Twitter

@IntuitMachine

I'm familiar with it. But again, no mention of "good" explainers. Did you read the book in another language perhaps?

@dchackethal · · Show · Open on Twitter
