Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Tweets

An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.

But in case I do, you can subscribe via RSS – without a Twitter account. Rationale

@krlwlzn

Yup, the selection can be natural or artificial.

@dchackethal · · Show · Open on Twitter

@neiltyson

Though of course, that is still granting agency on the part of the elephant, which it doesn't have. I'm just trying to draw attention to the fact that its "motivation" need not have been to save the human.

@dchackethal · · Show · Open on Twitter

@neiltyson

Another possible caption: "Ooh, looks like it's a human, I've had fun playing with them before and they give me treats. I'll go play with that one again, maybe I'll get treats again this time!"

@dchackethal · · Show · Open on Twitter

Correction:

"Any time we're impressed by what non-human animals do, it's simply because we forget that they have genetically inherited sophisticated knowledge, which was created by biological evolution, not by those animals."

@dchackethal · · Show · Open on Twitter

Common error: mistaking the sophistication of an animal's knowledge for the animal being intelligent.

For the former, the animal need only contain sophisticated knowledge—never mind where that knowledge came from. For the latter, it must have created the knowledge itself. twitter.com/neiltyson/stat…

@dchackethal · · Show · Open on Twitter

@DoqxaScott @dwarkesh_sp

Not sure the labels strong/weak or mind virus apply to either of those theories.

But generally speaking, if most people have Newton's idea of gravity, it's because his meme is better at spreading than Einstein's.

@dchackethal · · Show · Open on Twitter

From here, it's just one more step to meeting a necessary condition for AGI: making it possible for the program to correct its own errors.

@dchackethal · · Show · Open on Twitter

Applied to programming, it means we shouldn't judge a program by how well it solves a given problem, but by how easy it makes it to detect and correct errors it already contains.

@dchackethal · · Show · Open on Twitter

Similarly, in science, we do not judge institutions by whether they produce or entrench good theories, but by how easy they make it to criticize and replace bad theories (ibid).

@dchackethal · · Show · Open on Twitter

How do we get from Popper to AGI? 🧵👇

Popper's criterion of democracy entails that we shouldn't judge a political system by whether it produces good policies or leaders, but by how easy it makes it "to remove bad ones that are already there." — The Beginning of Infinity, ch. 9

@dchackethal · · Show · Open on Twitter

@dwarkesh_sp

Wait—I could see that weaker strains of virus spread farther, but is it true that strains that spread farther get weaker? Seems as though the causation might be reversed.

@dchackethal · · Show · Open on Twitter

@leeerob

Oh, what’s SSG?

@dchackethal · · Show · Open on Twitter

@leeerob

And ~1% use Reagent, even though it is superior and doesn’t need any explicit Redux.

@dchackethal · · Show · Open on Twitter

@SpaceX

Amazing craftsmanship

@dchackethal · · Show · Open on Twitter

@SpaceX

Amazing. Onward!

@dchackethal · · Show · Open on Twitter

@HeuristicAndy @MartvMegen

I checked those out; I may post in themotte in the future.

@dchackethal · · Show · Open on Twitter

@HeuristicAndy @MartvMegen

The Four Strands, obv ;-)

@dchackethal · · Show · Open on Twitter

@MartvMegen

I’ve had mixed experiences with them for sure. May not post there again.

@dchackethal · · Show · Open on Twitter

@PessimistsArc

Just like today people believe in superintelligences!

@dchackethal · · Show · Open on Twitter

@Crit_Rat

Due to its superhuman stretching abilities, it may be dangerous. Cat breeding should be regulated and we should be prepared to destroy the cats when they rise up and stretch everywhere.

@dchackethal · · Show · Open on Twitter

@_Islamicat

Is great super hero

@dchackethal · · Show · Open on Twitter

@DoqxaScott @Plinz @BasilMarte @Levi7hart @nosilverv

Yes but my point was merely that the evolution of ideas itself has no purpose. People and their minds do contain purposes, of course.

@dchackethal · · Show · Open on Twitter

@Plinz @BasilMarte @Levi7hart @nosilverv

No, there is no purpose in the evolution of ideas. Ideas get themselves adopted because they’re good at spreading. Some ideas help improve world views and intellectual developments etc, but some don’t.

@dchackethal · · Show · Open on Twitter

@Plinz @BasilMarte @Levi7hart @nosilverv

Agreed, but people are creative, so the knowledge of how to fit in need not be encoded genetically; they can create it during their lifetimes.

@dchackethal · · Show · Open on Twitter

@Plinz @BasilMarte @Levi7hart @nosilverv

Plus, memes evolve too quickly for genetic defenses against them to evolve. Not to mention that, even if it were possible, memes are much more powerful than genes and easily override genetic instructions (as evidenced by the religiously spread meme of celibacy).

@dchackethal · · Show · Open on Twitter

@Plinz @BasilMarte @Levi7hart @nosilverv

Also, following Dawkins, I believe memetic and genetic evolution are largely separate. But even if they weren't, parasitic genes and memes can manage to spread well despite hurting their hosts: they keep their hosts alive just well enough to spread, without promoting their wellbeing.

@dchackethal · · Show · Open on Twitter

@Plinz @BasilMarte @Levi7hart @nosilverv

I'm not. I agree that the evolution of religion is rich in memetic terms. That doesn't contradict what I said.

@dchackethal · · Show · Open on Twitter

@ella_hoeppner

Slightly better, maybe. Luckily, color contrast is objectively measurable. Paste your background color as hex (#041311) into this tool and it will let you pick a well-contrasted text color:

colorsafe.co
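
For reference, a minimal TypeScript sketch of the WCAG 2.x relative-luminance and contrast-ratio math that contrast checkers of this kind are typically based on (assuming colorsafe.co follows WCAG; the helper names are illustrative, and #041311 is the background color from the tweet):

    // Minimal sketch of the WCAG 2.x contrast-ratio math that checkers of this
    // kind are typically based on (assumption, not taken from the tweet).
    // #041311 is the background color mentioned above.

    function channel(c: number): number {
      // Convert an 8-bit sRGB channel to linear light.
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    function luminance(hex: string): number {
      // Relative luminance of a "#rrggbb" color.
      const n = parseInt(hex.replace("#", ""), 16);
      const r = (n >> 16) & 0xff;
      const g = (n >> 8) & 0xff;
      const b = n & 0xff;
      return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
    }

    function contrastRatio(a: string, b: string): number {
      // Ratio of the lighter to the darker luminance, each offset by 0.05.
      const [lighter, darker] = [luminance(a), luminance(b)].sort((x, y) => y - x);
      return (lighter + 0.05) / (darker + 0.05);
    }

    // White text on #041311 comes out around 19:1; WCAG AA asks for at least 4.5:1.
    console.log(contrastRatio("#ffffff", "#041311").toFixed(1));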

@dchackethal · · Show · Open on Twitter

@Plinz @BasilMarte @Levi7hart @nosilverv

I think you're granting way too much genius and agency to the "authors" of any religion.

Religions are memeplexes that spread because they happen to be adapted to spreading, not because they explain the world or are good for people etc.

@dchackethal · · Show · Open on Twitter

People get mad when you take computational universality seriously. Or any idea, really. Other than the one that says not to take ideas seriously.

From reddit.com/r/Intellectual… https://t.co/3GF4t9AMlo

@dchackethal · · Show · Open on Twitter

@ella_hoeppner

You may benefit from increasing the color contrast on your color palette. I had to turn up the brightness on my phone quite a bit to make the UI legible, especially the code on the top right.

@dchackethal · · Show · Open on Twitter

@ella_hoeppner

Very cool. How have you been building the UI—is it written in HTML and displayed in a browser, or some other way?

@dchackethal · · Show · Open on Twitter

@bartholmberg

  1. I think AGI will be achieved in a qualitative jump from something much less powerful, but it will take time to get to that jump.

  2. Somewhere in the West because Western countries are the only ones with even a hint of good epistemology. No sign of AGI yet though.

@dchackethal · · Show · Open on Twitter

RT @deezzer:
For coders - A great discussion on benefits of Functional vs OOP youtube.com/watch?v=uu3tb3… #programming #better #reactjs #functi…

@dchackethal · · Show · Open on Twitter

@SpaceX

Gorgeous

@dchackethal · · Show · Open on Twitter

I was interviewed about the new Berlin programming language, which aims to make programming simpler and more enjoyable. This video is a good intro: youtube.com/watch?v=uu3tb3…

@dchackethal · · Show · Open on Twitter

@shl

This thread has been archived on archive.vn/jsBFE, archive.vn/fVbeG, archive.vn/KzNXG, and various mirrors thereof.

@dchackethal · · Show · Open on Twitter

@shl

Social-justice warriors may be gaining power right now, but they will eat themselves up eventually, and they will turn against you and may well use this Twitter thread to do so.

@dchackethal · · Show · Open on Twitter

@shl

What you post makes for good marketing and it’s very “in” right now, but people may not always look on it so favorably. You may want to be more careful. You should jump off the social-justice bandwagon while you still can.

@dchackethal · · Show · Open on Twitter

@shl

I’m not a boomer :) I think you have good intentions btw. But by basing investment decisions on race and publicly admitting to it you’re just begging for someone you turn down in the future to file a lawsuit against you.

@dchackethal · · Show · Open on Twitter

@shl

By basing investment decisions on skin color, you’re lowering the standards for blacks relative to nonblacks, which is economically counterproductive, patronizing, and, of course, racist.

@dchackethal · · Show · Open on Twitter

@shl

You’re clearly making investment decisions based on race—otherwise you couldn’t account for your “subconscious bias,” which is a non-falsifiable idea that people can always use to justify decisions based on skin color.

@dchackethal · · Show · Open on Twitter

@shl

(And btw, by admitting to “subconscious bias” you basically admitted to harboring racist thoughts against black people.) So then, the question is, how can you ensure a “fair” outcome without favoring black people and disfavoring white people?

@dchackethal · · Show · Open on Twitter

@shl

I take that as a “yes,” because, to ensure you’re not “exhibit[ing] any subconscious bias,” you must look at the ratio of black founders you invested in. That’s why you share the number of “black investments” so proudly.

@dchackethal · · Show · Open on Twitter

@shl

And to do that, and because—people would hope—you want to ensure a proportionate and fair outcome, you are taking into account their race to make an investment decision, yes?

@dchackethal · · Show · Open on Twitter

@shl

Are you favoring black people for investments? (Not a rhetorical question)

@dchackethal · · Show · Open on Twitter

@EMF_7 @NASAJuno

I was secretly hoping the first one was a photograph, too!

@dchackethal · · Show · Open on Twitter

@NASAJuno

I’m assuming the first picture is a rendering, the second is a photograph?

@dchackethal · · Show · Open on Twitter

@ReachChristofer @FitzClaridge

Do ideas replicate within a mind?

@dchackethal · · Show · Open on Twitter

@ReachChristofer @FitzClaridge

What are the similarities and differences between coercive mechanisms within a mind and across minds?

Is TCS compatible with non-libertarian political views?

@dchackethal · · Show · Open on Twitter

@krazyander

... to a more positive, open, people-friendly (and, thereby, AGI-friendly) worldview, in which rapid progress and error correction are things not to be feared and avoided, but celebrated.

@dchackethal · · Show · Open on Twitter

@krazyander

technology ultimately requires a shift in attitude from the kind of cynical, pessimistic and AGI-skeptical view I explained in this tweet: twitter.com/dchackethal/st…

@dchackethal · · Show · Open on Twitter

@krazyander

And, as I have argued in another thread with someone else, most of these surface issues are really only addressed properly by getting on the same page about epistemology.

Lastly, at the end of the day, embracing AGI and seeing it as a fascinating and ultimately positive...

@dchackethal · · Show · Open on Twitter

@krazyander

That software is the decisive limiting factor does not make for a trivial case—indeed, it follows from our best explanations about computation.

But yes, as I have argued before, more processing speed can help with faster error correction, including the correction of moral errors.

@dchackethal · · Show · Open on Twitter

Most of the anti-AGI worries translate into this: sentencing AGIs to death for crimes against humanity they didn’t commit, nor could have committed because they haven’t been born yet, but might possibly commit.

Basically pre-natal thought crimes.

@dchackethal · · Show · Open on Twitter

I get VERY excited watching videos like this one: youtube.com/watch?v=y6VlzF…

@dchackethal · · Show · Open on Twitter

@MRMV84 @paulg

I've considered that, and it might be an interesting approach. But again, the tech was easy—a browser extension would also involve getting people to install it, use it frequently, etc.

Unless I could somehow integrate the browser extension with an existing social network...

@dchackethal · · Show · Open on Twitter

@G_Langenderfer @paulg

No.

@dchackethal · · Show · Open on Twitter

@paulg

I built a social network once that had the ability to compute the quality of a post. The tech was easy; getting people to join was more difficult.

@dchackethal · · Show · Open on Twitter

@paulg

Reminds me I should spend more time reading and less time on Twitter.

@dchackethal · · Show · Open on Twitter

@nburn42 @Plinz

Well, the brain is a computer, but the mind is software running on the brain. The mind attempts to explain the world by repeatedly conjecturing and criticizing solutions to problems (Popper).

@dchackethal · · Show · Open on Twitter

@krazyander

(and no, faster hardware doesn't make an AGI child even significantly different in the relevant ways)

@dchackethal · · Show · Open on Twitter

@krazyander

In other words, faster hardware by itself doesn't make an AGI child not a child.

If this still doesn't convince you, then I suspect that your condition to convince you wasn't sufficient in the first place.

@dchackethal · · Show · Open on Twitter

@krazyander

If you're like most AI-safety worriers, you may reply "but this is different! This time, it's orders of magnitude!" So were computers when they first came around, and they have been gaining orders of magnitude every decade since. Which has again been overwhelmingly a good thing.

@dchackethal · · Show · Open on Twitter

@krazyander

That's not to mention Deutsch's point that people have been increasing their hardware over the centuries through pen, paper, computers, etc., and that's overwhelmingly been a good thing.

@dchackethal · · Show · Open on Twitter

@krazyander

Hardware performance is overrated for this particular issue. It needs to be accompanied by improved software performance, the AGI child needs to learn how to use fast hardware, etc...

@dchackethal · · Show · Open on Twitter

@Plinz

No, "a human mind is a function that takes in observations and yields behavior and policy updates" is an empiricist and behaviorist misrepresentation of the mind. Human minds are creative, and continue do be so without sense input.

@dchackethal · · Show · Open on Twitter

@TahaElGabroun

I've enjoyed videos from Bucky Roberts' channel: youtube.com/user/thenewbos…

And Treehouse has good tutorials as well (paid though!): teamtreehouse.com/courses

@dchackethal · · Show · Open on Twitter

@krazyander

The same question remains: but why? Where does “greater potential for rapidly gaining power and influence over the world” come from? And where do the bad inborn ideas come from?

@dchackethal · · Show · Open on Twitter

@david_perell

Or better yet, no school at all, just voluntary learning and fun.

@dchackethal · · Show · Open on Twitter

@krazyander

Btw, I already explained how and why an AGI child is just like a human child; you ignored that and made an unargued assertion to the contrary when, if I’m not mistaken, I had met your condition for changing your mind.

@dchackethal · · Show · Open on Twitter

@krazyander

Right, an AGI child might not be bright at all. Or mediocre. Or very smart.

To your second point: but why?!

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Ok. To learn about Popperian epistemology, I recommend reading "The Beginning of Infinity" by David Deutsch, or Karl Popper's "Objective Knowledge."

@dchackethal · · Show · Open on Twitter

@krazyander

That's not to mention that no parent of a human child ever goes "thank god my child doesn't develop quickly enough to harm the human race."

A good parent will want to help their child develop as quickly as possible.

@dchackethal · · Show · Open on Twitter

@krazyander

Yes, an AGI is literally a child. It will have a few inborn ideas and needs to figure out everything else for itself. Learn to perceive, learn to communicate, learn to move, etc.

How quickly that happens depends entirely on the child, just like with human children.

@dchackethal · · Show · Open on Twitter

@bnielson01 @ChipkinLogan @HeuristicAndy @RealtimeAI @mizroba

I understand it was tongue in cheek, just wanted to clarify. Didn’t mean to take away the humor. :)

@dchackethal · · Show · Open on Twitter

@krazyander

AGI is different in that regard from any other technology in that one wouldn’t instantiate an AGI to benefit from it, but because one wants to raise it and help it learn. Same reason people (should) have behind raising children.

But again: any other way to change your mind?

@dchackethal · · Show · Open on Twitter

@SpaceX

Holy shit

@dchackethal · · Show · Open on Twitter

@joerogan

Lovely doge

@dchackethal · · Show · Open on Twitter

@bnielson01 @ChipkinLogan @HeuristicAndy @RealtimeAI @mizroba

I think Logan was talking about the creation of wealth, not its redistribution.

@dchackethal · · Show · Open on Twitter

@shl

Yes, and are fun to do.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

I think the fact that ideas in favor of authority conflict with Popperian epistemology is merely interesting and worthy of exploration. It points to a problem. There lies a fascinating truth to be found. There is no need to call those who want to find that solution "idiotarians."

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

But they would also contend that privately-owned security companies would do a better job at it.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

And it doesn't seem that people try to understand the libertarian position. For example, Andy claims that they "sloganeer that anything the government does by definition is evil."

But they don't. I think they would say, if a cop prevents a murder, that's a good thing.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

Those skeptical of regulations are considered flawed somehow, or dumb ("idiotarians"), or people think along the lines of "only an idiot could think that way." Which raises the related question of why that happened.

@dchackethal · · Show · Open on Twitter

@bnielson01 @HeuristicAndy @RealtimeAI @mizroba

What's telling is that these "disturbing idiotarians" happily grant that people in favor of regulations have reasons for thinking that and are not idiots. That good will is not reciprocated.

@dchackethal · · Show · Open on Twitter

@bnielson01

Please elaborate?

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

We will have a hard time agreeing on the surface issue of AI safety if we don't resolve the underlying disagreement about epistemology.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

The problem is that Bayesian epistemology is completely false, and the only good epistemology we have found is Popper's. With Popper's, it is much easier to see that AI-safety concerns do not apply to AGIs any more than they do to any other people.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Your bio says you're interested in effective altruism, which makes me think that you quite possibly subscribe to Bayesian epistemology and other related rationalist ideas about epistemology.

Those who worry about AI safety usually come from that direction.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

AGIs could not be created as psychopaths, because psychopathic ideas cannot be "induced"—there is no instruction from without (Popper).

Which brings me to my main point. I doubt any of this will convince you, because the underlying disagreement is an epistemological one.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

You cannot prevent someone from becoming a psychopath by coercing them, or through "education," or any other restrictive measures.

To think it is our concern to "manage its development" is rather sinister. An AGI is a child, free to develop in any way it wants.

@dchackethal · · Show · Open on Twitter

@sapepens @krazyander

Since AGIs, by definition, are people, they will automatically be capable of feeling love and of being psychopaths, and everything else/in between.

Whether an AGI—like any person—becomes a psychopath depends on the development of ideas within that person's mind.

@dchackethal · · Show · Open on Twitter

When the conflict is solved, that leads to happiness, both in society and within the mind. Coercion, in turn, can only lead to misery: something's gotta give.

Source on coercion (highly recommended): takingchildrenseriously.com/node/50#Coerci…

@dchackethal · · Show · Open on Twitter

Solving the conflict would mean that both sides within oneself are happy. That, too, is always possible, but takes creativity and is, therefore, hard.

@dchackethal · · Show · Open on Twitter

That's bound to make oneself unhappy, just like the lockdown is leading to unrest. And any gym routine based on coercion won't last long.

@dchackethal · · Show · Open on Twitter

An example of coercion within a mind is when the idea of going to the gym arbitrarily wins over the idea of staying in bed. In other words, one forces oneself to go to the gym even though a conflicting idea is still present in one's mind.

@dchackethal · · Show · Open on Twitter

Truly resolving the conflict would mean both sides are happy. That is always possible, but takes creativity, so it can be hard.

@dchackethal · · Show · Open on Twitter
