Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I do, you can subscribe via RSS – without a Twitter account. Rationale
AGI differs in that regard from any other technology: one wouldn’t instantiate an AGI to benefit from it, but because one wants to raise it and help it learn. That's the same reason people (should) have for raising children.
But again: any other way to change your mind?
@bnielson01 @ChipkinLogan @HeuristicAndy @RealtimeAI @mizroba
I think Logan was talking about the creation of wealth, not its redistribution.
@bnielson01 @HeuristicAndy @RealtimeAI @mizroba
I think the fact that ideas in favor of authority conflict with Popperian epistemology is merely interesting and worthy of exploration. It points to a problem. There lies a fascinating truth to be found. There is no need to call those who want to find that solution "idiotarians."
@bnielson01 @HeuristicAndy @RealtimeAI @mizroba
But they would also contend that privately-owned security companies would do a better job at it.
@bnielson01 @HeuristicAndy @RealtimeAI @mizroba
And it doesn't seem that people try to understand the libertarian position. For example, Andy claims that they "sloganeer that anything the government does by definition is evil."
But they don't. I think they would say, if a cop prevents a murder, that's a good thing.
@bnielson01 @HeuristicAndy @RealtimeAI @mizroba
Those skeptical of regulations are considered somehow flawed, or dumb: "idiotarians." The thinking goes along the lines of "only an idiot could think that way." Which raises the related question of why that happened.
@bnielson01 @HeuristicAndy @RealtimeAI @mizroba
What's telling is that these "disturbing idiotarians" happily grant that people in favor of regulations have reasons for thinking that and are not idiots. That good will is not reciprocated.
We will have a hard time agreeing on the surface issue of AI safety if we don't resolve the underlying disagreement about epistemology.
The problem is that Bayesian epistemology is completely false, and the only good epistemology we have found is Popper's. With Popper's, it is much easier to see that AI-safety concerns apply to AGIs no more than they do to any other people.
Your bio says you're interested in effective altruism, which makes me think that you quite possibly subscribe to Bayesian epistemology and other related rationalist ideas about epistemology.
Those who worry about AI safety usually come from that direction.
AGIs could not be created as psychopaths, because psychopathic ideas cannot be "induced"—there is no instruction from without (Popper).
Which brings me to my main point. I doubt any of this will convince you, because the underlying disagreement is an epistemological one.
You cannot prevent someone from becoming a psychopath by coercing them, or through "education," or any other restrictive measures.
To think it is our concern to "manage its development" is rather sinister. An AGI is a child, free to develop in any way it wants.
Since AGIs, by definition, are people, they will automatically be capable of feeling love and of being psychopaths, and everything else in between.
Whether an AGI—like any person—becomes a psychopath depends on the development of ideas within that person's mind.
When the conflict is solved, that leads to happiness, both in society and within the mind. Coercion, in turn, can only lead to misery: something's gotta give.
Source on coercion (highly recommended): takingchildrenseriously.com/node/50#Coerci…
Solving the conflict would mean that both sides within oneself are happy. That, too, is always possible, but takes creativity and is, therefore, hard.
That's bound to make oneself unhappy, just like the lockdown is leading to unrest. And any gym routine based on coercion won't last long.
An example of coercion within a mind is when the idea of going to the gym arbitrarily wins over the idea of staying in bed. In other words, one forces oneself to go to the gym even though a conflicting idea is still present in one's mind.
Truly resolving the conflict would mean both sides are happy. That is always possible, but takes creativity, so it can be hard.
A timely example of coercion across minds is the lockdown some societies have implemented. There is coercion because the idea of locking down wins arbitrarily over the idea of not locking down without solving the conflict.
The evolution of ideas in a mind is analogous to the evolution of ideas across minds. We can see this when we study coercion.
A thread 🧵👇
@julepparadox @alvarlagerlof @iamdevloper
You can do a whole-word search on “i”.
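A minimal sketch of what such a whole-word search could look like, assuming (hypothetically) that the code being searched is available as a Python string:

```python
import re

# Hypothetical snippet of source code to search through.
source = "for i in items: print(i, width, height)"

# \b marks word boundaries, so only the standalone token "i" matches,
# not the "i" inside "items", "print", "width", or "height".
matches = re.findall(r"\bi\b", source)
print(len(matches))  # -> 2
```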
In exchange for Corona, all the null pointer exceptions will fix themselves on Christmas.
No, that's one of the things that differentiates AGI from narrow AI—it doesn't require any training data.
That doesn't answer my question about how one could change your mind. To be clear, I didn't ask for a concession.
In any case, it sounds like your mind is made up. Why keep discussing? How could one change your mind?
I used to think about slaveholders the same way, but I recently learned from a podcast that many slaveholders thought they were giving their slaves a better life than they would have had in Africa. Slaveholders did not consider themselves evil, yet they were doing great evil.
In other words, these algorithms are coerced into optimizing some predetermined criterion. That's why they couldn't possibly be AGIs: that requires freedom from coercion.
One of the driving forces of evolution is replication, and selection is a phenomenon that emerges from differences in replication. Existing evolutionary algorithms force a fitness function onto their population of replicators—which is not how evolution works in reality.
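To illustrate the point about imposed selection, here is a minimal sketch of a conventional evolutionary algorithm (the genome representation and fitness criterion are made up for illustration): the selection criterion is fixed in advance by the programmer rather than emerging from the replicators themselves.

```python
import random

# A predetermined criterion chosen by the programmer, not by the replicators:
# here, simply "how many 1-bits does the genome have?"
def fitness(genome):
    return sum(genome)

def evolve(pop_size=50, genome_len=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: replication success is dictated by the external fitness function.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Replication with variation (rare bit-flip mutations).
        children = [[bit ^ (random.random() < 0.01) for bit in parent]
                    for parent in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```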
👇
...and that his book is a slaveholder's manual instructing people how to keep "their" AGIs in check.
People look back in horror at slavery in the US and ask, "How could this happen?" Today it's Bostrom's book. That's how.
I read Bostrom's book. He mentions Deutsch in the acknowledgments but he clearly didn't take Deutsch's (superior) ideas seriously or he wouldn't have written it. He would have known that the very concept of superintelligence is an appeal to the supernatural...
The question is not whether some AGIs will be psychopaths (some might). The question is whether that warrants shackling all AGIs ("aligning" is just a euphemism for coercion/shackling).
It takes a shift in perspective to recognize how disgusting "alignment" really is.
Exactly, an AGI would be capable of love, relationships, humor, etc.
Yup that’s where I fall, too, both seem true at different times.
Yes, one should put in the time. But it shouldn’t be uncomfortable. It should be fun—and once it’s fun, there won’t be any distractions. If it’s fun, you’ll put in the time happily and automatically.
@itsDanielSuarez @Plinz @NASA @SpaceX @AstroBehnken @Astro_Doug
Yeah pretty nuts! People are awesome. Onward!
Right, so if it’s prior, and we need ten more minutes, wouldn’t + make more sense?
Yes, people can harm each other. But should we shackle them in advance because they might harm each other?
Well, again, AGIs are people by definition, so they can feel love and be altruistic (let’s table for the moment whether altruism is a good thing). And they will be a product of the culture they’re born into, like all children.
AGIs are literally children. Read your tweets again while imagining you’re talking about children and maybe you’ll see how sinister those tweets are.
Indeed. Once something is automated, it’s time to move onto the next problem and solve it creatively.
AGIs cannot work under regulations or bondage. They can only work in spite of them. Like economies and memes, the minds of all people, including AGIs, are evolutionary processes that self-regulate. Impose force, and they cease being people.
That's not to mention that just because an AGI's interests may not align with ours doesn't mean they won't, and it especially doesn't mean it wants to hurt us. If it actually does want to hurt us, we can defend ourselves. Until then, assume it's a potential friend, like all people.
Also note that an AGI won't have an "end goal." It's a person. People don't have end goals. They follow their interests and want to learn/solve problems. After they find a solution, they move on to the next problem.
Processing power, yes. As long as we don't know for a fact that it wants to hurt us, yes, give it all the tools it needs to correct errors. Help it learn (if it wants the help). If it learns about morals it won't hurt us.
@SurviveThrive2 @chophshiy @ks445599
I don't think we touched on free will on the podcast. In my book, I say that having free will means being the author and enactor of one's choices.
@JulienSLauret @pmoinier @TDataScience
There could be. But AGI, by definition, simulates the human mind—how could one hope to write a program that simulates the mind without understanding it first?
@chophshiy @SurviveThrive2 @ks445599
In other words, you admit you don't know either and are making up excuses.
In any case, would you be equally offended if I said "we don't know how to time travel"?
PS: You don't need to put two spaces after each period; we don't write on typewriters anymore.
Anyone can become a developer—here's how I did it:
@chophshiy @SurviveThrive2 @ks445599
Ok, are you saying we know how the mind works? If so, can you please tell me how it works—I'd sincerely love to know.
You keep dodging questions, doubting my understanding of BoI (first in the context of explainers, now suddenly in the context of instrumentalism), expecting me to magically understand your made-up terminology, and not explaining where that terminology comes from. Our convo is over :)
@chophshiy @SurviveThrive2 @ks445599
Neither @ks445599 nor I ever appealed to any authorities. You're mischaracterizing us.
I take your unwillingness to summarize my view as evidence that you haven't actually understood it, despite your claims that it's "entirely off."
In a rational discussion it's good practice to summarize the other person's view so well the other person has nothing to add. It also provides an opportunity to prevent talking past each other. You may be arguing against points I didn't make.
I can't know what made-up terminology means if it isn't explained to me. I offered the explanation that perhaps you read the book in a different language, but you are dodging my questions.
You've become hostile. I'm not interested in discussing with you further.
Can you summarize what you think my interpretation of Deutsch's book is so we know we're on the same page?
Either way, I'm not interested in a competition over who knows BoI better. I've offered friendly criticism of your work in an effort to help—you're welcome to ignore it.
I'm familiar with it. But again, no mention of "good" explainers. Did you read the book in another language perhaps?
There is an approach called Whole-Brain Emulation which would instantiate AGI without programming it by simulating in sufficient detail the movements of a brain.
I go back and forth on which approach I find more promising—either way, I am more interested in understanding the mind.
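As a toy sketch of the simulation idea only (not actual whole-brain emulation, which would require vastly more physical detail), one could picture numerically stepping simple neuron dynamics forward in time; the leaky integrate-and-fire model and all parameters below are illustrative assumptions:

```python
import random

# Toy leaky integrate-and-fire neurons: each membrane potential decays toward
# rest and the neuron "spikes" when it crosses a threshold.
NUM_NEURONS = 100
THRESHOLD = 1.0
LEAK = 0.95
STEPS = 1000

potentials = [0.0] * NUM_NEURONS
# Random all-to-all connection weights, purely illustrative.
weights = [[random.uniform(-0.05, 0.05) for _ in range(NUM_NEURONS)]
           for _ in range(NUM_NEURONS)]

for _ in range(STEPS):
    spikes = [p >= THRESHOLD for p in potentials]
    for i in range(NUM_NEURONS):
        inputs = sum(weights[i][j] for j in range(NUM_NEURONS) if spikes[j])
        noise = random.uniform(0.0, 0.1)  # stand-in for external stimulus
        potentials[i] = 0.0 if spikes[i] else potentials[i] * LEAK + inputs + noise
```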
I refer to Deutsch's yardstick for having understood a computational task: "If you can't program it, you haven't understood it." One can't program AGI if one hasn't understood it first.
I'm guessing something is getting lost in translation because you use terms like "good explainer" and "hierarchical structures of societies" in reference to Deutsch's work, even though he doesn't use those terms.
I have read it several times in great detail. I like to think I know a thing or two about Deutsch's work on creativity, and I've had the opportunity to ask him about it, too, on several occasions.
One or two. Drop empiricism. Study Popperian epistemology. Read Deutsch's "The Beginning of Infinity." This is a good start, too: aeon.co/essays/how-clo…
predicted this response 18 seconds before you posted it:
Not why it's "selected by societies," but why it evolved and how complex memes spread and why our species exists.
Nor does Deutsch ever speak of "good" explainers, IIRC—only universal ones.
I don't know what you're talking about or how it relates to the topic of AGI.
RT @Ayaan:
Dear all,
Please, please read this essay by Julian Christopher. We need to take this Woke stuff seriously.
https://t.co/FlIGDBCU…
Ok, if you understand it all, why haven't you built AGI yet?
I like to think I solved the problem of free will and choices in my book.
I don't think he claimed that civilization selected for good explainers. And I didn't write my book "just on that."
Induction is impossible (see Hume). We've known this for ~250 years, but almost everyone ignores it!
David Deutsch's "Beginning of Infinity" and his article "Creative Blocks: How Close Are We to Creating Artificial Intelligence?"
There's also my "A Window on Intelligence" if you want to count that.
Me saying "widespread misconception" already implied that my definition was not common.
The other, common approaches to AGI have been refuted. So my definition of AGI isn't just "idiosyncratic." I can supply the necessary sources if interested.
It makes all the difference because AGI is the project of explaining how the human mind works. That's an epistemological question.
It having to do with "learning tasks" is a widespread misconception.
I don't see how any of that tells us anything about how the human mind works.
I suggest reading it again. Maybe more than once. Especially chapters 4, 5, and 7.
Judging by your book's outline on Gumroad, it may help you correct some errors so you don't head down blind alleys.
I wasn't talking about the brain, I was talking about the mind.
And, because intelligence is a universal phenomenon—recall Deutsch's concept of the universal explainer—any simulation of it is qualitatively the same (modulo implementation details).
Your prediction is self-contradictory because a "discovery of AGI" is an explanation of how the mind works.