Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I do, you can subscribe via RSS – without a Twitter account. Rationale
RT @CodeWisdom:
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” - Martin…
I’m going to go on the record as saying 1) they won’t mention Popper or reference/use his work anywhere (mistake) and 2) therefore their work will at first be overhyped and then disappointing.
But I really want to be wrong on either or both of those.
RT @DavidDeutschOxf:
AI is the opposite of AGI.
Trying to shackle an AGI's thinking is slavery.
Explained in my essay "Beyond Reward and…
One beautiful ramification of this (I think) is that an AGI would work just fine without any input or output channels. Those aren’t part of the required hardware.
@lynz_h55 @davidarredondo @ashik_shanks
If you’re getting at the problem of sources, those don’t matter. Only content matters. If you have a great insight in a dream, it doesn’t make sense to discount that insight. The real source is always the same anyway: your mind.
@lynz_h55 @davidarredondo @ashik_shanks
You’re saying you had the dream so yes, the dream is real no matter its contents. I guess you’re really asking whether the contents are real. They’re not real as in “out there in the physical world”. But they’re real as in “abstractions in your mind”.
Also remember that if you enter California you’re entering everyone California has ever been with and that’s a LOT of people.
@lynz_h55 @davidarredondo @ashik_shanks
Whatever doesn’t figure in our best explanations. Eg god, magic, etc.
To be clear, just because both are real doesn’t necessarily mean they interact; but they do. Eg software affects the physical world.
Causality = (tentatively held, conjectured) explanation
According to Deutsch, something is real if it figures in our best explanations of something, see “The Beginning of Infinity”. That’s his criterion of reality.
If you insist, however, IIRC Popper took Tarski’s definition of truth (= correspondence to facts) and amended it a little by saying that whatever is part of a true theory should be considered real. Would need to check the source though, probably also somewhere in C&R.
Instead of looking for definitions I’d go with the common sense definition of reality that everyone knows.
The latter one is shorter and better (imo).
I don’t have it in front of me right now, but I think in “Conjectures and Refutations”. You can also just read the chapter “The Reality of Abstractions” in David Deutsch’s “The Beginning of Infinity”.
Ah, more Chomsky nonsense. No doubt they do interact, and Popper explained how.
AirPods could use the head’s heat as an energy source? Something for @Apple to think about.
RT @PessimistsArc:
TWITTER MICROFILM 🔎 📰
Read 1896 article on physicians blaming bicycle for lunacy below: https://t.co/XxT8whG5t5
We don’t die from that. It’s narrow AI; if it ever got dangerous (and that’s a big “if”), we could think of something it can’t yet do to overwhelm and disarm it. It’s unclear how you get from bluffing at poker to death so quickly.
Agree with 1). I’d grant more progress re 2): what we now consider “common sense” (no slavery, suffrage, universal human rights, etc) used to be very controversial. I agree however that 1) has progressed much faster than 2)!
Did you like the movie? I’ve been considering watching it.
RT @ToKTeacher:
@Azaeres
Long before the most well subscribed ideas proved themselves useful, they must first have been created in the mind…
I had skimmed it when I asked. Unless there’s some nifty CS thing that tells us every algorithm can be written using recursion, I don’t see recursion being as fundamental as you’re suggesting.
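A minimal Clojure sketch of that idea: the same summing algorithm written once with an explicit loop and once as a plain recursive function. sum-loop and sum-rec are hypothetical names for illustration, not anything from the original thread.

```clojure
;; Illustrative sketch: one algorithm, two expressions of it.

(defn sum-loop
  "Sums a collection iteratively using loop/recur."
  [coll]
  (loop [acc 0
         xs  coll]
    (if (empty? xs)
      acc
      (recur (+ acc (first xs)) (rest xs)))))

(defn sum-rec
  "Sums a collection by direct recursion."
  [coll]
  (if (empty? coll)
    0
    (+ (first coll) (sum-rec (rest coll)))))

(comment
  (sum-loop [1 2 3 4]) ;; => 10
  (sum-rec  [1 2 3 4]) ;; => 10
  )
```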
It’s not AI. It’s a piece of glass with all its knowledge instantiated. No knowledge creation at runtime. No software, even. Continue bastardizing the term “AI” until it means nothing at all.
What do you mean by “recursion epistemology”? And what by “coded into action”?
My hope is that it’s 100% genetic so that one day we can in principle read its implementation. But can’t say for sure. I think David thinks it’s part gene, part meme.
I suggest that on the day the first AGI is born, we consider Popper its grandfather.
@ChristopherCode @DavidDeutschOxf
Oh bummer, Twitter needs syntax highlighting :)
@ChristopherCode @DavidDeutschOxf
FWIW I consider a fn as mundane as #(println "Hello, " %) to be emergent in the sense that we can explain its behavior well in terms of println without reading println's implementation. I.e. functions may not need to be passed fn parameters explicitly to be emergent.
@ChristopherCode @DavidDeutschOxf
I think so. At least, any Turing-complete language that doesn't have explicit support for higher-order functions will offer some way of modeling them anyway; see e.g. the creative ways people came up with to express higher-order functions in older versions of Java.
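A minimal Clojure sketch of that kind of workaround, assuming the "function" is passed around as plain data and interpreted by an explicit dispatcher, roughly what pre-lambda Java did with single-method interfaces. apply-op and map-op are hypothetical names for illustration.

```clojure
;; Illustrative sketch: the operation travels as data (a keyword), not as a
;; first-class function, and is interpreted by an explicit dispatcher.

(defn apply-op
  "Interprets a data-encoded operation; stands in for passing a function."
  [op x]
  (case op
    :double (* 2 x)
    :square (* x x)))

(defn map-op
  "A 'higher-order' map whose operation is passed as data rather than as a fn."
  [op coll]
  (if (empty? coll)
    '()
    (cons (apply-op op (first coll))
          (map-op op (rest coll)))))

(comment
  (map-op :double [1 2 3]) ;; => (2 4 6)
  (map-op :square [1 2 3]) ;; => (1 4 9)
  )
```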
Just released episode 8 of the podcast on artificial creativity, again heavily inspired by @DavidDeutschOxf. This one is about the central role of problems in the creative process.
Ah, yes, makes sense - criticizing a theory's structure based on some criterion is another thing we can do.
Not following exactly. What would be an example of a different but not opposing prediction?
Likewise, if I have a moral theory that "predicts" that one should lie, it will conflict with other moral theories.
So read "theory predicts" as "one can deduce from the theory".
Yeah, I don't mean just physical theories. If I have a mathematical theory of multiplication that "predicts" that 2*2=5, then it will conflict with the prevailing one.
Reviewing chapter 1 of BoI again. I can think of only one way to find a conflict between two theories: comparing their predictions and finding that they make at least one differing prediction where they should agree.
Are there other ways?
@dela3499 @mizroba @DavidDeutschOxf
Don’t remember much about him. His common sense definition of truth sounds right to me.
@RealtimeAI
What do you mean by “in reverse”? Don’t things in dreams still happen in order?
I found Popper’s own writings about Tarski accessible. In his autobiography IIRC. Unless you’ve already read that. :)
@DavidDeutschOxf @RealtimeAI
What I mean is, if everything in the universe can be simulated with arbitrary precision on a Turing machine, yet the Turing machine's architecture is nothing like the simulated object's, then nothing depends on architecture to work, does it?
@DavidDeutschOxf @RealtimeAI
Is there anything at all that depends on architecture to work?
RT @DavidDeutschOxf:
@RealtimeAI
Because of computational universality, we can't possibly have cognitive-architecture-dependent qualia.
RT @DavidDeutschOxf:
@RealtimeAI
Indeed, before modern science, prevailing ideas interpreting qualia in terms of cognitive architecture wer…
@pmathies @RealtimeAI @DavidDeutschOxf
Why does doing math require Turing completeness? And why does having emotions not require it?
@pmathies @RealtimeAI @DavidDeutschOxf
May be an indicator of software similarities though.
@pmathies @RealtimeAI @DavidDeutschOxf
I also tend to think emotions are something that humans and (some) animals share. And they seem to have survival value (eg being scared of a predator).
My laptop could simulate all of them but is as different from my brain as it gets, so no hardware similarities needed.
@pmathies @RealtimeAI @DavidDeutschOxf
I don’t know how we could find out. Maybe neuroscientists already know the answer, idk. Ability to do math is irrelevant. A Turing complete system on its own can’t do math either. It needs the knowledge of how to do certain kinds of math (e.g. addition).
@RealtimeAI @DavidDeutschOxf
Got it. What are some software similarities between people and other animals then?
@RealtimeAI @DavidDeutschOxf
Hardware may not matter so much. Eg a cat’s brain may be a universal Turing machine for all we know; I guess the deciding factor is having the requisite knowledge inside that brain.
RT @DavidDeutschOxf:
@hunchandblunder
I guess qualia, moral agency, freewill, consciousness—are all aspects of explanatory universality. Bu…
Indeed. And then there’s Daniel Plainview in “There Will Be Blood”. Every time I watch that movie I feel a little bit of that competitiveness Daniel feels.
That’s very interesting.
Another character study came to mind: Anton Chigurh in “No Country For Old Men”.
@ToKTeacher @n_iccolo @reasonisfun @Hugoisms
Yes! Problems are soluble 🚀🚀
Anybody else like character studies as much as I do? E.g. “The Perfume”, “Lolita”, etc.
What do you like about them?
@ToKTeacher @reasonisfun @Hugoisms
You mean the abortions that have happened so far?
@ToKTeacher @reasonisfun @Hugoisms
If creativity is part gene part meme then no one is a person until after birth.
My opinion is that generally speaking it’s bad. Open to finding out about reasons it would be okay but I won’t lie; it’ll be a tough sell.
“Human” in this sense meaning non-creative, “person” meaning creative.
Is there a difference between killing a human and killing a person?
Is a problem always a conflict between two explanations? Can it also be the lack of an explanation? Or is that somehow reducible to a conflict between two explanations?
“Peter the Popperian”! Best thing I’ve heard in a while. As Peter hehehehehehe friggin sweet guys!
For an example of this using multiplication, see soundcloud.com/dchacke/artifi… (disclaimer: I’m the author).
@bnielson01 @JSB_1685 @dela3499 @DavidDeutschOxf @EvanOLeary
He claims he came up with it 25 years ago... talk is cheap. Show us the code.
@JSB_1685 @dela3499 @DavidDeutschOxf @EvanOLeary
Where can we find and how can we run this thing you built?
Thanks @dela3499 for sharing this with me. It's good to see researchers focusing on open ended problem solving.
I agree that qualia are mysterious but that’s a problem of understanding, not a problem of qualia. Once we have a good explanation we shall understand them. Problems are soluble :)
Yes, good point. Perhaps what we are left with is that a universal explainer has the capacity to experience qualia, but doesn’t necessarily do so? Eg feeling hungry is a kind of knowledge?
If so, does that mean non-explainers cannot experience qualia?
Yup agreed. I’m after explanations at this point :)
Sounds like some neuroscientists are finally starting to see their own reductionism.
I personally stay away from qualia for the most part because they are utterly mysterious. If a universal explainer automatically has them, great; if not, I don’t really care all that much. May even make things easier if it helps avoid moral issues.
Depending on one’s definition of AGI, a universal explainer without qualia could be considered a little less than an AGI.
Lack of input/output devices doesn’t necessarily suggest lack of qualia. See aeon.co/essays/how-clo…
I agree however that even without qualia a universal explainer is genuinely universal in its capacity to explain.
Yeah, I still go back and forth on whether a universal explainer would automatically have qualia. I agree that it’s not obvious.
Episode 7 of the podcast on artificial creativity is out; as always, heavily inspired by @DavidDeutschOxf. This time, it's a Q&A episode.
@thethinkersmith @ToKTeacher @Crit_Rat @SamHarrisOrg
Neuroscience would need to violate computational universality in order to contribute.
Only half of all households in 1990 had a stove? How did people cook?
@tjaulow
The only thing universality suggests is a shared repertoire.
@thethinkersmith @ToKTeacher @Crit_Rat @SamHarrisOrg
That could all be genetic or memetic.
@thethinkersmith @ToKTeacher @Crit_Rat @SamHarrisOrg
I’m curious. How do primates demonstrate creativity?
@tjaulow
I think I understand what you mean, but I don’t think universality on its own suggests that.
It’s just a turn of phrase. It means that something that is considered an exception doesn’t break the rule precisely because it is an exception, as opposed to something that was not expected to occur.
For example, I could learn how to be attracted to men, if I chose to. This may be hard but can't be impossible, since women have that knowledge somehow.
Yeah, this helps. I think I was simply wrong about what heterosexuality implies. It means that there is different knowledge in men and women; but (given the right technology etc) nothing prevents either from learning what the other has.
While there are differences between kinds of knowledge any two people hold - e.g. you know something I don't - nothing forbids my creating that missing knowledge for myself. The universality here lies in the ability to create knowledge, not differences in existing knowledge.
But while there are no differences between people as explainers, there are undeniable differences between people when it comes to gender.
x is a universal y if it can do all the z's all the other y's can do.
People are universal explainers. Any given person can in principle explain anything any other person could explain. That means in their ability to create knowledge, all people are literally the same.
Makes sense. I also just remembered David saying somewhere that people have an inborn fear of heights which they can exploit for fun (eg parachuting). But could they get rid of it?
How do we square gender specific preferences with universality? For example, most men are attracted to women.
Presumably, genes can create preferences, interests, etc. in people, but these can be overwritten by the mind?
- Knowledge is information that is adapted to a purpose.
- Knowledge is information with causal power.
- Knowledge is information that solves a problem.
Those are the three relationships that come to my mind.
RT @DavidDeutschOxf:
@notsurethomas @lynz_h55 @TheHalcyonSavan @PSTaylor13
Explanations never explain why they themselves are true. That wo…