Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I do, you can subscribe via RSS – without a Twitter account. Rationale
Ah, yes, makes sense - criticizing a theory's structure based on some criterion is another thing we can do.
Not following exactly. What would be an example of a different but not opposing prediction?
Likewise, if I have a moral theory that "predicts" that one should lie, it will conflict with other moral theories.
So read "theory predicts" as "one can deduce from the theory".
Yeah, I don't mean just physical theories. If I have a mathematical theory of multiplication that "predicts" that 2*2=5, then it will conflict with the prevailing one.
Reviewing chapter 1 of BoI again. I can think of only one way to find a conflict between two theories: comparing their predictions and finding that they make at least one differing prediction where they should agree.
Are there other ways?
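To make the prediction-comparison idea concrete, here is a minimal sketch in Python (purely illustrative; none of these function names come from the thread) using the toy multiplication example from this conversation: two rival "theories" are queried on the same inputs, and any input on which they predict differently marks a conflict.

def prevailing_theory(a, b):
    # The prevailing theory of multiplication.
    return a * b

def rival_theory(a, b):
    # A rival theory that "predicts" 2*2=5.
    return 5 if (a, b) == (2, 2) else a * b

def find_conflicts(theory_a, theory_b, inputs):
    # Return every input on which the two theories make differing predictions.
    return [x for x in inputs if theory_a(*x) != theory_b(*x)]

inputs = [(a, b) for a in range(5) for b in range(5)]
print(find_conflicts(prevailing_theory, rival_theory, inputs))  # => [(2, 2)]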
@dela3499 @mizroba @DavidDeutschOxf
Don’t remember much about him. His common sense definition of truth sounds right to me.
@RealtimeAI
What do you mean by “in reverse”? Don’t things in dreams still happen in order?
I found Popper’s own writings about Tarski accessible. In his autobiography IIRC. Unless you’ve already read that. :)
@DavidDeutschOxf @RealtimeAI
What I mean is, if everything in the universe can be simulated with arbitrary precision on a Turing machine, yet the Turing machine's architecture is nothing like the simulated object's, then nothing depends on architecture to work, does it?
@DavidDeutschOxf @RealtimeAI
Is there anything at all that depends on architecture to work?
RT @DavidDeutschOxf:
@RealtimeAI
Because of computational universality, we can't possibly have cognitive-architecture-dependent qualia.
RT @DavidDeutschOxf:
@RealtimeAI
Indeed, before modern science, prevailing ideas interpreting qualia in terms of cognitive architecture wer…
@pmathies @RealtimeAI @DavidDeutschOxf
Why does doing math require Turing completeness? And why does having emotions not require it?
@pmathies @RealtimeAI @DavidDeutschOxf
May be an indicator of software similarities though.
@pmathies @RealtimeAI @DavidDeutschOxf
I also tend to think emotions are something that humans and (some) animals share. And they seem to have survival value (eg being scared of a predator).
My laptop could simulate all of them but is as different from my brain as it gets, so no hardware similarities needed.
@pmathies @RealtimeAI @DavidDeutschOxf
I don’t know how we could find out. Maybe neuroscientists already know the answer, idk. Ability to do math is irrelevant. A Turing complete system on its own can’t do math either. It needs the knowledge of how to do certain kinds of math, e.g. addition.
@RealtimeAI @DavidDeutschOxf
Got it. What are some software similarities between people and other animals then?
@RealtimeAI @DavidDeutschOxf
Hardware may not matter so much. Eg a cat’s brain may be a universal Turing machine for all we know; I guess the deciding factor is having the requisite knowledge inside that brain.
RT @DavidDeutschOxf:
@hunchandblunder
I guess qualia, moral agency, freewill, consciousness—are all aspects of explanatory universality. Bu…
Indeed. And then there’s Daniel Plainview in “There Will Be Blood”. Every time I watch that movie I feel a little bit of that competitiveness Daniel feels.
That’s very interesting.
Another character study came to mind: Anton Chigurh in “No Country For Old Men”.
@ToKTeacher @n_iccolo @reasonisfun @Hugoisms
Yes! Problems are soluble 🚀🚀
Anybody else like character studies as much as I do? E.g. “The Perfume”, “Lolita”, etc.
What do you like about them?
@ToKTeacher @reasonisfun @Hugoisms
You mean the abortions that have happened so far?
@ToKTeacher @reasonisfun @Hugoisms
If creativity is part gene part meme then no one is a person until after birth.
My opinion is that generally speaking it’s bad. Open to finding out about reasons it would be okay but I won’t lie; it’ll be a tough sell.
“Human” in this sense meaning non-creative, “person” meaning creative.
Is there a difference between killing a human and killing a person?
Is a problem always a conflict between two explanations? Can it also be the lack of an explanation? Or is that somehow reducible to a conflict between two explanations?
“Peter the Popperian”! Best thing I’ve heard in a while. As Peter hehehehehehe friggin sweet guys!
For an example of this using multiplication, see soundcloud.com/dchacke/artifi… (disclaimer: I’m the author).
@bnielson01 @JSB_1685 @dela3499 @DavidDeutschOxf @EvanOLeary
He claims he came up with it 25 years ago... talk is cheap. Show us the code.
@JSB_1685 @dela3499 @DavidDeutschOxf @EvanOLeary
Where can we find and how can we run this thing you built?
Thanks @dela3499 for sharing this with me. It's good to see researchers focusing on open-ended problem solving.
I agree that qualia are mysterious but that’s a problem of understanding, not a problem of qualia. Once we have a good explanation we shall understand them. Problems are soluble :)
Yes, good point. Perhaps what we are left with is that a universal explainer has the capacity to experience qualia, but doesn’t necessarily do so? Eg feeling hungry is a kind of knowledge?
If so, does that mean non-explainers cannot experience qualia?
Yup agreed. I’m after explanations at this point :)
Sounds like some neuroscientists are finally starting to see their own reductionism.
I personally stay away from qualia for the most part because they are utterly mysterious. If a universal explainer automatically has them, great; if not, I don’t really care all that much. May even make things easier if it helps avoid moral issues.
Depending on one’s definition of AGI, a universal explainer without qualia could be considered a little less than an AGI.
Lack of input/output devices doesn’t necessarily suggest lack of qualia. See aeon.co/essays/how-clo…
I agree however that even without qualia a universal explainer is genuinely universal in its capacity to explain.
Yeah, I still go back and forth on whether a universal explainer would automatically have qualia. I agree that it’s not obvious.
Episode 7 of the podcast on artificial creativity is out; as always, heavily inspired by @DavidDeutschOxf. This time, it's a Q&A episode.
@thethinkersmith @ToKTeacher @Crit_Rat @SamHarrisOrg
Neuroscience would need to violate computational universality in order to contribute.
Only half of all households in 1990 had a stove? How did people cook?
@tjaulow
The only thing universality suggests is a shared repertoire.
@thethinkersmith @ToKTeacher @Crit_Rat @SamHarrisOrg
That could all be genetic or memetic.
@thethinkersmith @ToKTeacher @Crit_Rat @SamHarrisOrg
I’m curious. How do primates demonstrate creativity?
@tjaulow
I think I understand what you mean, but I don’t think universality on its own suggests that.
It’s just a turn of phrase. It means that something that is considered an exception doesn’t break the rule precisely because it is an exception, as opposed to something that was not expected to occur.
For example, I could learn how to be attracted to men, if I chose to. This may be hard but can't be impossible, since women have that knowledge somehow.
Yeah, this helps. I think I was simply wrong about what heterosexuality implies. It means that there is different knowledge in men and women; but (given the right technology etc) nothing prevents either from learning what the other has.
While there are differences between kinds of knowledge any two people hold - e.g. you know something I don't - nothing forbids my creating that missing knowledge for myself. The universality here lies in the ability to create knowledge, not differences in existing knowledge.
But while there are no differences between people as explainers, there are undeniable differences between people when it comes to gender.
x is a universal y if it can do all the z's all the other y's can do.
People are universal explainers. Any given person can in principle explain anything any other person could explain. That means in their ability to create knowledge, all people are literally the same.
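As a purely illustrative sketch (the names below are made up, not from the thread), the schema "x is a universal y if it can do all the z's all the other y's can do" can be modeled in Python by treating each y as its repertoire of z's and checking that the candidate's repertoire covers everyone else's:

def is_universal(candidate, others):
    # True if the candidate's repertoire covers every task any other y can do.
    all_tasks = set().union(*others)
    return all_tasks <= candidate

explainers = [
    {"explain gravity", "explain primes"},
    {"explain primes", "explain evolution"},
]
candidate = {"explain gravity", "explain primes", "explain evolution"}
print(is_universal(candidate, explainers))  # => True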
Makes sense. I also just remembered David saying somewhere that people have an inborn fear of heights which they can exploit for fun (eg parachuting). But could they get rid of it?
How do we square gender specific preferences with universality? For example, most men are attracted to women.
Presumably, genes can create preferences, interests, etc in people, but can be overwritten by the mind?
- Knowledge is information that is adapted to a purpose.
- Knowledge is information with causal power.
- Knowledge is information that solves a problem.
Those are the three relationships that come to my mind.
RT @DavidDeutschOxf:
@notsurethomas @lynz_h55 @TheHalcyonSavan @PSTaylor13
Explanations never explain why they themselves are true. That wo…
First time I've seen it, but the description sounds promising.
@RealtimeAI @chrisalbon
Hahahaha omg that sounds so disgusting! 😂
Join me for a talk on artificial creativity this Sunday at 8 at Rainbow Mansion in Cupertino, CA:
The chicken's name is Ernie! I had no idea...
He's too invested now. He would probably, like most people, think it a public failure to change his mind about superintelligence.
Deeper reason: pessimism.
Haven’t read his books yet, but some of his papers I’ve read are among the worst “contributions” to the field. Complete nonsense. Will likely never come around.
@zombieinjeans
Don't know.
FWIW, definitions of knowledge I also like include "explanation" and "information that is adapted to a purpose", the latter of which includes being hard to vary.
And yes it has causal power and tends to remain physically instantiated.
Google the term "knowledge" and you'll get:
"facts, information, and skills acquired by a person through experience or education [...]"
😭😭😭
I was worried you’d say that. Will skip this one, thanks :)
@zombieinjeans @DavidDeutschOxf
Blind evolution often finds solutions in the biosphere. But knowledge in humans has the advantage that not every single explanation along the way has to work, so it has more flexibility.
Genetic pluralism may be part of the answer, too.
It doesn’t prove general relativity. Epistemological blunder.
RT @DavidDeutschOxf:
Agreed. (Disturb.) twitter.com/ZachG932/statu…
How was “A New Kind of Science”? It was recommended to me by an AI person, but I haven’t gotten around to it yet.
@zombieinjeans @DavidDeutschOxf
I go back and forth on this. On the lowest level there seems to be blind evolution happening in our minds. In hindsight it looks goal-oriented and purposeful, not random. So does biological evolution, though. But yes, things we have learned in the past can help with a new problem.
Got it - are you referring to the part where I invoke the hidden target function to gather return values and then reconstruct the function from those values as “explaining data”?
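For readers without the podcast context, here is a minimal sketch (hypothetical, assuming a simple numeric target) of what "gathering return values from a hidden target function and reconstructing the function from those values" could look like: sample the hidden function, then search candidate functions for one that reproduces all the sampled data.

hidden_target = lambda x: 2 * x + 1  # unknown to the search below

# Gather return values from the hidden target function.
samples = [(x, hidden_target(x)) for x in range(-3, 4)]

# Candidate "explanations" of the data: pairs (a, b) standing for a*x + b.
candidates = [(a, b) for a in range(-5, 6) for b in range(-5, 6)]

def explains(candidate, data):
    a, b = candidate
    return all(a * x + b == y for x, y in data)

replicas = [c for c in candidates if explains(c, samples)]
print(replicas)  # => [(2, 1)], a replica of the hidden target function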
Got it. I think I understand what you mean by “objective”. And what do you mean by “criterion”?
What do you mean by “mismatch between criterion and current objective”? A mismatch between target function and its replica?
Yes, I think of it this way as well.
@ChristopherCode @DavidDeutschOxf
I know next to nothing about quantum operations, but the laws of epistemology should apply to function implementations on quantum computers as well.
@DavidDeutschOxf would know better.
How do they do this? Do they not implement a function of their own to imitate the other function?
It should apply to all explanations.
I’m not sure yet how to represent more complex functions such as human speech, let alone philosophical theories. But given lambda calculus’ universality, it must be possible.
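To make "representing a function" concrete, here is a toy sketch (illustrative only, written in Python syntax) of lambda calculus' universality: Church-encoded numerals and multiplication built from nothing but lambdas.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))

def to_int(church):
    # Convert a Church numeral to a Python int for inspection.
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(mult(two)(three)))  # => 6

Since lambda calculus is Turing complete, any computable function can in principle be encoded this way; the open question in the tweet is only how unwieldy such encodings get for things like speech.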
FWIW functions without parameters are also explanations.