Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I do, you can subscribe via RSS – without a Twitter account. Rationale
@RebelScience @bnielson01 @ks445599 @connectedregio1
Well, we also know from the universality of computation that we could build general intelligence on computers that don’t have any sensory inputs.
Either refute the universality of computation or stop invoking special-purpose brain regions and sensory inputs.
@RebelScience @bnielson01 @ks445599 @connectedregio1
No because we know from the universality of computation that we could build general intelligence on computers that don’t have visual cortices.
@RebelScience @ks445599 @connectedregio1 @bnielson01
Someone should write about why thinking about brain regions is pointless in this regard... Oh wait, I already did: amazon.com/dp/1734696109/…
@bnielson01 @connectedregio1 @ks445599
I suggest you move this to critapp if you want to have a more detailed discussion. It's near impossible to have long-form discussions on Twitter.
@bnielson01 @connectedregio1 @ks445599
We should take a step back. The underlying issue here is that you ignore my questions and that you are still an inductivist without realizing it. After ignoring a question you usually follow up with long, somewhat unrelated arguments.
@bnielson01 @connectedregio1 @ks445599
Again, why would there be nothing to learn?
@bnielson01 @connectedregio1 @ks445599
People don’t learn from their environment.
@bnielson01 @connectedregio1 @ks445599
In fact, one doesn't need access to any environment for learning to happen, let alone a random one. A brain in a vat can still create explanations.
Thinking that some environment is needed, or that it need contain regularities (non-randomness), is just a cousin of inductivism.
@bnielson01 @connectedregio1 @ks445599
Why would there be nothing to learn?
People may well guess the explanation "things happen randomly," and they'd be correct. By creating this explanation, they would have learned something.
And by creating mistaken explanations and then improving upon them, they'd also learn.
@bnielson01 @connectedregio1 @ks445599
Yeah, which makes me think the brain must be the one non-random thing in this scenario.
But no, I don’t think it would make every explanation a coincidence. That’d be judging explanations by likelihood, no?
Not all animals, but many conceivably have both. It’s harder not to evolve it. And our ancestors, which were animals themselves, must have had both for us to evolve intelligence.
Though I agree with the conclusion, many animals’ brains may be universal computers.
Universal computation is a necessary condition for intelligence, but not sufficient. What’s needed in addition is explanatory universality.
Why wouldn’t people be able to develop explanations in such an environment?
They would wonder about their environment, and try to explain it, no? And maybe they would conjecture that it’s random.
Assuming 4 through 7 are true, aren’t they part of 3?
And isn’t taking vitamin D better than direct sun exposure?
I agree.
Effective psychotherapy is a branch of software engineering. Present-day approaches happen on the level of the brain, not the mind, and, therefore, are ungainly at best, counterproductive at worst.
Developing effective psychotherapy is a big part of AGI studies.
My book "A Window on Intelligence" is now available worldwide, anywhere you can buy books.
Amazon: amazon.com/Window-Intelli…
Apple Books: books.apple.com/us/book/a-wind…
Barnes & Noble: barnesandnoble.com/w/a-window-on-…
Learn more: windowonintelligence.com
I have been looking forward to this day for almost a year.
Today, a SPECIAL episode for my listeners. Introducing: A Window on Intelligence - The Philosophy of People, Software, and Evolution - and Its Implications.
Be the first to listen to the excerpt: soundcloud.com/dchacke/13-int… https://t.co/eH6BbVsmHg
Since evolution is a gradual process, I guess there was never a single genetically unique organism that wasn't the last of its species (that even includes humans).
Entire species can be ~unique if their last common ancestor is sufficiently far back up the phylogenetic tree.
@dela3499 @KittJohnson_
Indeed. Likewise, the gaps in human thought are filled with viable ideas. Not necessarily viable vis-a-vis a problem, but vis-a-vis spreading through the population of a mind's ideas.
Otherwise, we need to explain evolution as anything other than gradual.
@dela3499 @KittJohnson_
Yes. And are there not dramatic genetic gaps between, say, a sea horse and an elephant?
@dela3499 @KittJohnson_
Perhaps :) Or, to make things easier, space shuttles. But maybe even those could be evolved biologically in principle.
Also, the reason human thought can escape parochialism is not that it can jump gaps - it's still gradual - it's that a human with bad ideas doesn't die.
@dela3499 @KittJohnson_
I guess that Lambda Calculus is evolvable, and that evolution stumbled upon it pretty soon, because it is easier to evolve than any non-universal counterparts.
I guess that is true of other universal models of computation, too.
Conversely, what isn't evolvable in principle?
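How little machinery universality requires can be sketched with Church numerals: a minimal illustration (using Python lambdas as stand-ins for lambda-calculus terms; the encodings below are standard, the example itself is the editor's, not from the thread) of how arithmetic falls out of pure function application in the lambda calculus.

```python
# Church numerals: represent n as "apply f, n times".
zero = lambda f: lambda x: x                      # λf.λx.x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # λn.λf.λx.f (n f x)
add  = lambda m: lambda n: m(succ)(n)             # λm.λn.m succ n
mul  = lambda m: lambda n: lambda f: m(n(f))      # λm.λn.λf.m (n f)

def to_int(n):
    """Decode a Church numeral by counting how often it applies f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(add(three)(three)))  # 6
print(to_int(mul(three)(three)))  # 9
```

That the whole system rests on nothing but abstraction and application is what makes the guess about its evolvability interesting: there are very few moving parts to stumble upon.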
Another example of buggy animal programming (one that many will, presumably, readily interpret as intelligence, despite being evidence of the opposite). twitter.com/jimcaris/statu…
@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Through a population of memes, yes.
Count me as one of those silly people who doesn't think animals can learn :) At least not in the sense of creating knowledge - animals that copy memes do so through imitation and don't seem to create knowledge in the process.
@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Not a population, but a kitten not grooming itself until exposed to a cat who does is an example where memes are a better explanation than genes. For, if the kitten had contained the knowledge of how to groom genetically, why didn't it groom? Maybe that needed to be "activated"?
@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Ah, I'm guessing that's what you meant by "universal" animal behavior: that which thousands of unconnected populations share.
For those, I agree that genes are the better explanations. Nonetheless, some animals do have memes.
Do you agree?
@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
It can. Memetic evolution may independently converge onto similar solutions, as evolutionary algorithms often do.
(It is for this same reason that a rabbit fossil in Cambrian rock would not refute the biological theory of evolution, but it applies equally to meme evolution.)
@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
"Memetic" means "of or related to memes." I don't know what you mean by universal animal behaviors, but yes, some animals have memes.
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Top-down causal chains are real, and they happen from meme to DNA, but reductionism denies that and says that causation can only travel bottom-up. So in a reductionist framework, it seems to make sense that genes have all the control. Which leads to many new problems :)
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
So then genes presumably have enough control to build in a fail-safe that controls human behavior to the degree that it isn't detrimental to the genes? And so then humans aren't universal explainers after all? No.
And DNA molecules are made of atoms, so do atoms control DNA?
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Ah, our old friend reductionism. :) Genes build the neuronal architectures that store memes, sure, but that doesn't seem to have much impact on the kinds of memes people can have: how do memes such as fasting, homosexuality, etc spread despite being the gene's worst nightmare?
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
And so we may consider the dysfunction or disappearance of the DNA molecules that used to encode those genes to be part of that meme’s phenotype.
2/2
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
I think so. I once heard that cats’ grooming behavior is memetic not genetic. Suppose it used to be genetic. After the meme of grooming spread through the population of cats, mutations of the corresponding genes occurred and now those genes are either dysfunctional or gone...
1/
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Yes, I'd consider those DNA molecules part of the phenotype of the meme of growing meat in a lab.
Not the genes themselves, though - they are abstractions, and phenotypes are about effects on the physical world.
@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf
Some memes are encoded in genes, e.g. the human meme of pointing in some dogs' genes (credit to David for telling me this). In such cases, the DNA molecules of such genes could be considered an extended phenotype of those memes, yes.
Is that the sort of thing you had in mind?
@ReachChristofer @DavidDeutschOxf
I’m guessing that’s how one could demote a person to an animal. (Horribly immoral to do so!)
If we were to delete all ideas from a mind, I’m not sure it’d be a mind anymore, but it may be more of a semantic issue at that point.
2/2
@ReachChristofer @DavidDeutschOxf
If we could somehow delete those ideas from a mind that replicate within it, then evolution in that mind would stop and it wouldn’t be creative anymore. It may still contain non-replicating ideas, though, and therefore be a non-creative space for ideas.
1/
@Giovanni_Lido @noa_lange @DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @JohnHMcWhorter
I can't decide if it's better or worse to only have one term for it. :) The German 𝑤𝑎ℎ𝑟𝑠𝑐ℎ𝑒𝑖𝑛𝑙𝑖𝑐ℎ is an interesting case: as an adjective, it means what you said. As an adverb, it seems to mean "probably," e.g. "wahrscheinlich richtig" means "probably true."
@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter
(using "likely" as a synonym of "probable" here)
@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter
I don't know Greek but the confusion seems to date back to misinterpretations of Xenophanes' usage of the word "eoikota."
3/3
@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter
The terms are opposites because the more like the truth a theory is, the more non-trivial, complex, and bold it is, and therefore less likely to be true.
2/
@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter
Epistemologically speaking, these sound off.
According to Popper's C&R, the confusion between "likely" and "like the truth" dates back to ancient Greek misinterpretations of Xenophanes, who used the latter meaning.
1/
@tjaulow
I haven't, but I guess behaviorism puts a cap on how much he can contribute to AGI research.
@PopperPlay
Evolutionary algorithms (genetic programming). We don't yet know how to build AGI with them, but they seem to be the only promising path.
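The variation-and-selection loop behind evolutionary algorithms can be sketched in a few lines (the target string, population size, and mutation rate here are illustrative choices, not anything from the thread):

```python
import random

def evolve(target, pop_size=50, mutation_rate=0.05, seed=0):
    """Evolve a string toward `target` by mutation and selection."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    # Start from a random population.
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    while True:
        # Selection: the fittest candidate seeds the next generation.
        best = max(pop, key=fitness)
        if best == target:
            return best
        # Variation: each child is a mutated copy of the best candidate.
        pop = [
            "".join(
                rng.choice(alphabet) if rng.random() < mutation_rate else c
                for c in best
            )
            for _ in range(pop_size)
        ]

print(evolve("agi"))
```

Genetic programming applies the same loop to programs rather than strings; the open problem is that nobody knows what fitness function, if any, selects for the creation of new knowledge.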
@PopperPlay
Glad you like it. Reach as in could potentially help build AGI, or as in learn about functional programming generally?
@joe_shipman @DavidDeutschOxf @ESYudkowsky
I recorded an impromptu episode of my podcast to go into more detail about this:
I recorded an impromptu episode about the question of "value alignment" in artificial intelligence research:
@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Not really, no. Here's a hard-to-vary explanation of explanation: youtube.com/watch?v=DHR6ro…
The mistake Popper identified as the who-should-rule-question permeates inquiries into AGI and is behind the pseudo problem of “value alignment.” An example of how critical philosophical knowledge is in this field and how urgently we need more of it. twitter.com/joe_shipman/st…
It’s undesirable because when taken seriously it leads to enslavement and murder of AGI.
2/2
One can’t know, nor does one need to. The very question seeks an authoritative answer (see the only superficially unrelated economist.com/democracy-in-a…; disable JavaScript to circumvent the paywall). And mistakes are what enables progress in the first place.
1/
@jchalupa_ @ReachChristofer @adilzeshan @RealtimeAI @dela3499 @ToKTeacher @Soph8B
Video game "AIs" work according to the same mechanisms as animals even when given only incomplete information about their environment. (In)complete access to a model of the environment is not the defining factor. The ability to create knowledge is.
@jchalupa_ @ReachChristofer @adilzeshan @RealtimeAI @dela3499 @ToKTeacher @Soph8B
It's possible, but not part of our best explanation of creativity. FWIW creativity does not only apply to humans but all people (humans, AGIs, intelligent ETs, etc).
1/
@jchalupa_ @ReachChristofer @adilzeshan @RealtimeAI @dela3499 @ToKTeacher @Soph8B
Moment-to-moment response to environmental factors != creativity.
Eg video game "AIs" respond to the player's movements and decisions moment to moment. Since we build them, we know they do not use creativity for that. It all follows predefined algorithms that have reach.
@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher
I'm happy to immediately drop the term "AGI" and replace it with whatever you find more appropriate as long as we know we are talking about the same thing.
@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher
I explain my reasoning for the sharp distinction between narrow AI and AGI and why progress in one is not progress in the other here: soundcloud.com/doexplain/2-wh…
@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher
Just because the vast majority of researchers use a word as it is commonly used does not mean they are successful.
I'm on board saying that narrow AI researchers achieve great successes in the domain of narrow AI. Sadly, that says nothing about their contribution to AGI (nil).
@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher
That’s the AI researchers’ confusion, not Deutsch’s. :)
They don’t understand universality. That’s why AI researchers will not make progress in AGI.
Entirely different field with a misleadingly similar name.
@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher
Not entirely sure what you mean by “AI is a superset of AGI” but in terms of problem solving it’s the opposite because an AGI could do everything all narrow AI programs could do (and then some).
AI is like all other programs: execution of predefined tasks, no creation of new knowledge to solve novel problems.
AGI = universal problem solver; creates knowledge. Opposite of not creating knowledge.
BTW, perfect value alignment is neither feasible nor desirable.
@IamtheWay13 @NathanPMYoung @SmashAGrape @DavidDeutschOxf @ToKTeacher @SamHarrisOrg
Yeah humans = AGIs (both are people) because of universality.
@RealtimeAI @pmathies @ReachChristofer @dela3499 @ToKTeacher @Soph8B
I don't understand the question. Please elaborate.
@RealtimeAI @pmathies @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Both frogs and dogs may well have universal computers for brains. So it's not the brains. It's the software installed on those brains that determines whether you can train that brain.
@ReachChristofer @EAMagnusson @RealtimeAI @dela3499 @ToKTeacher @Soph8B
Exactly. Fallibilism must not lead to paralysis during decision making. Otherwise we run the risk of turning fallibilism into a strange version of the precautionary principle.
We should act on our best explanations without hesitation. Anything else is like Pascal's wager.
@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
IIRC DNA is a general purpose storage medium, so yes, I think it could encode a nuclear spaceship. (That’s not to say such a thing would evolve biologically.)
@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
No. They’re on a molecular basis?
@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
I have been thinking about how cool it would be to build self-replicating machines. I don’t think it’s been done. Might tell us a thing or two about evolution.
@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Nuclear von Neumann probes? :)
@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Sure but none of the ideas built by creativity did. Those are created at runtime.
@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B
Examples of that: learning how to communicate with people, or building complex structures that are completely different from anything the species has built before, or every individual creative animal being entirely unique in character.
@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B
... and therefore would not have evolved biologically, and therefore could not be encoded in the animal's genes, and therefore must have been created by the animal itself, at runtime (i.e. during its lifetime).
3/
@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B
Here's what I'd consider evidence for creativity in an animal: knowledge for which there would have been no genetic precursors in its ancestors, or which would not have given the ancestors' genes a better ability to spread...
2/
@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B
Combination of inborn ideas about what things to avoid and shape recognition algorithm?
1/
@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Also note how the dog is not looking at the stick it's pulling out, or even at the tower, but at its owner. It has no idea what it's doing or why. It wants to please its owner.
2/2
@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
I don't think it guessed. It just had to update parameters and make associations. Dogs know from birth what sticks are, how to put things in their mouths, and how to seek reward and avoid punishment.
Put these things together and you get enough reach to play Jenga.
1/
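The kind of predefined parameter-updating described above can be sketched as a simple reward-driven rule (illustrative only, not a model of real dog cognition; the actions and reward are hypothetical) that strengthens whichever action earns praise, with the update rule itself fixed in advance and no new knowledge created:

```python
import random

def train(actions, reward_fn, rounds=500, lr=0.1, seed=0):
    """Inborn 'learning': try actions, nudge each weight toward its reward."""
    rng = random.Random(seed)
    weights = {a: 0.0 for a in actions}  # initially, no action is preferred
    for _ in range(rounds):
        action = rng.choice(actions)      # try something
        reward = reward_fn(action)        # e.g. praise from the owner
        weights[action] += lr * (reward - weights[action])
    return max(weights, key=weights.get)

# Suppose "pull loose stick" is the only action the owner rewards:
best = train(["bark", "sit", "pull loose stick"],
             lambda a: 1.0 if a == "pull loose stick" else 0.0)
print(best)
```

The point of the sketch is that the rewarded behavior wins out purely through parameter updates: the algorithm was there from the start, only its numbers changed.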
@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
I made critapp.com for that reason - I think it has better tools for discussion than Twitter. If you'd like to try it out, shoot an email to contact@critapp.com and I'll make an account for you. :)
@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
... and it makes sense for dogs to have genetically evolved such criteria for success because people have been selectively breeding them.
2/2
@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Ok, a few things (Twitter’s character limit is terrible):
Narrow AI can already do what the dog does there.
AGI can only be achieved in a jump, so we must skip to it somehow.
What the dog counts as success is genetically given (praise by owner)...
1/
@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
To be clear, you mean an AGI?
The dog trick is cool, but can be accommodated by a reinforcement algorithm with enough reach, which dogs seem to have. (Note how she praises the dog. Also note how the dog watches her closely, presumably for facial cues indicating success.)
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
While it's impressive that animals can do this, it does not require any creativity on their part - that's why I don't consider it learning.
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
Yes, though I'd be careful with the word "learn" there - the animal may have been running an inborn reinforcement "learning" algorithm, which, coupled with inborn shape recognition algorithms, updates parameters to categorize something as "not dangerous" after several interactions.
@univ_explainer @ReachChristofer @RealtimeAI @dela3499 @ToKTeacher @Soph8B
Why did they have almost no chance to explain anything?
In any case, note that "universal explainer" also signifies an ability, not a guarantee or even chance of success.
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
Yes, I don't disagree that the replication strategies of memes differ from ideas that never become memes. But... so what? :)
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
I claim that they do replicate within minds, just not necessarily across people :)
@univ_explainer @ReachChristofer @RealtimeAI @dela3499 @ToKTeacher @Soph8B
All babies make that jump to universality long before they learn to speak. This universality lies within people - it is not induced or granted by outside factors such as technology (besides, one needs creativity to make technology in the first place).
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
Eg "selecting" a preference between fight or flight can be done according to inborn algorithms that do not involve creativity (variation and selection in the evolutionary sense).
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
Ah, got it. I'll claim that's a stretch of the phrase "variation and selection" as it strays a bit from evolution because it doesn't refer to variation and selection of replicators.
@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B
Can you give an example of variation and selection in animal brains?
@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B
Why bring recursion into this?
@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Well, people are universal explainers, so even if other organisms have some limited creativity, that marks a pretty sharp distinction. They would all have an infinitesimal repertoire compared to people.
@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B
Could all of those things not be encoded genetically? At the risk of sounding like a broken record, the presence of knowledge, including that which changes behavior, is not evidence of creativity. That knowledge may have emerged from biological evolution.
@ReachChristofer @RealtimeAI @dela3499 @ToKTeacher @Soph8B
Yes, people do eventually die if they don't solve problems. But I don't think the absence of creative thought = death. Eg if you run on autopilot for a few minutes, that won't kill you.
Of course, the underlying message rings true: problem avoidance eventually kills people.