Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Tweets

An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.

But in case I do, you can subscribe via RSS – without a Twitter account. Rationale

It's as if Popper had never lived. :(

@dchackethal · · Show · Open on Twitter

The video is a good example of how useless neuroscience is in this regard and how it still spreads Lamarckism even today, 160 years after it was refuted by Darwin. (She claims it was the invention of cooking that made us intelligent.)

@dchackethal · · Show · Open on Twitter

youtube.com/watch?v=7XH1…

"What is so special about the human brain?" Its software. But she doesn't talk about software once. Why not?

@dchackethal · · Show · Open on Twitter

@SmashAGrape @ReachChristofer

For creativity and consciousness, check out chapter 5. For effective psychotherapy, check out chapter 9. I recommend reading them in that order.

@dchackethal · · Show · Open on Twitter

@SmashAGrape @ReachChristofer

I didn't say that.

@dchackethal · · Show · Open on Twitter

@fbulhosen

Should already be, on Amazon.

@dchackethal · · Show · Open on Twitter

It's #corona quarantine time. You're at home, bored. Why not curl up with a new book?

𝘼 𝙒𝙞𝙣𝙙𝙤𝙬 𝙤𝙣 𝙄𝙣𝙩𝙚𝙡𝙡𝙞𝙜𝙚𝙣𝙘𝙚 is out now. It is your field guide to the exciting world of your mind.

Order right now:
amazon.com/Window-Intelli… https://t.co/Y0TIFywTyW

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @bnielson01 @RebelScience @connectedregio1 @EnricGuinovart @Built2T @ks445599

In any case, the disagreements between the various participants in this thread rest on epistemological disagreements. We'll keep butting heads if we continue discussing concrete, surface-level issues. It'd be more productive to discuss epistemology instead. Everything else follows.

@dchackethal · · Show · Open on Twitter

@SurviveThrive2 @bnielson01 @RebelScience @connectedregio1 @EnricGuinovart @Built2T @ks445599

That sounds like a mechanistic way to solve problems that is guaranteed (or at least likely) to succeed.

There can be no such thing; and it wouldn't be intelligence, either. Intelligence involves luck. Sometimes you find a solution to a problem; sometimes you don't.

@dchackethal · · Show · Open on Twitter

@theMMPodcast

I'd like to advertise on your podcast. How do I go about that?

@dchackethal · · Show · Open on Twitter

@SmashAGrape @ReachChristofer

Why would you have to include the physical/biochemical behavior of the brain?

@dchackethal · · Show · Open on Twitter

@connectedregio1 @ks445599 @RebelScience @bnielson01

I want to understand how the mind works and then recreate it as a computer program.

@dchackethal · · Show · Open on Twitter

@connectedregio1 @RebelScience @ks445599 @bnielson01

I think trying to emulate the hardware is a waste of time. We’re trying to simulate the software, and that software can run on computers that are not the brain. So what could the brain possibly tell us? It’s like studying computer hardware to understand web browsers.

@dchackethal · · Show · Open on Twitter

@RebelScience @ks445599 @bnielson01 @connectedregio1

Intelligence is a universal ability. Therefore, AGI and "human-level" AGI are the same. Both are people.

@dchackethal · · Show · Open on Twitter

@RebelScience @ks445599 @bnielson01 @connectedregio1

We don't need any hardware at the moment. We first need an explanation of how intelligence works. And then we can draw conclusions about its performance characteristics.

Worrying about scaling hardware is premature at this point.

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

Are you suggesting these things are the same thing? They aren't.

@dchackethal · · Show · Open on Twitter

@SmashAGrape @ReachChristofer

Yes, it's possible to separate them. Software is substrate independent. I wouldn't really bother with the brain at all.

@dchackethal · · Show · Open on Twitter

@DKedmey @Numenta

Excellent. I comment on "thousand brains theory" and HTM in the book, check out chapter 7, section "Neuroscience."

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

And some program in your brain instructed you to say "an apple fell from the tree" just then, no? On the appropriate level of emergence?

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

It seems to me you're still assuming the program would simulate some reductive state of that process. It doesn't need to. It can reflect the same level of emergence as "an apple fell from the tree." We know this from computational universality.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

It's hard to answer this many questions on Twitter. I recommend moving this to critapp if you want to go deeper.

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

No. Computing is about instantiating abstractions and their relationships through physical objects and their motion.

@dchackethal · · Show · Open on Twitter

@RebelScience @ks445599 @bnielson01 @connectedregio1

It follows from computational universality that no such reinvention is needed. Our computers can already simulate intelligence.

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

Who cares? You won't win a factual argument by trying to establish status or credibility, or by impressing others with your achievements.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

That being said, could we use Turing machines instead of Lambda Calculus? Of course, viz. universality of computation. But it wouldn't be as illustrative.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

The universality of computation in itself doesn't advocate for or against any particular level of emergence. It works on any level of emergence, and we can choose the one we find informative.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

Lambda calculus is not a description of the mind, and I don't use it as such.

But, as to why I like Lambda Calculus to bridge the illusory gap between philosophy and software engineering: because each part of a function maps exactly onto how explanations work.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

Either would work; in the former case, you do it yourself through thought, in the latter, someone else through writing code.

@dchackethal · · Show · Open on Twitter

@RebelScience @ks445599 @connectedregio1 @bnielson01

Once again you ignored my q :) Can you slow down pls?

Here are examples that would change my mind. A good formulation of a principle of computation showing (sensory) inputs are necessary for computation. Or, more generally, a refutation of the universality of computation.

@dchackethal · · Show · Open on Twitter

@RebelScience @ks445599 @connectedregio1 @bnielson01

You keep saying that, despite evidence and good explanations that this isn't the case.

What could someone possibly say that would change your mind about this? In a rational discussion, one should be prepared to answer this question. (It's not a rhetorical q, I'd like to know.)

@dchackethal · · Show · Open on Twitter

@RebelScience @connectedregio1 @bnielson01 @ks445599

So in other words, you want to keep ignoring my questions, and you want to insist that computation is impossible without inputs, even though I showed you a routine example above that does just that?

If you want to make progress here, it's high time to change your mind.

@dchackethal · · Show · Open on Twitter

@RebelScience @connectedregio1 @bnielson01 @ks445599

Can you answer my previous question?

And, if you are right, can you please explain how the function I posted above performs the requisite computation even though it doesn't have any inputs? Notice how the brackets behind "foo" are empty; that's where inputs would go.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

Agreed. And software engineering can help improve both, including mental ailments that are the result of bugs in the "user space."

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

What difference does it make whether and how people discuss these things? :)

In any case, I recommend chapters 3 and 6. Especially in the latter one I address criticisms similar to the ones you propose.

@dchackethal · · Show · Open on Twitter

@RebelScience @connectedregio1 @bnielson01 @ks445599

In other words, you're saying that nobody could possibly come up with an argument for why you should change your mind about this?

@dchackethal · · Show · Open on Twitter

@connectedregio1 @RebelScience @bnielson01 @ks445599

(and the vast majority of our computers are; our laptops, smart phones, desktop computers, etc.)

@dchackethal · · Show · Open on Twitter

@connectedregio1 @RebelScience @bnielson01 @ks445599

Their brains are. But yes: whatever happens in a human brain - creativity, among other things - must be replicable on a computer other than the brain, if that computer is universal.

@dchackethal · · Show · Open on Twitter

@RebelScience @connectedregio1 @bnielson01 @ks445599

"Data" means "givens." That function I sent you does not take any inputs: it's not "given" anything, and especially not through senses. And yet it performs computation, which is something you claimed is impossible.

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

There are whole programming languages that are built around the idea that code is data. They are called "Lisps."

@dchackethal · · Show · Open on Twitter
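The code-is-data idea mentioned above can be sketched in a few lines of Clojure (a Lisp). This is an illustrative example, not from the original tweets: a quoted form is an ordinary list that can be inspected like any other data, and `eval` runs that same data as code.

```clojure
;; In a Lisp, code is written in the language's own data structures.
;; This quoted form is just a list holding a symbol and two numbers:
(def program '(* 2 2))

;; We can inspect it like any other data...
(first program) ; the symbol *
(count program) ; 3 elements

;; ...and hand the very same data to eval to run it as code.
(eval program)  ; 4
```

The same list is treated as data in one line and as code in the next, which is the sense in which Lisps are "built around the idea that code is data."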

@dela3499 @ReachChristofer

What's the basic machinery of minds?

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

No. Software engineering is not an appeal to reductionism. It happens on the same level of emergence on which ideas live.

@dchackethal · · Show · Open on Twitter

@RebelScience @connectedregio1 @bnielson01 @ks445599

As to your point "there can be no computation without data." Here's a simple program that takes no data:

(defn foo [] (* 2 2))

It returns 4. Are you saying this isn't computation?

@dchackethal · · Show · Open on Twitter

@RebelScience @connectedregio1 @bnielson01 @ks445599

Organisms are born with data that's genetically given. And an organism would retain all of that genetic data even if born with a malfunction that cut off its brain from all sense data.

@dchackethal · · Show · Open on Twitter

@connectedregio1 @RebelScience @bnielson01 @ks445599

A computer is universal when it can compute anything any other computer can compute. A universal computer can compute any computable function.

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

No. Code is data and remains data even if cut off completely from the outside world, with no ability to ingest additional data.

I'm guessing this mistake rests on a misunderstanding of computational universality.

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

By the same logic, someone who is, say, schizophrenic, just needs to be talked out of it?

@dchackethal · · Show · Open on Twitter

@dela3499 @ReachChristofer

The comparison doesn't hold because neurosurgery and software engineering happen on different levels of emergence.

@dchackethal · · Show · Open on Twitter

@univ_explainer

It's still possible it would need to update some parameters at first, which may take some time, and so it might initially stumble about a bit. Progress might look like "learning" - in which case, such a result may not tell us much.

2/2

@dchackethal · · Show · Open on Twitter

@univ_explainer

Off-the-cuff guess: it would do better than a human initially, because presumably its knowledge of what to do with visual impressions is given genetically and remains present even when blind. Once the "veil is lifted," it just needs to invoke the knowledge.

1/

@dchackethal · · Show · Open on Twitter

@ReachChristofer

I found the video of the dog playing Jenga, if anyone wants to see it:

youtube.com/watch?v=5PrnVk…

@dchackethal · · Show · Open on Twitter

It was fun being on Christofer's show again. twitter.com/ReachChristofe…

@dchackethal · · Show · Open on Twitter

@bnielson01 @ks445599 @connectedregio1

I call them "functions," but all replicators (including memes and genes) are functions, so I have been looking for a new term as well! If you think of one, let me know...

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

Well, we also know from the universality of computation that we could build general intelligence on computers that don’t have any sensory inputs.

Either refute the universality of computation or stop invoking special-purpose brain regions and sensory inputs.

@dchackethal · · Show · Open on Twitter

@RebelScience @bnielson01 @ks445599 @connectedregio1

No because we know from the universality of computation that we could build general intelligence on computers that don’t have visual cortices.

@dchackethal · · Show · Open on Twitter

@RebelScience @ks445599 @connectedregio1 @bnielson01

Someone should write about why thinking about brain regions is pointless in this regard... Oh wait, I already did: amazon.com/dp/1734696109/…

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

I suggest you move this to critapp if you want to have a more detailed discussion. It's near impossible to have long-form discussions on Twitter.

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

We should take a step back. The underlying issue here is that you ignore my questions and that you are still an inductivist without realizing it. After ignoring a question you usually follow up with long, somewhat unrelated arguments.

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

Again, why would there be nothing to learn?

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

People don’t learn from their environment.

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

In fact, one doesn't need access to any environment for learning to happen, let alone a random one. A brain in a vat can still create explanations.

Thinking that some environment is needed, or that it need contain regularities (non-randomness), is just a cousin of inductivism.

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

Why would there be nothing to learn?

People may well guess the explanation "things happen randomly," and they'd be correct. By creating this explanation, they would have learned something.

And by creating mistaken explanations and then improving upon them, they'd also learn.

@dchackethal · · Show · Open on Twitter

@bnielson01 @connectedregio1 @ks445599

Yeah, which makes me think the brain must be the one non-random thing in this scenario.

But no, I don’t think it would make every explanation coincidence. That’d be judging explanations by likelihood, no?

@dchackethal · · Show · Open on Twitter

@ks445599 @connectedregio1

Not all animals, but many conceivably have both. It’s harder not to evolve it. And our ancestors, which were animals themselves, must have had both for us to evolve intelligence.

@dchackethal · · Show · Open on Twitter

@ks445599 @connectedregio1

Though I agree with the conclusion, many animals’ brains may be universal computers.

Universal computation is a necessary condition for intelligence, but not sufficient. What’s needed in addition is explanatory universality.

@dchackethal · · Show · Open on Twitter

@connectedregio1 @ks445599

Why wouldn’t people be able to develop explanations in such an environment?

They would wonder about their environment, and try to explain it, no? And maybe they would conjecture that it’s random.

@dchackethal · · Show · Open on Twitter

@michaelshermer

Assuming 4 through 7 are true, aren’t they part of 3?

And isn’t taking vitamin D better than direct sun exposure?

@dchackethal · · Show · Open on Twitter

@connectedregio1

I agree.

Effective psychotherapy is a branch of software engineering. Present-day approaches happen on the level of the brain, not the mind, and, therefore, are ungainly at best, counterproductive at worst.

Developing effective psychotherapy is a big part of AGI studies.

@dchackethal · · Show · Open on Twitter

My book "A Window on Intelligence" is now available worldwide, anywhere you can buy books.

Amazon: amazon.com/Window-Intelli…
Apple Books: books.apple.com/us/book/a-wind…
Barnes & Noble: barnesandnoble.com/w/a-window-on-…

Learn more: windowonintelligence.com

@dchackethal · · Show · Open on Twitter

I have been looking forward to this day for almost a year.

Today, a SPECIAL episode for my listeners. Introducing: A Window on Intelligence - The Philosophy of People, Software, and Evolution - and Its Implications.

Be the first to listen to the excerpt: soundcloud.com/dchacke/13-int… https://t.co/eH6BbVsmHg

@dchackethal · · Show · Open on Twitter

@DKedmey

Since evolution is a gradual process, I guess there was never a single genetically unique organism that wasn't the last of its species (that even includes humans).

Entire species can be ~unique if their last common ancestor is sufficiently far back up the phylogenetic tree.

@dchackethal · · Show · Open on Twitter

Far out in space...

soundcloud.com/dchacke/far-ou…

@dchackethal · · Show · Open on Twitter

@dela3499 @KittJohnson_

Indeed. Likewise, the gaps in human thought are filled with viable ideas. Not necessarily viable vis-a-vis a problem, but vis-a-vis spreading through the population of a mind's ideas.

Otherwise, we need to explain evolution as anything other than gradual.

@dchackethal · · Show · Open on Twitter

@DKedmey @dela3499 @KittJohnson_

LOL!!

@dchackethal · · Show · Open on Twitter

@dela3499 @KittJohnson_

Yes. And are there not dramatic genetic gaps between, say, a sea horse and an elephant?

@dchackethal · · Show · Open on Twitter

@dela3499 @KittJohnson_

Perhaps :) Or, to make things easier, space shuttles. But maybe even those could be evolved biologically in principle.

Also, the reason human thought can escape parochialism is not that it can jump gaps - it's still gradual - it's that a human with bad ideas doesn't die.

@dchackethal · · Show · Open on Twitter

@dela3499 @KittJohnson_

I guess that Lambda Calculus is evolvable, and that evolution stumbled upon it pretty soon, because it is easier to evolve than any non-universal counterparts.

I guess that is true of other universal models of computation, too.

Conversely, what isn't evolvable in principle?

@dchackethal · · Show · Open on Twitter

@dela3499 @KittJohnson_

Could be!

@dchackethal · · Show · Open on Twitter

Another example of buggy animal programming (one that many will, presumably, readily interpret as intelligence, despite being evidence of the opposite). twitter.com/jimcaris/statu…

@dchackethal · · Show · Open on Twitter

@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Through a population of memes, yes.

Count me as one of those silly people who doesn't think animals can learn :) At least not in the sense of creating knowledge - animals that copy memes do so through imitation and don't seem to create knowledge in the process.

@dchackethal · · Show · Open on Twitter

@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Not a population, but a kitten not grooming itself until exposed to a cat who does is an example where memes are a better explanation than genes. For, if the kitten had contained the knowledge of how to groom genetically, why didn't it groom? Maybe that needed to be "activated"?

@dchackethal · · Show · Open on Twitter

@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Ah, I'm guessing that's what you meant by "universal" animal behavior: that which thousands of unconnected populations share.

For those, I agree that genes are the better explanations. Nonetheless, some animals do have memes.

Do you agree?

@dchackethal · · Show · Open on Twitter

@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

It can. Memetic evolution may independently converge onto similar solutions, as evolutionary algorithms often do.

(It is for this same reason that a rabbit fossil in Cambrian rock would not refute the biological theory of evolution, but it applies equally to meme evolution.)

@dchackethal · · Show · Open on Twitter

@mizroba @DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

"Memetic" means "of or related to memes." I don't know what you mean by universal animal behaviors, but yes, some animals have memes.

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Top-down causal chains are real, and they happen from meme to DNA, but reductionism denies that and says that causation can only travel bottom-up. So in a reductionist framework, it seems to make sense that genes have all the control. Which leads to many new problems :)

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

So then genes presumably have enough control to build in a fail-safe that controls human behavior to the degree that it isn't detrimental to the genes? And so then humans aren't universal explainers after all? No.

And DNA molecules are made of atoms, so do atoms control DNA?

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Ah, our old friend reductionism. :) Genes build the neuronal architectures that store memes, sure, but that doesn't seem to have much impact on the kinds of memes people can have: how do memes such as fasting, homosexuality, etc spread despite being the gene's worst nightmare?

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

And so we may consider the dysfunction or disappearance of the DNA molecules that used to encode those genes to be part of that meme’s phenotype.

2/2

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

I think so. I once heard that cats’ grooming behavior is memetic not genetic. Suppose it used to be genetic. After the meme of grooming spread through the population of cats, mutations of the corresponding genes occurred and now those genes are either dysfunctional or gone...

1/

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Yes, I'd consider those DNA molecules part of the phenotype of the meme of growing meat in a lab.

Not the genes themselves, though - they are abstractions, and phenotypes are about effects on the physical world.

@dchackethal · · Show · Open on Twitter

@DKedmey @BretWeinstein @RichardDawkins @ToKTeacher @DavidDeutschOxf

Some memes are encoded in genes, e.g. the human meme of pointing in some dogs' genes (credit to David for telling me this). In such cases, the DNA molecules of such genes could be considered an extended phenotype of those memes, yes.

Is that the sort of thing you had in mind?

@dchackethal · · Show · Open on Twitter

@ReachChristofer @DavidDeutschOxf

I’m guessing that’s how one could demote a person to an animal. (Horribly immoral to do so!)

If we were to delete all ideas from a mind, I’m not sure it’d be a mind anymore, but it may be more of a semantic issue at that point.

2/2

@dchackethal · · Show · Open on Twitter

@ReachChristofer @DavidDeutschOxf

If we could somehow delete those ideas from a mind that replicate within it, then evolution in that mind would stop and it wouldn’t be creative anymore. It may still contain non-replicating ideas, though, and therefore be a non-creative space for ideas.

1/

@dchackethal · · Show · Open on Twitter

@Giovanni_Lido @noa_lange @DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @JohnHMcWhorter

I can't decide if it's better or worse to only have one term for it. :) The German 𝑤𝑎ℎ𝑟𝑠𝑐ℎ𝑒𝑖𝑛𝑙𝑖𝑐ℎ is an interesting case: as an adjective, it means what you said. As an adverb, it seems to mean "probably," e.g. "wahrscheinlich richtig" means "probably true."

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

(using "likely" as a synonym of "probable" here)

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

I don't know Greek but the confusion seems to date back to misinterpretations of Xenophanes' usage of the word "eoikota."

3/3

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

The terms are opposites because the more like the truth a theory is, the more non-trivial, complex, and bold it is, and therefore less likely to be true.

2/

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

Epistemologically speaking, these sound off.

According to Popper's C&R, the confusion between "likely" and "like the truth" dates back to misinterpretations of Xenophanes in ancient Greece who used the latter meaning.

1/

@dchackethal · · Show · Open on Twitter

@tjaulow

I haven't, but I guess behaviorism puts a cap on how much he can contribute to AGI research.

@dchackethal · · Show · Open on Twitter
