Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Tweets

An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.

But in case I do, you can subscribe via RSS – without a Twitter account. Rationale

@dchackethal · · Show · Open on Twitter

@ReachChristofer @DavidDeutschOxf

I’m guessing that’s how one could demote a person to an animal. (Horribly immoral to do so!)

If we were to delete all ideas from a mind, I’m not sure it’d be a mind anymore, but it may be more of a semantic issue at that point.

2/2

@dchackethal · · Show · Open on Twitter

@ReachChristofer @DavidDeutschOxf

If we could somehow delete those ideas from a mind that replicate within it, then evolution in that mind would stop and it wouldn’t be creative anymore. It may still contain non-replicating ideas, though, and therefore be a non-creative space for ideas.

1/

@dchackethal · · Show · Open on Twitter

@Giovanni_Lido @noa_lange @DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @JohnHMcWhorter

I can't decide if it's better or worse to only have one term for it. :) The German 𝑤𝑎ℎ𝑟𝑠𝑐ℎ𝑒𝑖𝑛𝑙𝑖𝑐ℎ is an interesting case: as an adjective, it means what you said. As an adverb, it seems to mean "probably," e.g. "wahrscheinlich richtig" means "probably true."

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

(using "likely" as a synonym of "probable" here)

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

I don't know Greek, but the confusion seems to date back to misinterpretations of Xenophanes' usage of the word "eoikota."

3/3

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

The terms are opposites because the more like the truth a theory is, the more non-trivial, complex, and bold it is, and therefore the less likely it is to be true.

2/

@dchackethal · · Show · Open on Twitter

@DKedmey @DavidDeutschOxf @veritasium @Crit_Rat @LRNR @Giovanni_Lido @JohnHMcWhorter

Epistemologically speaking, these sound off.

According to Popper's C&R, the confusion between "likely" and "like the truth" dates back to misinterpretations of Xenophanes, who used the word in the latter sense, in ancient Greece.

1/

@dchackethal · · Show · Open on Twitter

@tjaulow

I haven't, but I guess behaviorism puts a cap on how much he can contribute to AGI research.

@dchackethal · · Show · Open on Twitter

@giantcat9

Does he remember?

@dchackethal · · Show · Open on Twitter

@PopperPlay

Evolutionary algorithms (genetic programming). We don't yet know how to build AGI with them, but they seem to be the only promising path.
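
To illustrate the variation-and-selection loop that evolutionary algorithms rely on, here is a minimal sketch in Python; the bit-string target and fitness function are made-up assumptions, and genetic programming proper would evolve program trees rather than bit strings, but the loop has the same outline.

```python
import random

# Toy evolutionary algorithm: evolve a bit string toward a fixed target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # illustrative goal

def fitness(candidate):
    # Count matching bits; higher is better.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.1):
    # Flip each bit with a small probability (variation).
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=20, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, then refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        if fitness(population[0]) == len(TARGET):
            break
    return population[0]

print(evolve())
```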

@dchackethal · · Show · Open on Twitter

@PopperPlay

Glad you like it. Reach as in could potentially help build AGI, or as in learn about functional programming generally?

@dchackethal · · Show · Open on Twitter

@ChristopherCode

Sure, happy to discuss.

@dchackethal · · Show · Open on Twitter

@ChristopherCode

Mind elaborating on the last step?

@dchackethal · · Show · Open on Twitter

@joe_shipman @DavidDeutschOxf @ESYudkowsky

I recorded an impromptu episode of my podcast to go into more detail about this:

soundcloud.com/dchacke/artifi…

@dchackethal · · Show · Open on Twitter

I recorded an impromptu episode about the question of "value alignment" in artificial intelligence research:

soundcloud.com/dchacke/artifi…

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Not really, no. Here's a hard-to-vary explanation of explanation: youtube.com/watch?v=DHR6ro…

@dchackethal · · Show · Open on Twitter

The mistake Popper identified as the who-should-rule question permeates inquiries into AGI and is behind the pseudo-problem of “value alignment.” An example of how critical philosophical knowledge is in this field and how urgently we need more of it. twitter.com/joe_shipman/st…

@dchackethal · · Show · Open on Twitter

@joe_shipman @DavidDeutschOxf

It’s undesirable because when taken seriously it leads to enslavement and murder of AGI.

2/2

@dchackethal · · Show · Open on Twitter

@joe_shipman @DavidDeutschOxf

One can’t know, nor does one need to. The very question seeks an authoritative answer (see the only superficially unrelated economist.com/democracy-in-a…; disable JavaScript to circumvent the paywall). And mistakes are what enables progress in the first place.

1/

@dchackethal · · Show · Open on Twitter

@jchalupa_ @ReachChristofer @adilzeshan @RealtimeAI @dela3499 @ToKTeacher @Soph8B

Video game "AIs" work according to the same mechanisms as animals even when given only incomplete information about their environment. (In)complete access to model of environment is not the defining factor. The ability to create knowledge is.

@dchackethal · · Show · Open on Twitter

@jchalupa_ @ReachChristofer @adilzeshan @RealtimeAI @dela3499 @ToKTeacher @Soph8B

It's possible, but not part of our best explanation of creativity. FWIW creativity does not only apply to humans but to all people (humans, AGIs, intelligent ETs, etc.).

1/

@dchackethal · · Show · Open on Twitter

@jchalupa_ @ReachChristofer @adilzeshan @RealtimeAI @dela3499 @ToKTeacher @Soph8B

Moment to moment response to environmental factors != creativity.

Eg video game "AIs" respond to the player's movements and decisions moment to moment. Since we build them, we know they do not use creativity for that. It all follows predefined algorithms that have reach.
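
As a sketch of what "predefined algorithms that have reach" can mean here (the rules and numbers below are made up for illustration, not taken from any real game): the enemy reacts to the player's position every moment, yet every reaction follows rules its programmers fixed in advance.

```python
# Hypothetical game "AI": reacts to the player every frame,
# but only by following rules fixed in advance by its programmers.

def enemy_action(enemy_pos: float, player_pos: float, player_attacking: bool) -> str:
    distance = abs(player_pos - enemy_pos)
    if player_attacking and distance < 2.0:
        return "block"   # predefined response to a nearby attack
    if distance < 5.0:
        return "chase"   # close the gap
    return "patrol"      # default behavior

# The rules have some reach -- they handle player positions the programmers
# never saw -- but no new knowledge is created at runtime.
print(enemy_action(enemy_pos=0.0, player_pos=1.5, player_attacking=True))  # block
```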

@dchackethal · · Show · Open on Twitter

@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher

I'm happy to immediately drop the term "AGI" and replace it with whatever you find more appropriate as long as we know we are talking about the same thing.

@dchackethal · · Show · Open on Twitter

@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher

I explain my reasoning for the sharp distinction between narrow AI and AGI and why progress in one is not progress in the other here: soundcloud.com/doexplain/2-wh…

@dchackethal · · Show · Open on Twitter

@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher

Just because the vast majority of researchers use a word as it is commonly used does not mean they are successful.

I'm on board with saying that narrow AI researchers achieve great successes in the domain of narrow AI. Sadly, that says nothing about their contribution to AGI (nil).

@dchackethal · · Show · Open on Twitter

@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher

That’s the AI researchers’ confusion, not Deutsch’s. :)

They don’t understand universality. That’s why AI researchers will not make progress in AGI.

Entirely different field with a misleadingly similar name.

@dchackethal · · Show · Open on Twitter

@NathanPMYoung @IamtheWay13 @DavidDeutschOxf @ToKTeacher

Not entirely sure what you mean by “AI is a superset of AGI” but in terms of problem solving it’s the opposite because an AGI could do everything all narrow AI programs could do (and then some).

@dchackethal · · Show · Open on Twitter

@joe_shipman @DavidDeutschOxf

AI is like all other programs: execution of predefined tasks, no creation of new knowledge to solve novel problems.

AGI = universal problem solver; creates knowledge. Opposite of not creating knowledge.

BTW, perfect value alignment is neither feasible nor desirable.

@dchackethal · · Show · Open on Twitter

@DavidDeutschOxf

Oof. Pessimism ad nauseam.

@dchackethal · · Show · Open on Twitter

@IamtheWay13 @NathanPMYoung @SmashAGrape @DavidDeutschOxf @ToKTeacher @SamHarrisOrg

Yeah humans = AGIs (both are people) because of universality.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @pmathies @ReachChristofer @dela3499 @ToKTeacher @Soph8B

I don't understand the question. Please elaborate.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @pmathies @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Both frogs and dogs may well have universal computers for brains. So it's not the brains. It's the software installed on those brains that determines whether you can train that brain.

@dchackethal · · Show · Open on Twitter

@ReachChristofer @EAMagnusson @RealtimeAI @dela3499 @ToKTeacher @Soph8B

Exactly. Fallibilism must not lead to paralysis during decision making. Otherwise we run the risk of turning fallibilism into a strange version of the precautionary principle.

We should act on our best explanations without hesitation. Anything else is like Pascal's wager.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

IIRC DNA is a general purpose storage medium, so yes, I think it could encode a nuclear spaceship. (That’s not to say such a thing would evolve biologically.)

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

No. They’re on a molecular basis?

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

I have been thinking about how cool it would be to build self-replicating machines. I don’t think it’s been done. Might tell us a thing or two about evolution.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Nuclear von Neumann probes? :)

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Sure but none of the ideas built by creativity did. Those are created at runtime.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B

Examples of that: learning how to communicate with people, or building complex structures that are completely different from anything the species has built before, or every individual creative animal being entirely unique in character.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B

... and therefore would not have evolved biologically, and therefore could not be encoded in the animal's genes, and therefore must have been created by the animal itself, at runtime (i.e. during its lifetime).

3/

@dchackethal · · Show · Open on Twitter

@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B

Here's what I'd consider evidence for creativity in an animal: knowledge for which there would have been no genetic precursors in its ancestors, or which would not have given the ancestors' genes a better ability to spread...

2/

@dchackethal · · Show · Open on Twitter

@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B

Combination of inborn ideas about what things to avoid and a shape-recognition algorithm?

1/

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Also note how the dog is not looking at the stick it's pulling out, or even at the tower, but at its owner. It has no idea what it's doing or why. It wants to please its owner.

2/2

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

I don't think it guessed. It just had to update parameters and make associations. Dogs know from birth what sticks are, how to put things in their mouths, and how to seek reward and avoid punishment.

Put these things together and you get enough reach to play Jenga.

1/

@dchackethal · · Show · Open on Twitter

@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

I made critapp.com for that reason - I think it has better tools for discussion than Twitter. If you'd like to try it out, shoot an email to contact@critapp.com and I'll make an account for you. :)

@dchackethal · · Show · Open on Twitter

@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

... and it makes sense for dogs to have genetically evolved such criteria for success because people have been selectively breeding them.

2/2

@dchackethal · · Show · Open on Twitter

@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Ok, a few things (Twitter’s character limit is terrible):

Narrow AI can already do what the dog does there.
AGI can only be achieved in a jump, so we must skip to it somehow.
What the dog counts as success is genetically given (praise by owner)...

1/

@dchackethal · · Show · Open on Twitter

@univ_explainer @RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

To be clear, you mean an AGI?

The dog trick is cool, but can be accommodated by a reinforcement algorithm with enough reach, which dogs seem to have. (Note how she praises the dog. Also note how the dog watches her closely, presumably for facial cues indicating success.)

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

While it's impressive that animals can do this, it does not require any creativity on their part - that's why I don't consider it learning.

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

Yes, though I'd be careful with the word "learn" there - the animal may have an inborn reinforcement "learning" algorithm, which, coupled with inborn shape-recognition algorithms, updates parameters to categorize something as "not dangerous" after several interactions.
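
A minimal sketch of the kind of parameter updating described here, under assumed numbers and an assumed update rule (an illustration, not a model of any real animal): repeated harmless encounters lower a "danger" parameter until the stimulus is categorized as "not dangerous," with no new explanatory knowledge created along the way.

```python
# Toy "inborn" update rule: a danger estimate for a stimulus decays
# toward 0 after harmless encounters and jumps after harmful ones.
# Numbers are arbitrary and only illustrate parameter updating.

def update_danger(estimate: float, harmful: bool, step: float = 0.2) -> float:
    if harmful:
        return min(1.0, estimate + 3 * step)
    return max(0.0, estimate - step)

danger = 1.0                # start out wary (inborn prior)
for _ in range(5):          # five harmless interactions
    danger = update_danger(danger, harmful=False)

print("dangerous" if danger > 0.5 else "not dangerous")  # -> "not dangerous"
```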

@dchackethal · · Show · Open on Twitter

@univ_explainer @ReachChristofer @RealtimeAI @dela3499 @ToKTeacher @Soph8B

Why did they have almost no chance to explain anything?

In any case, note that "universal explainer" also signifies an ability, not a guarantee or even chance of success.

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

Yes, I don't disagree that the replication strategies of memes differ from those of ideas that never become memes. But... so what? :)

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

I claim that they do replicate within minds, just not necessarily across people :)

@dchackethal · · Show · Open on Twitter

@univ_explainer @ReachChristofer @RealtimeAI @dela3499 @ToKTeacher @Soph8B

All babies make that jump to universality long before they learn to speak. This universality lies within people - it is not induced or awarded by outside factors such as technology (not to mention that one needs creativity to make technology in the first place).

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

Eg "selecting" a preference between fight or flight can be done according to inborn algorithms that do not involve creativity (variation and selection in the evolutionary sense).

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

Ah, got it. I'll claim that's a stretch of the phrase "variation and selection" as it strays a bit from evolution because it doesn't refer to variation and selection of replicators.

@dchackethal · · Show · Open on Twitter

@dela3499 @RealtimeAI @ReachChristofer @ToKTeacher @Soph8B

Can you give an example of variation and selection in animal brains?

@dchackethal · · Show · Open on Twitter

@RealtimeAI @dela3499 @ReachChristofer @ToKTeacher @Soph8B

Why bring recursion into this?

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Well, people are universal explainers, so even if other organisms have some limited creativity, that marks a pretty sharp distinction. They would all have an infinitesimal repertoire compared to people.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Could all of those things not be encoded genetically? At the risk of sounding like a broken record, the presence of knowledge, including that which changes behavior, is not evidence of creativity. That knowledge may have emerged from biological evolution.

@dchackethal · · Show · Open on Twitter

@ReachChristofer @RealtimeAI @dela3499 @ToKTeacher @Soph8B

Yes, people do eventually die if they don't solve problems. But I don't think the absence of creative thought = death. Eg if you run on autopilot for a few minutes, that won't kill you.

Of course, the underlying message rings true: problem avoidance eventually kills people.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer @dela3499 @ToKTeacher @Soph8B

Here's a criticism of one of the ideas in BoI :)

critapp.com/#/posts/bdd9d1…

@dchackethal · · Show · Open on Twitter

The artificial intelligence research community is in bad shape...

soundcloud.com/dchacke/scanda…

@dchackethal · · Show · Open on Twitter

@DKedmey

Actually, I take it back - progress is the result of that. So we still need a word for it. :)

@dchackethal · · Show · Open on Twitter

@DKedmey

Progress? :)

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

I’m about to publish something on this, stay tuned. :)

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

Yeah those with side effects transform minds, and, if they get a mind to act, the world.

The motivation for treating ideas as functions is to solve the problem of how to encode ideas in a computer program.

Ideas need not return the same output for the same input, and neither do functions.
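
As a sketch of that last point (the encoding below is my own illustration, not the author's actual scheme): a function with side effects can read and mutate state, so the same input need not produce the same output.

```python
import random

# Illustrative only: an "idea" encoded as a function that reads and
# mutates state (a side effect), so identical inputs can yield
# different outputs on different calls.

mood = {"irritated": False}

def respond_to_criticism(criticism: str) -> str:
    if mood["irritated"]:
        return "Dismiss: " + criticism
    # Side effect: the idea changes the mind's state as it runs.
    mood["irritated"] = random.random() < 0.5
    return "Consider: " + criticism

print(respond_to_criticism("Your argument is circular."))
print(respond_to_criticism("Your argument is circular."))  # may differ
```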

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

PS: The above is more of an answer to your question “Is there a way we can show that all possible conjecturing and problem solving descends from a single algorithm?” from popperplay.com/problem/Qb6ij0…

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

So the explanatory universality of people is powered by the computational universality of functions. Those two universalities are deeply intertwined.

4/4

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

Since Lambda Calculus is computationally universal, all ideas in the mind can be expressed as functions, and so the above is the same as saying that it’s a functional replicator in a mind that explores the space of all possible functions.

3/

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

A more elaborate one: ideas replicate imperfectly within a creative mind and thereby inadvertently explore the space of all possible ideas. This is how ideas that happen to solve a problem or explain something sometimes evolve in a mind.

2/

@dchackethal · · Show · Open on Twitter

@PopperPlay @ReachChristofer @DavidDeutschOxf

Some quick arguments for the explanatory universality of creative minds:

1) What couldn’t one guess? (nothing, it seems)

2) Humans are so far off the mark (we have built space shuttles, cured diseases, etc.) that it just makes sense to think they are universal.

1/

@dchackethal · · Show · Open on Twitter

@ReachChristofer @ks445599 @PopperPlay @DoqxaScott @DavidDeutschOxf

Yeah, IIRC, there is no computation a quantum computer can perform that a classical UTM couldn’t. It’s just that some of those computations run intractably slowly on UTMs compared to quantum computers.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

I once saw a video of a monkey swiping pictures on an iPhone. Cool, but not evidence of creativity.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

What do crows/monkeys/other animals do that couldn't be explained exclusively in terms of biologically evolved adaptations? Do you have a video showcasing such behavior, or maybe an article explaining it?

@dchackethal · · Show · Open on Twitter

@DoqxaScott @ks445599 @RealtimeAI @ReachChristofer

I think conjectures are the result of imperfect replication of ideas in the mind.

@dchackethal · · Show · Open on Twitter

@ks445599 @DoqxaScott @RealtimeAI @ReachChristofer

I'd leave out any considerations involving pattern matching because they are too close to empiricism. It's a mistake I have made in the past myself. Empiricism is tempting, so it does sneak back into one's thinking here and there if one isn't careful.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

I mean, maybe we can consider the result of any algorithm running in the mind a conjecture, but thinking of creativity as pattern matching is a dangerous path into empiricism, which is really creativity-denial.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

That's why I wrote "intelligence/consciousness" a number of times, because if you have one you automatically have the other.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

I didn't mean to suggest that intelligence and consciousness are the same thing.

I think intelligence = creativity. Same thing just different words. And I think consciousness, among other things, is epi-creative, meaning it arises from creativity.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Ah - you’re saying the result of, say, a pattern matching algorithm is a conjecture?

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

There may be value in it, idk, I’m just pointing out that one is an error and the other a result of one. They are different things. So I don’t think the comparison applies.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Yes, with the proviso that no knowledge of how to solve particular problems is given (or only a little, in the case of inborn ideas) and it needs to be evolved at runtime instead.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Sounds like empiricism. Not sure what you’re trying to say. Please elaborate?

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Yes. Sometimes adaptations have enough reach to incorporate use of new tools etc.

Knowledge of any kind, no matter how sophisticated, is not evidence of intelligence.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Well, a conjecture is the result of an erroneous replication in a mind, so I wouldn’t compare it to transcription errors per se.

But yes there are many differences between biological evolution and what I call functional evolution in a mind.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Don’t see why those couldn’t have been genetically programmed?

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Not pedantic, good point. Errors in transcription do indeed happen somewhere in the plant. But there's no evolution within the plant, hence it's not intelligent.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

I think conjectures and refutations are components of intelligence regardless of whether they are made consciously.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

There is no variation and selection happening within plants. They happen across plants.

And yes I think only people are intelligent.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

That's not creating knowledge. It's just updating some parameters, and it all happens according to genetically given instructions.

@dchackethal · · Show · Open on Twitter

@DoqxaScott @RealtimeAI @ReachChristofer

Genes are within plants, sure, but they are not intelligent/conscious because new knowledge is not created from within them.

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer

Okay but why? :)

@dchackethal · · Show · Open on Twitter

@ks445599 @RealtimeAI

It’s a way to avoid explaining that by saying that consciousness is somehow already present everywhere.

Similar to how Lamarckism, empiricism, etc. state that knowledge is already present somehow.

2/2

@dchackethal · · Show · Open on Twitter

@ks445599 @RealtimeAI

Agreed. Also note that panpsychism is not an explanation. It’s just a statement: everything is conscious to some degree. That’s too easy. It doesn’t explain what consciousness is, or at least what gives rise to it.

1/

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer

Both would only be intelligent/conscious if knowledge originated from within them.

3/3

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer

In the case of plants, the knowledge originated in biological evolution and the plant just inherited it through genes.

In the case of a Roomba, the knowledge originated in the minds of programmers and the Roomba “inherited” it through programmatic instructions.

2/

@dchackethal · · Show · Open on Twitter

@RealtimeAI @ReachChristofer

Cool :)

I don’t think a Roomba is intelligent/conscious. Both Roombas and plants contain knowledge, no doubt. But to determine whether they are intelligent, one needs to determine the origin of that knowledge.

1/

@dchackethal · · Show · Open on Twitter

@RealtimeAI

Okay, so does a Roomba. Is a Roomba intelligent/conscious?

@dchackethal · · Show · Open on Twitter
