
Dennis Hackethal’s Comments


Loved this article. […]

Thanks.

Genes wouldn’t be favored at roughly the rate of deleterious mutations because most genes don’t code for learnable knowledge.

I think you mean creativity (or, more precisely, the genes coding for it) wouldn’t be favored and so on.

For example, you can’t learn to control the metabolic processes of your liver.

It’s true that most genes (in humans at least) don’t code for learnable knowledge, but creativity can make up for some physical shortcomings, too: if your genes give you a faulty leg, say, you can use your creativity to make a cane.

Having said that, you make a fair point. I’ve revised the article to be more specific about which mutations creativity can make up for. (I think one could control the metabolic processes of one’s liver by creating and taking the right medicine, but early humans obviously didn’t have that knowledge.)

Skipping some, you write:

The 20,000 number is genes in the sense of genetics, that is - DNA that codes for proteins.

Teslo spoke of “30,000-35,000 genes”, not 20,000.

Regarding the rest of your comment, epistemology also predicts that humans have more junk in their genes (in the neo-Darwinian sense) than any other species. And I could see missing or faulty protein synthesis leading to behavioral errors which creativity can then make up for. But I’ve edited the post to reflect the distinction you mention.

Case in point: a woman gets physical with her ex-husband, then falsely accuses him of raping her so her boyfriend doesn’t find out about her infidelity: https://x.com/ogunski/status/1826080126708125741

This cat has the exact same meowing pattern on three occasions: https://www.instagram.com/reel/C-41ee_JXup/

The video even calls the cat an ‘NPC’ (non-playable character, ie dumb video-game AI, which often makes the exact same utterances):

When your cat is actually an NPC and he tries to offer you the same side quest every single time you walk to his area of the map.

It’s a joke but shouldn’t be.

Here’s Naval pandering to mystics:

Science and spirituality are both the search for truth.

Gross. I’ve gotten ‘guru’ vibes (metaphor!) from Naval before. I don’t know why Deutsch associates with someone like that.

If they were spontaneous enough to create the conditions for themselves, than [sic] that’s exactly the situation where what you call “the leakage problem” is present.

I don’t think so. The leakage problem refers to a situation where a knowledge-laden entity (eg, a person) puts knowledge in some other entity and observers then mistake the mere presence and exhibition of said knowledge for genuine knowledge creation on the part of that second entity. The second entity did not create knowledge, it merely inherited it, yet this inheritance goes unnoticed and is mistaken for creation.

A spontaneous process, by definition, is not knowledge-laden, thus there can be no leakage of knowledge there.

Yes, but in an attempt to discover the origin of life the point of making the conditions such that life arises “reliably” is simply to say that such conditions have a probability of arising that’s on par with constraints like time, available materials at the time, etc.

I think scientists would shoot for probability 1 (or very close to 1).

That completes the how it could’ve happened explanation.

That mode, again, is no better than RNA world. (Though that is good.)

Don’t see any philosophical problem there.

Because you’re not addressing the leakage problem. That’s related but not the same.

 The first bit of knowledge creation, by definition, had to have happened by chance.

Not necessarily. There could be spontaneous processes that reliably kick off knowledge-creating processes. That would also solve the leakage problem.

Scientists like reproducibility. And what you describe is pretty much what we already have with things like RNA world except some details are missing.

Also, consider other origin-of-knowledge events such as creativity: every newborn has it, reliably & predictably (unless there are certain birth defects), yet it’s still genuine knowledge creation.

There remains the question if the currently discussed world models of physics […] already covers all of physical reality - apart from the problem that those theories are still awaiting a unification.

Not a physicist but I doubt they cover all of physical reality in the sense that they’re some ‘ultimate’ explanation. Even our best theories are always going to have shortcomings. That includes theories we come up with after the unification you mention. There’s never a guarantee that tomorrow we won’t find some new aspect of physical reality which our best theories do not yet cover.

[T]he stance “consciousness can be fully explained by the underlying physics as we know it” is of course possible, but there are others […]. What do you think about this?

Consciousness is always the result of a physical process. But that in itself doesn’t explain consciousness. Any viable explanation of consciousness will let us program it on a computer.

Maybe the question should be phrased: ‘Should there be a process of relieving children from their bad parents?’

To which my answer is: absolutely, yes, assuming the children of bad parents want to be relieved.

My answer was predictably deleted today by moderator Rory Alsop, who has a history of doing that. “Dennis - this post does not answer the question. Once again, I must remind you that Answer posts must answer the question.”

My answer does answer the question, just not without questioning OP’s alleged infallibility as a parent.

Does Rory think that, when a question is based on false premises, one should just pretend the premises are true? Apparently. He deleted another answer of mine and commented: “Frame challenges are not welcome here.”

I think traditional female culture has done even more harm to women.

Can you explain what you think traditional female culture is?

Diffs for two of the mentioned misquotes can be found here:

Arguably, the link between explaining and controlling reality is also an objectivist insight.

In short, you're suggesting the reason for his incomprehensible style of communication is not obscurantism but social incompetence?

Keep your response short.

Uncompetative,

Re #627. I like when people catch misquotes, but you technically misquoted Witten yourself. You replaced a hyphen with a space. Not sure how that happened – did you not copy/paste the quote?

Re #629. You wrote:

He can’t be vague and imprecise in order to ensure the layperson is not left behind.

He's going to have to find some solution if he wants to address laymen. He could explain the terms. For example, as a software engineer, when I speak to laymen about programming, I either explain the terms or use analogies they will understand. I can 'dumb things down' just enough without compromising on accuracy. And if I do want to talk about more advanced programming topics, I don't address laymen. Because they're laymen.

Popper's and Feynman's books are great examples of how to speak to laymen on complex issues without compromising on quality.

[T]he academic pay-wall limits what papers he is able to read in his spare time.

Not sure how big an obstacle this could present to someone like Weinstein.

Re #630. You wrote:

Eric Weinstein has stated he considers those in the Portal community to be his friends and not fans.

Friends can be fans. And not all of the hundreds of thousands of views and listens he gets online are from the Portal community. Even if all of the people in the Portal community were smart enough to parse his statements, most others in the general populace aren't.

Public intellectuals shouldn't rely on others to parse their statements for them. They are responsible for making themselves intelligible. Consider this quote by Ayn Rand:

In public speeches and print, [the argument from intimidation] flourishes in the form of long, involved, elaborate structures of unintelligible verbiage, which convey nothing clearly except a moral threat. (“Only the primitive-minded can fail to realize that clarity is oversimplification.”)

Ayn Rand. The Virtue of Selfishness: A New Concept of Egoism. Chapter ‘The Argument from Intimidation.’ Apple Books.

You made that same moral threat when you accused me of being on the intellectual level of a toddler ("I am sure you can find a Cocomelon video which is more your speed."). That kind of threat doesn't impress me, but it does many others, and it's exactly the kind of tactic Weinstein and his fans, including you, evidently rely upon to spread his ideas.

Re #631. You wrote:

[Weinstein] has since explained he [put the footnote describing his paper as a work of entertainment] to ensure academic vampires would not try to suck all the blood out of it, by shunning academic peer review and copyrighting his work so that he maintains control over its progress.

Not a lawyer but I'm not sure someone else could copyright text for him. Or maybe I don't know enough about academia. Regardless, the goal you mention is compatible with intentional obscurantism.

I have redacted the remainder of #631 because if you're going to make claims that potentially harm people's reputations, you'd better provide a source for each claim.

Re #632. You wrote:

So, you complain about not understanding his answer after omitting 122 words out of 161. That’s 75% of his answer. Gone. No wonder you can’t make sense out of it.

I've read the added context and I think it does little to aid in understanding him. It also makes a new point, which only adds to the complexity of what he's saying.

It's been a while since I quoted that passage so I don't remember what was going through my mind at the time but I'm a conscientious quoter. I don't leave out stuff to misrepresent people.

Her comment that her cat isn't touching her clothes is false, though: she said her cat kneads the air when she picks it up, so her cat is still touching her clothes, or at least her skin, which is also soft (if not with its paws then with other parts of its body).

She thought she disproved my point because the cat kneads the air, ie its paws aren't touching her clothes.

The only point that potentially disproved mine was that the cat 'kneads' hardwood and tiles – assuming it does so when not touching anything soft with any part of its body.

sopheannn wrote:

the cat isn’t touching my clothes? bro has never owned a cat obviously. they’ve done it on hardwood and tile before too

This isn't bad. Like, it still doesn't make sense for cats to 'knead' things that can't be kneaded, but what she suggests may refute my original claim that soft materials trigger kneading because they're reminiscent of a mother's belly.

Or maybe her cat is particularly buggy. In any case, something like my explanation will be true – there's some automatic trigger of kneading that cats execute uncritically, ie like robots.

The reason the original video reached for a humanizing explanation along the lines of cats feeling “safe” or “content” is that the creators don't consider that cats are robots. Uncritically kneading things that can't be kneaded (air, hardwood, tile) is still robotic behavior.
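To make the 'robotic' claim concrete, here's a minimal sketch of the kind of inborn trigger I have in mind – purely illustrative, with made-up names, not a claim about actual cat biology:

    // A hard-coded stimulus-response rule, executed uncritically.
    type Surface = 'soft' | 'hardwood' | 'tile' | 'air';

    function maybeKnead(surface: Surface, pawsInPosition: boolean): void {
      // The trigger presumably evolved for soft, belly-like surfaces, but an
      // overly broad check fires whenever the paws are in position -- hence
      // 'kneading' air, hardwood, and tile.
      if (pawsInPosition) {
        knead(surface);
      }
    }

    function knead(surface: Surface): void {
      console.log(`kneading ${surface}...`); // same pattern every time
    }

    maybeKnead('air', true); // misfires: nothing here can be kneaded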

It just occurred to me that crypto-fallibilists are like 'vegans' who eat meat once in a while but then lie to themselves and still think they're vegans.

It's fine to try being vegan and fail at it. But don't lie to yourself about your failure just so you can keep that unearned title. Maybe try being vegan on Mondays only, and once you're pretty good at that, add Fridays, and so on. But as a matter of simple logic, you're not a vegan unless you don't consume any animal products, on any day of the week.

It's the same with fallibilism, only harder, because changing one's ways of thinking is harder than changing one's diet. You're not a fallibilist if you won't consider that someone else could be right and you wrong even once. It then takes time to earn the title of 'fallibilist' again. You don't lose it forever over a single mistake, or even a dozen mistakes, but you do have to try anew after each mistake.

But it's not about titles, that's surface-level stuff. It's about logic. A vegan is someone who never eats animal products. A fallibilist is someone who is always willing to consider that he could be wrong.

Like veganism, fallibilism, by definition, is indivisible and can't make room for any compromises. This indivisibility leaves no room for lies or evasions, and I guess that's why some of those cryptos who want the unearned title of 'fallibilist' deride my stance as 'purity testing'.

You figured right; let's conclude the discussion with an impasse due to insufficient interest on both sides (though for different reasons).

#620 · on an earlier version (v1) of post ‘Choosing between Theories’

Why have you stopped discussing?

#618 · on an earlier version (v1) of post ‘Choosing between Theories’

[Dennis:] First, please explain when consciousness must have an effect on information processing and when it can’t. Otherwise it’s too hand wavy to work as a counter refutation.

[Kieren:] My theory doesn’t answer this question yet. Does your theory?

I don't think it matters cuz I'm not currently trying to counter refute my own refutation :)

Again, if you have an example of an explanation that says something that conflicts with my theory of consciousness, then provide it.

My original refutation?

[Dennis:] Third, you say that “[c]onsciousness is created as a result of information processing.” All information processing?

[Kieren:] The deepest I have conjectured is that it is the process of self referential modeling.

Which calculators don't have, right? In which case calculators aren't conscious after all.

Which refutation are you referring to here? And how does it conflict?

#599. It conflicts because my refutation showed, by invoking modus tollens etc., that calculators are not conscious.

#613 · on an earlier version (v1) of post ‘Choosing between Theories’

I'm not saying he was. James Taggart wasn't obligated to agree to a contract, nor is Deutsch obligated to write a textbook on quantum physics. That's not the point.

PS: Thinking more about this, overall, I can see that your claim that 'calculators are conscious after all; our best explanations of them just didn't need to mention consciousness' conflicts with my refutation, but why should we break symmetry in favor of the former and not the latter? It seems to me that it's not really a counter refutation unless it explains that, too.

#609 · on an earlier version (v1) of post ‘Choosing between Theories’

Consciousness is created as a result of information processing. It does not necessarily have any effect on the information processing.

First, please explain when consciousness must have an effect on information processing and when it can't. Otherwise it's too hand wavy to work as a counter refutation. (And even then such an explanation is only necessary but maybe not sufficient, I'll have to think more about it.)

Second, you seem to be saying that the only reason consciousness would figure into our best explanations of calculators is if consciousness had an effect on information processing. Why must that be the case? Why couldn't consciousness figure into our best explanations of calculators for other reasons, despite not having any effect?

Third, you say that "[c]onsciousness is created as a result of information processing." All information processing?

#608 · on an earlier version (v1) of post ‘Choosing between Theories’

The only explanation about animals that would change would be the ones that have something to do with their consciousness (if with [sic] had any at all). E.g. the horse bucks and neighs when it experiences pain. Existing explanations about how the animals [sic] muscles, organs, and neurons function would not change. Likewise for the calculator, only those explanations that have something to do with calculator consciousness would change.

Of course, if you divvy them up into those explanations that have something to do with consciousness and those that do not, then only some of them are going to change. But for animals/calculators as a whole, the explanations would change. (Imagine how much our explanations of humans would change if we learned that humans are not conscious! We wouldn't then say 'but explanations of our muscles remained the same'.) In a related context, you wanted me to "consider all of our current best theories, not a subset", ie have a holistic picture – now you want me to consider only subsets of theories about calculators.

None of these explanations [about calculators] conflict with my theory of consciousness, because they don’t say anything about consciousness.

The fact that none of our explanations of calculators say anything about consciousness is the very reason you should think they're not conscious. And again, the four concepts from our background knowledge taken together do show that calculators really aren't conscious, and then they do conflict with your theory of consciousness because it predicts that calculators are conscious.

Note that this is not an explanation about calculator consciousness, but instead a general theory of consciousness that implies that calculators may be conscious.

Again, I suggest phrasing things in more absolute terms such as 'must', 'cannot' etc. If calculators may be conscious it's too easy to evade criticism. If you're going to constrain it, specify under which conditions calculators must be conscious and why and under which conditions they cannot be conscious and why not.

But yes, I believe I understand your point here: you're saying our explanations of the operation of calculators, their hardware, and so on would not change. However, I do think that, if we had a working theory of consciousness that implied that even calculators are conscious, people would get busy trying to understand what it is about calculators that makes them conscious and then amend our explanations of calculators as a whole accordingly, at the very least by adding an implicit reference to such a working theory of consciousness.

Can you provide me with an existing explanation of calculators that conflicts (would need changing) with the implication that calculators might be conscious? The only explanation I can think of is the theory of consciousness (the thing under question).

I think any explanation of calculators as a whole would need to be amended to include how their functionality gives rise to consciousness, at least by implicitly referencing your theory of consciousness. I don't know in detail what our current explanation of calculators looks like – I don't manufacture calculators; I would just refer to higher level concepts such as basic arithmetic on a programming level – but I do know that that explanation doesn't currently speak of consciousness, or else it would be commonly thought that calculators are conscious. I also don't know in detail what the amendment would look like since I don't know how consciousness works. Of course, tautologically, the sub-explanations that don't have to do with consciousness aren't going to need to change, not even by implicit reference. (Note that this is another reason I don't find neuroscience promising when it comes to the brain; presumably, explanations of the brain's hardware wouldn't change.)

Please provide a counter-refutation to my refutation. Otherwise, I think it's likely that we're going to reach an impasse. If you're not sure yet how to refute it, asking questions about it or steelmanning it could be a good way forward.

#606 · on an earlier version (v1) of post ‘Choosing between Theories’

PS: Re when you wrote:

An explanation is not refuted at the moment it is conjectured simply because it has implications that are not implied by any existing explanations.

To be clear, it's not just that existing explanations do not imply calculator consciousness (although that would be enough of a challenge) – together, the four pieces of background knowledge I've referenced rule out calculator consciousness.

#604 · on an earlier version (v1) of post ‘Choosing between Theories’

An explanation is not refuted at the moment it is conjectured simply because it has implications that are not implied by any existing explanations.

Agreed. That just means there's a conflict; an opportunity to break symmetry.

Imagine scientists first conjecturing theories involving charged particles to explain aspects of electricity. Surely you don’t think these theories are refuted because charged particles don’t figure into any of our existing explanations?

Correct, I do not. But, per Popper, new theories should explain, at least implicitly, why their predecessors are wrong. Which is what I've suggested as one of the four attack vectors: you could explain why our explanations of calculators are wrong; why they don't imply the absence of consciousness (which you seem to attempt below anyway in your remark about calculator operations). That way, you would break symmetry in favor of the prediction that calculators are (or at least might be) conscious, and then modus tollens doesn't rule out anymore that non-creative algorithms create consciousness.

If it were true that the algorithmic processing occurring within a calculator was sufficient to create some kind of conscious experience, why would it be necessary for such a consciousness to have any effect on the operation of the calculator?

If you're repeating that our explanations (not just operations) of calculators need not change if calculators are conscious, then I repeat that you then also shouldn't think our explanations of animals should change if they are (or aren't) conscious. But it seems that you do want them to change in the case of animals. And you also want them to change in the case of the human brain (where you don't restrict yourself just to operations but want neuroscience to explain how those operations result in consciousness, and such an explanation would then form part of the explanation of the brain). So, if only for consistency – but also in an attempt to understand reality – you should want them to change when it comes to calculators, too.

It sounds like you have a somewhat instrumentalist view of explanations (when it suits you), which leads you to reduce explanations of calculators to a description of their operations only. But that isn't a valid way around my application of the modus tollens.

#603 · on an earlier version (v1) of post ‘Choosing between Theories’

[W]e both have our own best explanations of consciousness to consider.

Why consider a refuted explanation?

The quote from BoI chapter 1 goes:

[A] particular thing is real if and only if it figures in our best explanation of something.

Note the singular "explanation". And a refuted explanation can't be good anymore – the following quote (also from chapter 1) is about science in particular but you can easily imagine how it applies to symmetry breaking in general:

When a formerly good explanation has been falsified by new observations, it is no longer a good explanation, because the problem has expanded to include those observations. Thus the standard scientific methodology of dropping theories when refuted by experiment is implied by the requirement for good explanations.

With that said, back to your comment:

Basically I’m saying that the criterion requires that we consider all of our current best theories, not a subset, which is what I now see you doing.

Our current best theories do not include refuted ones. So I don't think my application of the criterion is incorrect.

A refuted theory does not deserve consideration until the refutation is counter-refuted. Four potential targets to attack my refutation are:

  1. Explanations of calculators
  2. Explanations of NPCs
  3. The criterion of reality
  4. Modus tollens

If you refute any one of them, my refutation is invalid, and then your theory regains the status 'non-refuted' and can thus be reconsidered.

With that in mind, when you wrote a bit further up...

If my theory of consciousness infers calculator consciousness, then as per the criterion, it exists (for me), and this happens without conflicting with background knowledge.

...even if non-refuted, that theory conflicts with explanations of calculators, which presumably form part of your background knowledge (unless you successfully refute them as per point 1 in the above list). If they do, you'd first want to break symmetry there.


By the way, a better way to state the criterion, ie in a non-justificationist way, IMO, is to simply say 'something is real if and only if it figures in a non-refuted explanation of something' – that phrasing also happens to leave room for multiple non-refuted explanations.

#601 · on an earlier version (v1) of post ‘Choosing between Theories’

The Popperian approach is to assume that a conjecture is true if it is the best conjecture we have.

No. That sounds like a justificationist perversion of Popperian epistemology because it would involve 'weighing' conjectures somehow based on how 'good' they are. DD explains in BoI ch. 13 why that's a bad idea. (Ironically – and, IIRC, Elliot points this out somewhere – that means DD himself is a justificationist since he wants to weigh and choose explanations based on how "good" ("hard to vary") they are, as opposed to choosing based on whether they are refuted vs. non-refuted, which I believe would be Elliot's approach, ie the binary approach, which I am advocating.)

If we have N equally plausible explanations then we need to break symmetry. You cannot break symmetry by assuming one of the N explanations is true, and deriving consequences from it to refute the other explanations. The choice of explanation would be arbitrary then.

This I agree with, and I see now why you thought my argument was circular. Some other explanation is needed: either from our background knowledge or a new one. Either way it can't be one of the conflicting theories. But I'm not choosing one of those (see below). Note also that Popperian epistemology gives us a process of elimination that leaves us, ideally, with one non-refuted conjecture, which need not be the 'best' (depending on how you weigh).

I see this happening in our current discussion as follows.
D: ‘Creative algorithms cause consciousness’ is the best explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPC’s are conscious, which we know they are not because we tentatively assume that only creative algorithms can cause consciousness.

That isn't my argument; I think there's been a misunderstanding. Here's how I'd change your description of our discussion:

D: ‘Creative algorithms cause consciousness’ is the ~~best~~ only explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPC’s are conscious, which we know they are not because ~~we tentatively assume that only creative algorithms can cause consciousness~~ our best explanations of calculators and NPCs do not invoke consciousness, so, per DD's criterion of reality, they really aren't conscious.

I invoke these explanations to show that the claim that "'[n]on-creative algorithms cause consciousness' is a plausible alternative" must be false, by modus tollens, since that claim makes a prediction about calculators that isn't true. Hence the claim is eliminated, I think, while DD's claim – that creativity causes consciousness – is still standing.

As you can see, the circularity you spoke of is not there as I do not reference the theory under question as a symmetry breaker. Instead, I refer to our best explanations of calculators and NPCs as well as the criterion of reality and the modus tollens, all four of which form part of my background knowledge.

I still think I can proceed within a Popperian framework [...].

Maybe I'm talking out of my ass here, but I don't think you understand it well enough (see the beginning of this comment). You'd be proceeding with what you think is a Popperian 'framework' but is actually justificationism in Popperian clothing.

I’d like to honor your request to avoid the epistemology discussion (though you've already continued it with your comments around how not to break symmetry), but I don't currently see how to avoid it. Perhaps a way forward is a discussion around the criterion of reality in particular and understanding better our apparent disagreement around that criterion? If you have other ideas, I'm open to them, too.

#599 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #613

Therefore, for us to know that calculator consciousness is not real, we would have to know that it does not figure into any of our best explanations.

For clarity, I think there are two possibilities: that calculators are conscious either follows from our best explanation of consciousness, or it follows from our best explanation of calculators.

[I]f you want to point at a lack of calculator/NPC consciousness to refute my alternate theories’ claims about the connection between information processing and consciousness, then you are doing so by assuming your theory of consciousness is true. This is circular because you are assuming your theory is true whilst it is under question.

This is the standard Popperian approach: we assume, tentatively, that a conjecture is true until it is refuted, even if that conjecture is currently "under question". I don't see how that leads to circularity. Since all our conjectures are always tentative in this way, they're always "under question"/open to revision anyway.

If this doesn’t help us to get to a resolution then I will reply to each of your previous responses also.

Sure.

An alternate, if lesser, resolution is that we simply have different epistemologies; that it's going to be difficult for us to come to a resolution on the question of animal consciousness until we resolve the epistemological difference. That's not surprising since the question of animal consciousness is directly influenced by epistemological considerations. But we still got to understand each other's (and our own) viewpoints better, which, as Popper would say, is more than enough.

#597 · on an earlier version (v1) of post ‘Choosing between Theories’

I don’t see why the word ‘consciousness’ would appear in the explanation.

Presumably for the same reason you want the word 'consciousness' to appear in the explanation for how animals work.

Calculators are math machines, animals are gene-spreading machines (per Dawkins). You ask whether you'd be able to calculate your taxes better if consciousness figured in our best explanations of how calculators work, but you don't ask whether animals would be able to spread their genes better if consciousness figured in our best explanations of how they work. And yet, presumably, your answer to the latter question would be 'yes' whereas your implied answer to the former is 'no'. How does that fit together?

If given the knowledge that calculators are conscious, I think our explanation of how consciousness works would change, not our explanation of how calculators work.

My guess is they'd both change, but at least our best explanations of calculators would have a big unknown ('why are they conscious?'), and that unknown would form at least an implicit part of such explanations. That would be an improvement at least in the sense that there'd be a pointer toward an open problem and more progress.

As per my best understanding of consciousness, it appears it can exist without leaving a trace (since the experience is private).

If it doesn't leave a trace, that means even the brain's hardware remains unchanged. So what good is neuroscience?

Your statement is vague; it leaves room for evasions when you encounter criticism. I think it would be better to phrase it in decisive, more attackable terms, such as 'consciousness can never leave a trace' or at least 'consciousness only leaves a trace when...'.

The statement 'consciousness can never leave a trace', for example, sounds false because if someone experiences pain, say, they usually want to fix that, and then do stuff that helps fix it (eg move out of an uncomfortable position into a more comfortable one). At which point there's a trace even though the experience is totally private.

Otherwise it's like saying, in OOP terms: private methods on a class never cause any side effects. Which isn't true.
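To make that concrete, here's a minimal sketch (invented names, not from any real codebase). The method is private, so no outside caller can invoke it, yet it still causes an externally observable side effect:

    class ReportBuilder {
      private lines: string[] = [];

      add(line: string): void {
        this.lines.push(line);
        this.persist(); // public method delegates to a private one...
      }

      private persist(): void {
        // ...which writes to the console -- a side effect visible far
        // beyond the class boundary, despite the 'private' keyword.
        console.log(this.lines.join('\n'));
      }
    }

    new ReportBuilder().add('total: 42'); // prints even though persist() is private

'Private' restricts who may call a method, not what effects it may have.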

If our best explanation of consciousness does not infer calculator consciousness, then as per DD’s criterion we can know that it does not exist.

Now it sounds like you've adopted (and applied) the criterion?!

The word “mindlessly” here is doing all of the work, but whether all inborn algorithms are mindless (without consciousness) is what I am asking you to substantiate at the moment.

When something may as well have been done mindlessly, it cannot be evidence of consciousness. So I don't need to substantiate. We need some behavior which must have been the result of consciousness. You disagree that consciousness necessarily has any behavioral impact, but that makes things more difficult for you because then you can't point at any animal behavior and say that must have been the result of consciousness. In which case animals may as well not be conscious. Or anything at all may as well be conscious, including rocks, planets, and so on.

If I came to know that the NPC was conscious, my best explanations of how the NPC works would not change.

How could you come to know that if not through explanations?

#595 · on an earlier version (v1) of post ‘Choosing between Theories’

Looks like I made a mistake about USPS mailmen being parasites. Apparently, USPS is not financed by taxes, not even partially.

I'd guess there are still problems with using USPS over something like FedEx or UPS, but it wasn't right to consider mailmen parasites.

[W]hether our best explanations of how calculators work would change depends on whether the consciousness actually had an effect on the operation of the calculator.

No. Just the introduction of the word 'consciousness', effectual or not, into our explanations of calculators would be a change. In which case the use of DD's criterion of reality would be appropriate after all, thereby negating #589.

I was thinking of more complex modeling.

Why should sufficient complexity give rise to consciousness?

I imagine it as the kind of modeling that is involved in self referential awareness. Modeling of the world around us and our self within that world.

Again, video-game NPCs do this stuff all the time and our best explanations of them do not invoke consciousness. So they're not conscious. (I know I'm repeating myself but more on the criterion of reality below.)

Modeling of the model itself [...].

And presumably of the modeling of the modeling of the modeling...? Sounds like an infinite regress. If it isn't, how many levels are required for consciousness?

For the reasons I’ve explained, I don’t think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness.

What are these reasons that you refer to?

Calculators, NPCs, criterion of reality.

It seems to me we have two disagreements, each on a different level. On a basic level, it seems to me we need to break symmetry between the claims 'sufficiently complex modeling of oneself and one's surroundings gives rise to consciousness' and 'creativity gives rise to consciousness'. On a more general level, we have an epistemological disagreement re the criterion of reality and whether its use is appropriate in this context. (I think I have shown at the beginning of this comment that it is.)

Do you think that's an accurate summary of the disagreement? It seems to me that, to break symmetry between the two claims, it would be helpful to find a resolution re the criterion of reality first (cuz if we don't have some criterion for what's real that we are willing to follow without exception we can always ignore criticism as invoking something that isn't real).

#592 · on an earlier version (v1) of post ‘Choosing between Theories’

[E]ven if we did come to know that calculators had accompanying consciousness, our best explanations of how calculators work wouldn’t change.

How could they not? Discovering that calculators are conscious would be remarkable. The fact that our explanations fail to predict the consciousness of calculators would be a problem we'd want to solve. We'd want to know how it is that calculators are conscious and update our explanations of them accordingly.

Sorry, by modelling an inner world, I mean modelling one’s own body and the processes within it (such as what is happening when I touch something hot).

Why should that require or give rise to consciousness? Aren't you just describing homeostasis? The simplest of organisms have homeostasis – organisms which you presumably do not think are conscious.

[E]ven if neural correlates don’t add credence to the theory they also do not rule out the theory. Therefore I still see it as a plausible alternate theory of consciousness.

It is, of course, true that certain neural states give rise to consciousness, but the reason neuronal correlates, or any other explanations relying on the brain, are ruled out as fundamental is computational universality: computers not made of neurons can also be conscious if programmed correctly. Therefore, such explanations can at best be parochially true. Neurons do somehow give rise to consciousness, but the fact that they're neurons is incidental. It's the program that matters.

Discovering that certain operations in a brain can cause a particular conscious experience would be like discovering that incrementing a register in a cpu moves the cursor along in the word processor.

Consider this alternative explanation for why the cursor moves along: because the user pressed the right-arrow key, and the program is configured to move the cursor to the right anytime that happens. While your explanation on the low, CPU level is the kind of explanation that may well be technically correct, I think mine is not just correct but also operates on a more appropriate level of emergence. This becomes important once we entertain other kinds of computers that don't have a von Neumann architecture (which, it seems to me, the brain does not!). We also lose an understanding of causality when we go too low: it's not really the register in the CPU that moves the cursor along, it's the program. Recall DD's analysis in BoI ch. 5 of Hofstadter's program that instructs certain dominos to fall or not to fall.
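As a hedged sketch of what the higher-level explanation looks like (all names invented; no real word processor works exactly like this):

    interface EditorState {
      text: string;
      cursor: number;
    }

    // The causally relevant story: right-arrow key in, configured cursor
    // movement out. Which CPU register gets incremented along the way is
    // incidental -- the same program ported to another architecture moves
    // the cursor for the same reason.
    function handleKey(state: EditorState, key: string): EditorState {
      if (key === 'ArrowRight' && state.cursor < state.text.length) {
        return { ...state, cursor: state.cursor + 1 };
      }
      return state;
    }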

I'm guessing you have read BoI ch. 5. Do you have refutations of it? Or of the CBC interview with DD I linked to?

As I understand it your argument is as follows.

1) Consciousness results from the execution of knowledge (algorithms).

Only of certain, special algorithms – and we don't yet know what distinguishes them from conventional ones (presumably the distinguishing factor is creativity and/or ephemeral properties).

2) Consciousness doesn’t figure into our best explanations of the execution of pre-existing knowledge.

For conventional algorithms, I agree.

3) Therefore, as per DD’s criterion of reality, execution of pre-existing knowledge is mindless.
4) Therefore, the only remaining algorithms for causing consciousness are those that involve creating new knowledge (creativity).

Once we rule out the destruction of knowledge, yes.

I don’t see why premise (2) can’t be - Consciousness doesn’t figure into our best explanations of creative algorithms. This results in the opposite conclusions.

I think it can't be "Consciousness doesn’t figure into our best explanations of creative algorithms" because consciousness has to live in one of 1) creative or 2) non-creative algorithms. For the reasons I've explained, I don't think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness. Unless we're both wrong that consciousness is real and it lives in neither category!

#590 · on an earlier version (v1) of post ‘Choosing between Theories’

It might be that all kinds of information processing results [sic] in conscious experience. I have reasons against the idea, but I can’t rule it out. Can you?

Yes. As I said at the beginning, our best explanations of how calculators work don't refer to consciousness. So whatever information processing they do does not, to our current best understanding, result in consciousness.

This is, again, an application of DD's criterion of reality. You don't have a refutation of it, yet you don't want to apply it, which then leads to situations where you "can't rule either option out".

We could be wrong, of course. And one day we may realize that. But until then, we have to take our best existing explanations seriously. I think the underlying issue is that you don't think something is knowledge unless it is certain.

The theory should then be that modeling of both the external world and our internal world causes consciousness.

Having an "internal world", including thoughts and particularly feelings, arguably presupposes consciousness. In which case your argument sounds circular.

Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status ‘true’ to the theory remains unchanged.

Why is it so hard for you to quote me properly? The first sentence makes it sound like I forgot a word at the beginning but I didn't. The proper way to quote me would have been to write '[T]he Popperian view...' and so on.

The Popperian view also says that we should prefer theories with higher corroboration.

I wrote in #109 that "Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. [...] I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important."

Assume we don’t already understand how computers work, and that our starting point was the software.

Wouldn't it be more analogous, from your POV, to say that we don't understand how the software works, and that our starting point is the hardware? Cuz that's what neuroscientists are doing.

In any case, it seems to me that, in popular culture, we understand more about the brain as hardware than about the mind as software. But, contrary to what I think you're suggesting, I came up with the neo-Darwinian theory of the mind I have mentioned previously. And I did so without studying the brain ~at all, simply by making guesses about the mind and criticizing those guesses. Even though this theory is by no means complete, it has not been refuted and has been very fruitful, and it has enabled me to solve other, related problems I did not anticipate (which is a good sign!).

I’m saying a subset of algorithmic processes in the brain (whatever they may be) cause consciousness, as opposed to creative processes in the brain (whatever they may be). I don’t see how the former is ruled out as improbable.

I see – I should have placed emphasis, in my mind, on when you wrote "algorithmic" in what you had written previously; I had missed that.

I think you'd want to rule either one out as false, not as improbable. I rule out that algorithmic processes (with one exception, see below) could lead to consciousness because the mere, mindless execution of pre-existing knowledge (which is represented by those algorithms) precludes consciousness (or else it wouldn't be mindless). The destruction of knowledge can just be done mindlessly, too. So the only option that's left is the creation of knowledge. Which brings us back to creativity.

To be clear, whatever program gives rise to consciousness must itself be executable mindlessly, too (or else it wouldn't give rise to but depend on consciousness). So there is one exception, and to that extent we're in agreement. But there's something different about that program – something our current best explanations of information processing don't take into account yet.

To tackle this problem, the most promising approach to consciousness that I am aware of is the study of ephemeral properties of computer programs. Can you think of any such properties? I have found that to be surprisingly difficult!


I want to clarify for others reading this discussion what I mean by 'algorithmic'. Whatever software gives rise to consciousness is still an 'algorithm' in the sense that a Turing machine could run it. By 'algorithmic' I instead mean something that doesn't require reflection, introspection, knowledge creation, wonder – that kind of thing. Just something that can be done mindlessly. 'Robotic' is another word for it.

#588 · on an earlier version (v1) of post ‘Choosing between Theories’

[C]onscious experience may just be along for the ride, a byproduct of the information processing [...].

That's basically been Deutsch's and my claim all along – where you and I seem to disagree is whether all information processing results in consciousness or just some (and, in the latter case, which kinds). You had previously argued that all kinds might – now you're saying maybe only one does. Which is it?

[P]erhaps the impact of the consciousness is so minimal that it goes unnoticed.

Surely not. Your consciousness has causal power, does it not? It's at least causing you to write comments on this blog.

The reason I think modelling of the external world is important for consciousness is because the things most vividly present in my awareness are the sorts of things that I imagine my brain is keeping a mental model of (objects, thoughts, and feelings).

You just switched from "modelling of the external world" to the much more general "mental model". Thoughts and feelings aren't part of a model of the world around you. Also, consider whether a human brain in a vat would still be conscious. It couldn't do any modeling of the external world, but I think it would still be conscious. Don't you?

Another type of algorithmic processing that I think is a plausible cause of consciousness is the process of integrating a number of other brain processes together. This seems plausible since it is supported by studied neural correlations.

I forget who said this and the exact wording, but at most such correlations could corroborate the view that psychophysical parallelism is indeed very parallel. More generally – and we're getting back to core epistemological disagreements here – the Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status 'true' to the theory remains unchanged.

I think neuroscience is generally a bad approach to the question of how consciousness works because neuroscience operates on the wrong level of emergence. The level is too low. You wouldn't study computer hardware to understand how a word processor works. We need explanations on the appropriate level of emergence. I doubt colorful pictures of the brain can help us here; I'd disregard the brain and focus on the mind. Consciousness is an epistemological subject, not a neuroscientific one. Neuroscience has also led to such nonsense as this and this. It surely has value when it comes to understanding the brain's hardware, including medical use cases, but when it comes to the mind I think it's severely limited.

My theory [is] just that some number of [...] algorithmic processes (maybe one) are causing consciousness.

Translation: something in the brain causes consciousness. Clearly. How does that tell us anything new?

#586 · on an earlier version (v1) of post ‘Choosing between Theories’

I think the answer to my question is 'no, the explanation of the source code for NPCs and Roombas does not refer to consciousness'. Note also that people have been able to program such NPCs and Roombas without first having to know how consciousness works. It's possible programmers accidentally made them conscious, but that would lead to unintended behavior in the NPCs. Programmers would seek to understand and probably get rid of this behavior as they demand absolute obedience. Also, usually, explanations come before major discoveries.
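To illustrate with a toy sketch – invented for this comment, not taken from any actual game – everything such an NPC does is fully explained in terms of state, distances, and dialog lines:

    interface Npc {
      x: number;
      y: number;
      questOffered: boolean;
    }

    // Runs every frame. A complete explanation of this code needs geometry
    // and state -- at no point does 'consciousness' earn a mention.
    function updateNpc(npc: Npc, playerX: number, playerY: number): Npc {
      const distance = Math.hypot(playerX - npc.x, playerY - npc.y);
      if (distance < 5 && !npc.questOffered) {
        console.log('Fetch me ten wolf pelts!'); // same side quest every time
        return { ...npc, questOffered: true };
      }
      if (distance >= 5) {
        return { ...npc, questOffered: false }; // reset when the player leaves
      }
      return npc;
    }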

If (1) the NPC is performing a similar kind of modelling as the human brain, and (2) it is this kind of modelling which produces consciousness, then the NPC would be conscious.

Doesn't that just amount to saying: 'There's some algorithm in the brain that makes it conscious, and if an NPC runs the same algorithm, it's also conscious'?

I find that easy to agree with, but you haven't explained why that algorithm should involve modeling the external world. In #581, you wrote you "find it plausible that the brains [sic] modelling of the external world could be an important part of of [sic] this." But why?

Better yet, see if you can explain why whatever algorithm produces consciousness must have to do with modeling the external world, ie cannot be anything else. Without using 'induction'. That would be convincing.

#584 · on an earlier version (v1) of post ‘Choosing between Theories’

Your answer is littered with inductivism and the strength of your beliefs. I wasn't asking how likely your theories are, how strongly you believe in them, or anything else about your psychology. I was asking whether, in objective reality, roombas and video-game NPCs are conscious. They either are or they aren't.

If you looked at the source code of a video-game NPC, would your explanation of how the code works refer to consciousness?

#582 · on an earlier version (v1) of post ‘Choosing between Theories’

[...] I think it plausible that the brain’s modeling of an external world is what gives rise to our concious [sic] inner world.

That's a common claim, let's look into it. Roombas also model the external world, as do many NPCs in video games. Are they conscious?

#580 · on an earlier version (v1) of post ‘Choosing between Theories’

I don't hate anybody, and neither should you.

If you're asking how people can learn to enjoy being part of a Socratic dialog, I refer you to what I wrote about slowly exposing oneself to criticism, not seeking to evangelize, and having modest expectations.

Use your real name if you want to discuss further.

#579 · on an earlier version (v1) of post ‘Crypto-Fallibilism’

I thought by this quote you’re the one claiming the situations are different.

Ah, I see what you mean – the difference I had in mind is that, before the war, Zelensky wasn't using conscription (because he didn't have to), but now the West is helping him do that. There is a new initiation of force against his subjects.

That's different from North Korea, where I understand the entire population is already in a kind of perpetual servitude (I'm not counting things like taxes here, which apply in Ukraine, too) and if you want to help you have no choice but to also help the slave owner (Kim Jong-un).

For example, if they had cared, the West could have told Zelensky, 'we'll deliver weapons on condition that you don't use conscription'. They did not. But it's not unusual for countries to help each other out on a conditional basis. For example, Germany doesn't extradite criminals to the US when there's reason to suspect that the US will use the death penalty, if my memory serves me right. That's because Germany thinks the death penalty is a human-rights violation.

The problem I see in your argument is that by that criteria nothing less than a perfect society is worth fighting for.

That can't be so, if only for the reason that there can never be a perfect society since we can always improve. To that end, we can and should acknowledge the mistakes the West makes while also acknowledging that in some respects it is better than the rest by degree (eg in terms of how much coercion it employs against its citizens), and in some other respects it is better in principle (eg by meeting Popper's criterion of democracy – though I should say that that criterion leaves some things to be desired, which I have written about here).

Of course, all that being said, Ukraine isn't part of the West anyway, even though suddenly it's somehow the West's best friend.

Why the slight preference for Ukraine?

Because it's not the aggressor and, from the little I know, it seems like a slightly less shitty country than Russia.

I wonder, if Russia had instead invaded, say, Mongolia, would the West have cared just as much?

Also it’s a bit slow and tedious to argue like this. If you’re up for it we could do a video call or something where it’s easier to get to the bottom of disagreements.

Maybe. As Elliot Temple taught me, discussing in writing has many advantages over voice. But if we record and share it, I'm open to it.

I meant that from what I saw I think most Ukrainians are in active support of the defensive war effort.

"from what I saw I think" – that isn't enough. All we know is there are some Ukrainians who support the defensive war effort, and some that don't. And even of those that do, we don't know whether they wish to participate personally. My guess is very few Ukrainians wish to be conscripted, certainly less than half (if for no reason other than that ~no woman will want to be conscripted).

Regardless, I repeat again that a single Ukrainian being dragged into the meat grinder against his will is an injustice, so it doesn't matter how many other Ukrainians are in support of that. His rights are his against the whole world (paraphrase of Spooner).

I’m just concerned you’re sacrificing any way to make a decision until everything is implemented according to the best current theories or theory.

No. As I've said, let those who want to fight, fight, and let those who wish to leave, leave. That's a decision that could be made. Maybe it's difficult to make such a decision while at war; maybe Ukrainian society can't work that way. But maybe a society that enslaves its own people isn't worth fighting for. Maybe individual Ukrainians don't owe anyone a functioning society. Maybe it's ridiculous to burden them with that debt against their will. Do you see how the notion that each Ukrainian is his brother's keeper is still implicit in your argument?

If the mind were to wait until all ideas, implicit and explicit, were perfectly aligned in what to do, it’d never do anything.

One important difference between inter-mind and intra-mind morals is that you only coerce yourself, not necessarily others, when you act while you have a conflicting idea present in your mind.

By the way, being unconflicted is indeed rare, but I wouldn't say it never happens. And one of the main reasons it's rare is the kind of coercion states use against their subjects in the first place; it usually starts in school and the older we get the harder we find becoming unconflicted again.

I think that if you’re going to find someone to put the moral blame on for making people participate in a war they don’t want to be in, the clear culprit is Putin and his government.

He's definitely the aggressor in this scenario. He put Ukrainians in this situation; no disagreement there. But Ukrainian politicians could have decided to actually practice the freedom they lie about fighting for. Then Ukraine would have been the good guys unambiguously. But due to conscription, they've become a greater danger to their own subjects than Putin, don't you think? This is true even in the US: the Libertarian Party sometimes tweets about how the American president and the bureaucracy below him present a greater danger to American citizens than, say, Russia or China. Your own politicians are usually more likely to harm you than foreign ones.

[Putin] is also partly responsible for the war crimes Ukrainians commit against Russians.

Maybe, but I don't think the victim of aggression gets to use unlimited retaliation, nor does he get to be an aggressor (through conscription) in turn. Being the victim of aggression isn't carte blanche – retaliation has to be reasonable.

I’m interested if you think there was ever a time in the evolution of Western culture (including pre-Enlightenment) where it was just the case that the society couldn’t be stabilized, and would thus destroy itself, if it didn’t use some coercion.

Since any society is going to have to be able to use defensive coercion, I'm guessing you're asking about aggressive coercion in particular. I've thought about this before but so far I don't know the answer. If it is true that some minimum of aggressive coercion is required to make primitive societies work, we should still work hard to get away from that as soon as possible. In any case, I don't share the homo homini lupus view many still seem to have.

I'm guessing you think some aggressive coercion is always necessary?

I’m also interested if you have any preference on who wins the war.

I have a slight preference for Ukraine to win, but meh. Most important to me is that the war doesn't expand to NATO and that no nuclear weapons are used.

As for helping Zelensky enslave people, the same argument could be used for the slave owner. By feeding him you’re helping him enslave.

Yes!

In BoI chapter 17, Deutsch writes:

Static societies eventually fail because their characteristic inability to create knowledge rapidly must eventually turn some problem into a catastrophe.

Deutsch's view is that static societies are ultra dogmatic; they suppress critical thinking as much as possible. Therefore, they cannot adapt; that's why they must ultimately fail.

Popper writes here (on p. 8; bold emphasis added):

From the point of view of biology, dogmatism corresponds to lack of adaptability; and since life demands constant adaptation to a constantly changing environment, dogmatism—and especially the **inflexibility of a society**—leads almost of necessity to extermination. Critical thinking corresponds to adaptability. It is, like adaptability, decisive for survival.

Deutsch gives no credit to Popper for the discovery that societies which lack adaptability will fail. Arguably, this is the central thesis of chapter 17.

As usual, Popper is more nuanced than Deutsch: Popper writes "almost of necessity" where Deutsch writes "must eventually".

h/t to Martin Thaulow for providing the Popper quote.

Epistemology is one big mind-reading exercise – otherwise it couldn't study how thinking works.

PS: Regarding North Korea and helping slaves by helping the slave owners: I don't think that's analogous to the situation in Ukraine, where the West is helping the slave owner (Zelensky et al) enslave his people (conscription) in the first place. Or is it?

Yes, sometimes people are voting to choose the lesser evil.

Usually when people use the phrase "choose the lesser evil", at least in the US, they mean that both candidates suck but they feel they need to vote for one regardless, so they try to determine who sucks less. I don't know if that's what you mean here, but if so, that's not what the Spooner quote is about. It's about not misinterpreting voting as consent, which you seem to do (see below).

[...] I wasn’t even talking about elections here.

But you wrote (emphasis added):

Why consider the Ukrainians victims if they elected [...] the current government?

The implicit claim here, as I understood it, was that at least those who elected the government should be considered to have consented to being conscripted. And I offered the Spooner quote as a refutation of that implicit claim.

That's not to mention those who didn't vote for the current government, and those who weren't old enough to vote at the time but are now old enough to be conscripted and so on.

You also wrote:

I was saying that I think the vast majority of Ukrainians are in active support of the government.

How did you determine that?

In any case, even if true, I preemptively addressed it by pointing out that even a single man being dragged into the meat grinder against his will is an injustice.

As for the slave owner, it would depend on what the alternatives on offer were. If the only way for both the slave and owner to survive is to feed them I think this would still be moral. Something akin to the aid going to North Koreans.

What I mean is that you don't have to choose between better and worse slaveholders. Problems really are soluble! And again, fighting for freedom by using conscription just doesn't make any sense. You can't fight for an ideal by betraying it in the process.

You seem to have an unstated collectivist assumption that each Ukrainian is his brother's keeper – and further, that we are all Ukraine's keepers. Ayn Rand explains the problems with this assumption (in general, obviously not with regard to this particular situation) in chapter 10 of her book The Virtue of Selfishness. People ask 'what will be done about the situation in Ukraine?' and offer, say, conscription as a 'solution', when they should first ask 'should anything be done?'. This is why Rand says the former question is really a "psychological confession[]". I do not tacitly accept the collectivist premise and, as Rand writes, it is not true that "all that remains is a discussion of the means to implement it". First, show me why each Ukrainian is his brother's keeper; then we can discuss implementations such as conscription. In the meantime, nobody will stop you if you want to help Ukrainians.

Back to your comment:

Isn’t thinking one’s already perfect and removing a way of error correction also a kind of lack of knowledge?

I suppose so, but it's 'special' in that it prevents its own correction, whereas most (all?) other mistakes don't have that property.

Well, it sounds to you like that. I don’t know why.

Probably because you're defending politicians who employ coercion through conscription.

I was saying that your argument shares a structure with the socialist one, not that you’re a socialist.

I know – I think the structure in my argument is different from what you think it is.

Why consider the Ukrainians victims if they elected and are in support of the current government?

Lysander Spooner explains here why participating in elections does not indicate support for one's government or constitution. Perhaps the most salient quote is this:

[I]n the case of individuals, their actual voting is not to be taken as proof of consent, even for the time being. On the contrary, it is to be considered that, without his consent having even been asked a man finds himself environed by a government that he cannot resist; a government that forces him to pay money, render service, and forego the exercise of many of his natural rights, under peril of weighty punishments. He sees, too, that other men practice this tyranny over him by the use of the ballot. He sees further, that, if he will but use the ballot himself, he has some chance of relieving himself from this tyranny of others, by subjecting them to his own. In short, he finds himself, without his consent, so situated that, if he use the ballot, he may become a master; if he does not use it, he must become a slave. And he has no other alternative than these two. In self-defence, he attempts the former. His case is analogous to that of a man who has been forced into battle, where he must either kill others, or be killed himself. Because, to save his own life in battle, a man takes the lives of his opponents, it is not to be inferred that the battle is one of his own choosing.

In the case of Ukraine, the battle Spooner speaks of is not just a metaphor. And that's not to mention all the Ukrainians who have not voted once. Regardless, a single man being dragged into war against his will is an injustice.

Democracy – including better democracies such as that of the United States, and worse ones such as that of Ukraine – is still tyranny. It's a tyranny that allows for some amount of error correction, and that makes it objectively and notably better than all other known forms of tyranny, but it's still a form of tyranny.

Ukraine isn’t culturally a part of the West but supporting Ukraine is supporting freedom because there are still differences between the levels of coercion in different societies and also in what they aspire to become (in this case Russia and Ukraine).

Is supporting a slave owner who is nicer to his slaves than other slave owners supporting freedom? Is it logically coherent to fuck for virginity?

You said there’s a difference between lack of knowledge and evil. I’m curious what you think it is.

I'm thinking of Sparta in chapter 10 of David Deutsch's The Beginning of Infinity. That is, evil has to do with thinking one is already perfect; destroying the means of error correction; shielding some ideas against criticism; not considering that one could be wrong about anything.