
Dennis Hackethal’s Comments


[W]e both have our own best explanations of consciousness to consider.

Why consider a refuted explanation?

The quote from BoI chapter 1 goes:

[A] particular thing is real if and only if it figures in our best explanation of something.

Note the singular "explanation". And a refuted explanation can't be good anymore – the following quote (also from chapter 1) is about science in particular but you can easily imagine how it applies to symmetry breaking in general:

When a formerly good explanation has been falsified by new observations, it is no longer a good explanation, because the problem has expanded to include those observations. Thus the standard scientific methodology of dropping theories when refuted by experiment is implied by the requirement for good explanations.

With that said, back to your comment:

Basically I’m saying that the criterion requires that we consider all of our current best theories, not a subset, which is what I now see you doing.

Our current best theories do not include refuted ones. So I don't think my application of the criterion is incorrect.

A refuted theory does not deserve consideration until the refutation is counter-refuted. Four potential targets to attack my refutation are:

  1. Explanations of calculators
  2. Explanations of NPCs
  3. The criterion of reality
  4. Modus tollens

If you refute any one of them, my refutation is invalid, and then your theory regains the status 'non-refuted' and can thus be reconsidered.

With that in mind, when you wrote a bit further up...

If my theory of consciousness infers calculator consciousness, then as per the criterion, it exists (for me), and this happens without conflicting with background knowledge.

...even if non-refuted, that theory conflicts with explanations of calculators, which presumably form part of your background knowledge (unless you successfully refute them as per point 1 in the above list). If they do form part of it, you'd first want to break symmetry there.


By the way, a better way to state the criterion, ie in a non-justificationist way, IMO, is to simply say 'something is real if and only if it figures in a non-refuted explanation of something' – that phrasing also happens to leave room for multiple non-refuted explanations.

#601 · on an earlier version (v1) of post ‘Choosing between Theories’

The Popperian approach is to assume that a conjecture is true if it is the best conjecture we have.

No. That sounds like a justificationist perversion of Popperian epistemology because it would involve 'weighing' conjectures somehow based on how 'good' they are. DD explains in BoI ch. 13 why that's a bad idea. (Ironically – and, IIRC, Elliot points this out somewhere – that means DD himself is a justificationist since he wants to weigh and choose explanations based on how "good" ("hard to vary") they are, as opposed to choosing based on whether they are refuted vs. non-refuted, which I believe would be Elliot's approach, ie the binary approach, which I am advocating.)

If we have N equally plausible explanations then we need to break symmetry. You cannot break symmetry by assuming one of the N explanations is true, and deriving consequences from it to refute the other explanations. The choice of explanation would be arbitrary then.

This I agree with, and I see now why you thought my argument was circular. Some other explanation is needed: either from our background knowledge or a new one. Either way it can't be one of the conflicting theories. But I'm not choosing one of those (see below). Note also that Popperian epistemology gives us a process of elimination that leaves us, ideally, with one non-refuted conjecture, which need not be the 'best' (depending on how you weigh).

I see this happening in our current discussion as follows.
D: ‘Creative algorithms cause consciousness’ is the best explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPC’s are conscious, which we know they are not because we tentatively assume that only creative algorithms can cause consciousness.

That isn't my argument; I think there's been a misunderstanding. Here's how I'd change your description of our discussion:

D: ‘Creative algorithms cause consciousness’ is the ~~best~~ only explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPC’s are conscious, which we know they are not because ~~we tentatively assume that only creative algorithms can cause consciousness~~ our best explanations of calculators and NPCs do not invoke consciousness, so, per DD's criterion of reality, they really aren't conscious.

I invoke these explanations to show that the claim that "'[n]on-creative algorithms cause consciousness' is a plausible alternative" must be false, by modus tollens, since that claim makes a prediction about calculators that isn't true. Hence the claim is eliminated, I think, while DD's claim – that creativity causes consciousness – is still standing.

As you can see, the circularity you spoke of is not there as I do not reference the theory under question as a symmetry breaker. Instead, I refer to our best explanations of calculators and NPCs as well as the criterion of reality and the modus tollens, all four of which form part of my background knowledge.

I still think I can proceed within a Popperian framework [...].

Maybe I'm talking out of my ass here, but I don't think you understand it well enough (see the beginning of this comment). You'd be proceeding with what you think is a Popperian 'framework' but is actually justificationism in Popperian clothing.

I’d like to honor your request to avoid the epistemology discussion (though you've already continued it with your comments around how not to break symmetry), but I don't currently see how to avoid it. Perhaps a way forward is a discussion around the criterion of reality in particular and understanding better our apparent disagreement around that criterion? If you have other ideas, I'm open to them, too.

#599 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #613

Therefore, for us to know that calculator consciousness is not real, we would have to know that it does not figure into any of our best explanations.

For clarity, I think there are two possibilities: that calculators are conscious either follows from our best explanation of consciousness, or it follows from our best explanation of calculators.

[I]f you want to point at a lack of calculator/NPC consciousness to refute my alternate theories’ claims about the connection between information processing and consciousness, then you are doing so by assuming your theory of consciousness is true. This is circular because you are assuming your theory is true whilst it is under question.

This is the standard Popperian approach: we assume, tentatively, that a conjecture is true until it is refuted, even if that conjecture is currently "under question". I don't see how that leads to circularity. Since all our conjectures are always tentative in this way, they're always "under question"/open to revision anyway.

If this doesn’t help us to get to a resolution then I will reply to each of your previous responses also.

Sure.

An alternate, if lesser, resolution is that we simply have different epistemologies; that it's going to be difficult for us to come to a resolution on the question of animal consciousness until we resolve the epistemological difference. That's not surprising since the question of animal consciousness is directly influenced by epistemological considerations. But we still got to understand each other's (and our own) viewpoints better, which, as Popper would say, is more than enough.

#597 · on an earlier version (v1) of post ‘Choosing between Theories’

I don’t see why the word ‘consciousness’ would appear in the explanation.

Presumably for the same reason you want the word 'consciousness' to appear in the explanation for how animals work.

Calculators are math machines, animals are gene-spreading machines (per Dawkins). You ask whether you'd be able to calculate your taxes better if consciousness figured in our best explanations of how calculators work, but you don't ask whether animals would be able to spread their genes better if consciousness figured in our best explanations of how they work. And yet, presumably, your answer to the latter question would be 'yes' whereas your implied answer to the former is 'no'. How does that fit together?

If given the knowledge that calculators are conscious, I think our explanation of how consciousness works would change, not our explanation of how calculators work.

My guess is they'd both change, but at least our best explanations of calculators would have a big unknown ('why are they conscious?'), and that unknown would form at least an implicit part of such explanations. That would be an improvement at least in the sense that there'd be a pointer toward an open problem and more progress.

As per my best understanding of consciousness, it appears it can exist without leaving a trace (since the experience is private).

If it doesn't leave a trace, that means even the brain's hardware remains unchanged. So what good is neuroscience?

Your statement is vague; it leaves room for evasions when you encounter criticism. I think it would be better to phrase it in decisive, more attackable terms, such as 'consciousness can never leave a trace' or at least 'consciousness only leaves a trace when...'.

The statement 'consciousness can never leave a trace', for example, sounds false because if someone experiences pain, say, they usually want to fix that, and then do stuff that helps fix it (eg move out of an uncomfortable position into a more comfortable one). At which point there's a trace even though the experience is totally private.

Otherwise it's like saying, in OOP terms: private methods on a class never cause any side effects. Which isn't true.
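
To make that concrete, here's a minimal TypeScript sketch (the class and its members are made up purely for illustration) of a private method that nonetheless causes an observable side effect:

```typescript
class Thermostat {
  private log: string[] = [];

  // Public entry point; callers never see the private method below.
  adjust(targetTemp: number): void {
    this.record(`target set to ${targetTemp}`);
  }

  // Private: invisible from outside the class, yet it still causes a
  // side effect (mutating state that is later observable via history()).
  private record(entry: string): void {
    this.log.push(entry);
  }

  history(): readonly string[] {
    return this.log;
  }
}

const t = new Thermostat();
t.adjust(21);
console.log(t.history()); // ["target set to 21"] – the private method left a trace
```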

If our best explanation of consciousness does not infer calculator consciousness, then as per DD’s criterion we can know that it does not exist.

Now it sounds like you've adopted (and applied) the criterion?!

The word “mindlessly” here is doing all of the work, but whether all inborn algorithms are mindless (without consciousness) is what I am asking you to substantiate at the moment.

When something may as well have been done mindlessly, it cannot be evidence of consciousness. So I don't need to substantiate. We need some behavior which must have been the result of consciousness. You disagree that consciousness necessarily has any behavioral impact, but that makes things more difficult for you because then you can't point at any animal behavior and say that must have been the result of consciousness. In which case animals may as well not be conscious. Or anything at all may as well be conscious, including rocks, planets, and so on.

If I came to know that the NPC was conscious, my best explanations of how the NPC works would not change.

How could you come to know that if not through explanations?

#595 · on an earlier version (v1) of post ‘Choosing between Theories’

Looks like I made a mistake about USPS mailmen being parasites. Apparently, USPS is not financed by taxes, not even partially.

I'd guess there are still problems with using USPS over something like FedEx or UPS, but it wasn't right to consider mailmen parasites.

[W]hether our best explanations of how calculators work would change depends on whether the consciousness actually had an effect on the operation of the calculator.

No. Just the introduction of the word 'consciousness', effectual or not, into our explanations of calculators would be a change. In which case the use of DD's criterion of reality would be appropriate after all, thereby negating #589.

I was thinking of more complex modeling.

Why should sufficient complexity give rise to consciousness?

I imagine it as the kind of modeling that is involved in self referential awareness. Modeling of the world around us and our self within that world.

Again, video-game NPCs do this stuff all the time and our best explanations of them do not invoke consciousness. So they're not conscious. (I know I'm repeating myself but more on the criterion of reality below.)

Modeling of the model itself [...].

And presumably of the modeling of the modeling of the modeling...? Sounds like an infinite regress. If it isn't, how many levels are required for consciousness?

For the reasons I’ve explained, I don’t think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness.

What are these reasons that you refer to?

Calculators, NPCs, criterion of reality.

It seems to me we have two disagreements, each on a different level. On a basic level, it seems to me we need to break symmetry between the claims 'sufficiently complex modeling of oneself and one's surroundings gives rise to consciousness' and 'creativity gives rise to consciousness'. On a more general level, we have an epistemological disagreement re the criterion of reality and whether its use is appropriate in this context. (I think I have shown at the beginning of this comment that it is.)

Do you think that's an accurate summary of the disagreement? It seems to me that, to break symmetry between the two claims, it would be helpful to find a resolution re the criterion of reality first (cuz if we don't have some criterion for what's real that we are willing to follow without exception we can always ignore criticism as invoking something that isn't real).

#592 · on an earlier version (v1) of post ‘Choosing between Theories’

[E]ven if we did come to know that calculators had accompanying consciousness, our best explanations of how calculators work wouldn’t change.

How could they not? Discovering that calculators are conscious would be remarkable. The fact that our explanations fail to predict the consciousness of calculators would be a problem we'd want to solve. We'd want to know how it is that calculators are conscious and update our explanations of them accordingly.

Sorry, by modelling an inner world, I mean modelling one’s own body and the processes within it (such as what is happening when I touch something hot).

Why should that require or give rise to consciousness? Aren't you just describing homeostasis? The simplest of organisms have homeostasis – organisms which you presumably do not think are conscious.

[E]ven if neural correlates don’t add credence to the theory they also do not rule out the theory. Therefore I still see it as a plausible alternate theory of consciousness.

It is, of course, true that certain neural states give rise to consciousness, but the reason neuronal correlates, or any other explanations relying on the brain, are ruled out as fundamental is computational universality: computers not made of neurons can also be conscious if programmed correctly. Therefore, such explanations can at best be parochially true. Neurons do somehow give rise to consciousness, but the fact that they're neurons is incidental. It's the program that matters.

Discovering that certain operations in a brain can cause a particular conscious experience would be like discovering that incrementing a register in a cpu moves the cursor along in the word processor.

Consider this alternative explanation for why the cursor moves along: because the user pressed the right-arrow key, and the program is configured to move the cursor to the right anytime that happens. While your explanation on the low, CPU level is the kind of explanation that may well be technically correct, I think mine is not just correct but also operates on a more appropriate level of emergence. This becomes important once we entertain other kinds of computers that don't have a von Neumann architecture (which, it seems to me, the brain does not!). We also lose an understanding of causality when we go too low: it's not really the register in the CPU that moves the cursor along, it's the program. Recall DD's analysis in BoI ch. 5 of Hofstadter's program that instructs certain dominos to fall or not to fall.
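
To illustrate the higher-level explanation, here's a toy TypeScript sketch (all names are mine, purely illustrative) in which the cursor moves because the program is configured to advance it on a right-arrow press – no talk of registers required:

```typescript
// Program-level explanation: the cursor moves because the editor is
// configured to advance it whenever the right-arrow key is pressed.
type EditorState = { text: string; cursor: number };

function handleKey(state: EditorState, key: string): EditorState {
  if (key === "ArrowRight" && state.cursor < state.text.length) {
    return { ...state, cursor: state.cursor + 1 };
  }
  return state;
}

// Whatever register increments this compiles down to are incidental;
// the causal story lives at this level.
const next = handleKey({ text: "hello", cursor: 0 }, "ArrowRight");
console.log(next.cursor); // 1
```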

I'm guessing you have read BoI ch. 5. Do you have refutations of it? Or of the CBC interview with DD I linked to?

As I understand it your argument is as follows.

1) Consciousness results from the execution of knowledge (algorithms).

Only of certain, special algorithms – and we don't yet know what distinguishes them from conventional ones (presumably the distinguishing factor is creativity and/or ephemeral properties).

2) Consciousness doesn’t figure into our best explanations of the execution of pre-existing knowledge.

For conventional algorithms, I agree.

3) Therefore, as per DD’s criterion of reality, execution of pre-existing knowledge is mindless.
4) Therefore, the only remaining algorithms for causing consciousness are those that involve creating new knowledge (creativity).

Once we rule out the destruction of knowledge, yes.

I don’t see why premise (2) can’t be - Consciousness doesn’t figure into our best explanations of creative algorithms. This results in the opposite conclusions.

I think it can't be "Consciousness doesn’t figure into our best explanations of creative algorithms" because consciousness has to live in one of 1) creative or 2) non-creative algorithms. For the reasons I've explained, I don't think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness. Unless we're both wrong that consciousness is real and it lives in neither category!

#590 · on an earlier version (v1) of post ‘Choosing between Theories’

It might be that all kinds of information processing results [sic] in conscious experience. I have reasons against the idea, but I can’t rule it out. Can you?

Yes. As I said at the beginning, our best explanations of how calculators work don't refer to consciousness. So whatever information processing they do does not, to our current best understanding, result in consciousness.

This is, again, an application of DD's criterion of reality. You don't have a refutation of it, yet you don't want to apply it, which then leads to situations where you "can't rule either option out".

We could be wrong, of course. And one day we may realize that. But until then, we have to take our best existing explanations seriously. I think the underlying issue is that you don't think something is knowledge unless it is certain.

The theory should then be that modeling of both the external world and our internal world causes consciousness.

Having an "internal world", including thoughts and particularly feelings, arguably presupposes consciousness. In which case your argument sounds circular.

Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status ‘true’ to the theory remains unchanged.

Why is it so hard for you to quote me properly? The first sentence makes it sound like I forgot a word at the beginning but I didn't. The proper way to quote me would have been to write '[T]he Popperian view...' and so on.

The Popperian view also says that we should prefer theories with higher corroboration.

I wrote in #109 that "Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. [...] I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important."

Assume we don’t already understand how computers work, and that our starting point was the software.

Wouldn't it be more analogous, from your POV, to say that we don't understand how the software works, and that our starting point is the hardware? Cuz that's what neuroscientists are doing.

In any case, it seems to me that, in popular culture, we understand more about the brain as hardware than about the mind as software. But, contrary to what I think you're suggesting, I came up with the neo-Darwinian theory of the mind I have mentioned previously. And I did so without studying the brain ~at all, simply by making guesses about the mind and criticizing those guesses. Even though this theory is by no means complete, it has not been refuted and has been very fruitful, and it has enabled me to solve other, related problems I did not anticipate (which is a good sign!).

I’m saying a subset of algorithmic processes in the brain (whatever they may be) cause consciousness, as opposed to creative processes in the brain (whatever they may be). I don’t see how the former is ruled out as improbable.

I see – when reading what you had written previously, I should have placed mental emphasis on the word "algorithmic"; I had missed that.

I think you'd want to rule either one out as false, not as improbable. I rule out that algorithmic processes (with one exception, see below) could lead to consciousness because the mere, mindless execution of pre-existing knowledge (which is represented by those algorithms) precludes consciousness (or else it wouldn't be mindless). The destruction of knowledge can just be done mindlessly, too. So the only option that's left is the creation of knowledge. Which brings us back to creativity.

To be clear, whatever program gives rise to consciousness must itself be executable mindlessly, too (or else it wouldn't give rise to but depend on consciousness). So there is one exception, and to that extent we're in agreement. But there's something different about that program – something our current best explanations of information processing don't take into account yet.

To tackle this problem, the most promising approach to consciousness that I am aware of is the study of ephemeral properties of computer programs. Can you think of any such properties? I have found that to be surprisingly difficult!


I want to clarify for others reading this discussion what I mean by 'algorithmic'. Whatever software gives rise to consciousness is still an 'algorithm' in the sense that a Turing machine could run it. By 'algorithmic' I instead mean something that doesn't require reflection, introspection, knowledge creation, wonder – that kind of thing. Just something that can be done mindlessly. 'Robotic' is another word for it.

#588 · on an earlier version (v1) of post ‘Choosing between Theories’

[C]onscious experience may just be along for the ride, a byproduct of the information processing [...].

That's basically been Deutsch's and my claim all along – where you and I seem to disagree is whether all information processing results in consciousness or just some (and, in the latter case, which kinds). You had previously argued that all kinds might – now you're saying maybe only one does. Which is it?

[P]erhaps the impact of the consciousness is so minimal that it goes unnoticed.

Surely not. Your consciousness has causal power, does it not? It's at least causing you to write comments on this blog.

The reason I think modelling of the external world is important for consciousness is because the things most vividly present in my awareness are the sorts of things that I imagine my brain is keeping a mental model of (objects, thoughts, and feelings).

You just switched from "modelling of the external world" to the much more general "mental model". Thoughts and feelings aren't part of a model of the world around you. Also, consider whether a human brain in a vat would still be conscious. It couldn't do any modeling of the external world, but I think it would still be conscious. Don't you?

Another type of algorithmic processing that I think is a plausible cause of consciousness is the process of integrating a number of other brain processes together. This seems plausible since it is supported by studied neural correlations.

I forget who said this and the exact wording, but at most such correlations could corroborate the view that psychophysical parallelism is indeed very parallel. More generally – and we're getting back to core epistemological disagreements here – the Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status 'true' to the theory remains unchanged.

I think neuroscience is generally a bad approach to the question of how consciousness works because neuroscience operates on the wrong level of emergence. The level is too low. You wouldn't study computer hardware to understand how a word processor works. We need explanations on the appropriate level of emergence. I doubt colorful pictures of the brain can help us here; I'd disregard the brain and focus on the mind. Consciousness is an epistemological subject, not a neuroscientific one. Neuroscience has also led to such nonsense as this and this. It surely has value when it comes to understanding the brain's hardware, including medical use cases, but when it comes to the mind I think it's severely limited.

My theory [is] just that some number of [...] algorithmic processes (maybe one) are causing consciousness.

Translation: something in the brain causes consciousness. Clearly. How does that tell us anything new?

#586 · on an earlier version (v1) of post ‘Choosing between Theories’

I think the answer to my question is 'no, the explanation of the source code for NPCs and Roombas does not refer to consciousness'. Note also that people have been able to program such NPCs and Roombas without first having to know how consciousness works. It's possible programmers accidentally made them conscious, but that would lead to unintended behavior in the NPCs. Programmers would seek to understand and probably get rid of such behavior, since they demand absolute obedience from their programs. Also, explanations usually come before major discoveries.

If (1) the NPC is performing a similar kind of modelling as the human brain, and (2) it is this kind of modelling which produces consciousness, then the NPC would be conscious.

Doesn't that just amount to saying: 'There's some algorithm in the brain that makes it conscious, and if an NPC runs the same algorithm, it's also conscious'?

I find that easy to agree with, but you haven't explained why that algorithm should involve modeling the external world. In #581, you wrote you "find it plausible that the brains [sic] modelling of the external world could be an important part of of [sic] this." But why?

Better yet, see if you can explain why whatever algorithm produces consciousness must have to do with modeling the external world, ie cannot be anything else. Without using 'induction'. That would be convincing.

#584 · on an earlier version (v1) of post ‘Choosing between Theories’

Your answer is littered with inductivism and the strength of your beliefs. I wasn't asking how likely your theories are, how strongly you believe in them, or anything else about your psychology. I was asking whether, in objective reality, Roombas and video-game NPCs are conscious. They either are or they aren't.

If you looked at the source code of a video-game NPC, would your explanation of how the code works refer to consciousness?
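
For concreteness, here's a TypeScript sketch of the kind of NPC code I have in mind (entirely made up; real game code is more elaborate). It models the external world, yet explaining how it works requires no reference to consciousness:

```typescript
// A typical NPC update loop: sense, model, act. Nothing here needs –
// or refers to – consciousness to be fully explained.
type Vec = { x: number; y: number };

interface NpcState {
  position: Vec;
  lastSeenPlayer: Vec | null; // the NPC's "model" of the external world
}

function updateNpc(npc: NpcState, player: Vec, canSeePlayer: boolean): NpcState {
  const lastSeenPlayer = canSeePlayer ? player : npc.lastSeenPlayer;
  if (lastSeenPlayer === null) return { ...npc, lastSeenPlayer };

  // Move one step toward where the player was last seen.
  const dx = Math.sign(lastSeenPlayer.x - npc.position.x);
  const dy = Math.sign(lastSeenPlayer.y - npc.position.y);
  return {
    position: { x: npc.position.x + dx, y: npc.position.y + dy },
    lastSeenPlayer,
  };
}
```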

#582 · on an earlier version (v1) of post ‘Choosing between Theories’

[...] I think it plausible that the brain’s modeling of an external world is what gives rise to our concious [sic] inner world.

That's a common claim, let's look into it. Roombas also model the external world, as do many NPCs in video games. Are they conscious?

#580 · on an earlier version (v1) of post ‘Choosing between Theories’

I don't hate anybody, and neither should you.

If you're asking how people can learn to enjoy being part of a Socratic dialog, I refer you to what I wrote about slowly exposing oneself to criticism, not seeking to evangelize, and having modest expectations.

Use your real name if you want to discuss further.

#579 · on an earlier version (v1) of post ‘Crypto-Fallibilism’

I thought by this quote you’re the one claiming the situations are different.

Ah, I see what you mean – the difference I had in mind is that, before the war, Zelensky wasn't using conscription (because he didn't have to), but now the West is helping him do that. There is a new initiation of force against his subjects.

That's different from North Korea, where I understand the entire population is already in a kind of perpetual servitude (I'm not counting things like taxes here, which apply in Ukraine, too) and if you want to help you have no choice but to also help the slave owner (Kim Jong-un).

For example, if they had cared, the West could have told Zelensky, 'we'll deliver weapons on condition that you don't use conscription'. They did not, even though it's not unusual for countries to help each other out on a conditional basis. Germany, for example, doesn't extradite criminals to the US when there's reason to suspect that the US will seek the death penalty, if my memory serves me right. That's because Germany considers the death penalty a human-rights violation.

The problem I see in your argument is that by that criteria nothing less than a perfect society is worth fighting for.

That can't be so, if only for the reason that there can never be a perfect society since we can always improve. To that end, we can and should acknowledge the mistakes the West makes while also acknowledging that in some respects it is better than the rest by degree (eg in terms of how much coercion it employs against its citizens), and in some other respects it is better in principle (eg by meeting Popper's criterion of democracy – though I should say that that criterion leaves some things to be desired, which I have written about here).

Of course, all that being said, Ukraine isn't part of the West anyway, even though suddenly it's somehow the West's best friend.

Why the slight preference for Ukraine?

Because it's not the aggressor and, from the little I know, it seems like a slightly less shitty country than Russia.

I wonder, if Russia had instead invaded, say, Mongolia, would the West have cared just as much?

Also it’s a bit slow and tedious to argue like this. If you’re up for it we could do a video call or something where it’s easier to get to the bottom of disagreements.

Maybe. As Elliot Temple taught me, discussing in writing has many advantages over voice. But if we record and share it, I'm open to it.

I meant that from what I saw I think most Ukrainians are in active support of the defensive war effort.

"from what I saw I think" – that isn't enough. All we know is there are some Ukrainians who support the defensive war effort, and some that don't. And even of those that do, we don't know whether they wish to participate personally. My guess is very few Ukrainians wish to be conscripted, certainly less than half (if for no reason other than that ~no woman will want to be conscripted).

Regardless, I repeat again that a single Ukrainian being dragged into the meat grinder against his will is an injustice, so it doesn't matter how many other Ukrainians are in support of that. His rights are his against the whole world (paraphrase of Spooner).

I’m just concerned you’re sacrificing any way to make a decision until everything is implemented according to the best current theories or theory.

No. As I've said, let those who want to fight, fight, and let those who wish to leave, leave. That's a decision that could be made. Maybe it's difficult to make such a decision while at war; maybe Ukrainian society can't work that way. But maybe a society that enslaves its own people isn't worth fighting for. Maybe individual Ukrainians don't owe anyone a functioning society. Maybe it's ridiculous to burden them with that debt against their will. Do you see how the notion that each Ukrainian is his brother's keeper is still implicit in your argument?

If the mind were to wait until all ideas, implicit and explicit, were perfectly aligned in what to do, it’d never do anything.

One important difference between inter-mind and intra-mind morals is that you only coerce yourself, not necessarily others, when you act while you have a conflicting idea present in your mind.

By the way, being unconflicted is indeed rare, but I wouldn't say it never happens. And one of the main reasons it's rare is the kind of coercion states use against their subjects in the first place; it usually starts in school and the older we get the harder we find becoming unconflicted again.

I think that if you’re going to find someone to put the moral blame on for making people participate in a war they don’t want to be in, the clear culprit is Putin and his government.

He's definitely the aggressor in this scenario. He put Ukrainians in this situation; no disagreement there. But Ukrainian politicians could have decided to actually practice the freedom they lie about fighting for. Then Ukraine would have been the good guys unambiguously. But due to conscription, they've become a greater danger to their own subjects than Putin, don't you think? This is true even in the US: the Libertarian Party sometimes tweets about how the American president and the bureaucracy below him present a greater danger to American citizens than, say, Russia or China. Your own politicians are usually more likely to harm you than foreign ones.

[Putin] is also partly responsible for the war crimes Ukrainians commit against Russians.

Maybe, but I don't think the victim of aggression gets to use unlimited retaliation, nor does he get to be an aggressor (through conscription) in turn. Being the victim of aggression isn't a carte blanche – retaliation has to be reasonable.

I’m interested if you think there was ever a time in the evolution of Western culture (including pre-Enlightenment) where it was just the case that the society couldn’t be stabilized, and would thus destroy itself, if it didn’t use some coercion.

Since any society is going to have to be able to use defensive coercion, I'm guessing you're asking about aggressive coercion in particular. I've thought about this before but so far I don't know the answer. If it is true that some minimum of aggressive coercion is required to make primitive societies work, we should still work hard to get away from that as soon as possible. In any case, I don't share the homo homini lupus view many still seem to have.

I'm guessing you think some aggressive coercion is always necessary?

I’m also interested if you have any preference on who wins the war.

I have a slight preference for Ukraine to win, but meh. Most important to me is that the war doesn't expand to NATO and that no nuclear weapons are used.

As for helping Zelensky enslave people, the same argument could be used for the slave owner. By feeding him you’re helping him enslave.

Yes!

In BoI chapter 17, Deutsch writes:

Static societies eventually fail because their characteristic inability to create knowledge rapidly must eventually turn some problem into a catastrophe.

Deutsch's view is that static societies are ultra dogmatic; they suppress critical thinking as much as possible. Therefore, they cannot adapt; that's why they must ultimately fail.

Popper writes here (on p. 8; bold emphasis added):

From the point of view of biology, dogmatism corresponds to lack of adaptability; and since life demands constant adaptation to a constantly changing environment, dogmatism—and especially the **inflexibility of a society**—leads almost of necessity to extermination. Critical thinking corresponds to adaptability. It is, like adaptability, decisive for survival.

Deutsch gives no credit to Popper for the discovery that societies which lack adaptability will fail. Arguably, this is the central thesis of chapter 17.

As usual, Popper is more nuanced than Deutsch: Popper writes "almost of necessity" whereas Deutsch writes "must eventually".

h/t to Martin Thaulow for providing the Popper quote.

Epistemology is one big mind-reading exercise or else it couldn't study how thinking works.

#569 · on an earlier version (v1) of post ‘Mind Reading’

PS: Regarding North Korea and helping slaves by helping the slave owners: I don't think that's analogous to the situation in Ukraine, where the West is helping the slave owner (Zelensky et al) enslave his people (conscription) in the first place. Or is it?

Yes, sometimes people are voting to choose the lesser evil.

Usually when people use the phrase "choose the lesser evil", at least in the US, they think both candidates suck but feel they need to vote for one regardless, so they try to determine who sucks less. I don't know if that's what you mean here, but if it is, that's not what the Spooner quote is about. It's about not misinterpreting voting as consent, which you seem to do (see below).

[...] I wasn’t even talking about elections here.

But you wrote (emphasis added):

Why consider the Ukrainians victims if they elected [...] the current government?

The implicit claim here, as I understood it, was that at least those who elected the government should be considered to have consented to being conscripted. And I offered the Spooner quote as a refutation of that implicit claim.

That's not to mention those who didn't vote for the current government, and those who weren't old enough to vote at the time but are now old enough to be conscripted and so on.

You also wrote:

I was saying that I think the vast majority of Ukrainians are in active support of the government.

How did you determine that?

In any case, even if true, I preemptively addressed it by pointing out that even a single man being dragged into the meat grinder against his will is an injustice.

As for the slave owner, it would depend on what the alternatives on offer were. If the only way for both the slave and owner to survive is to feed them I think this would still be moral. Something akin to the aid going to North Koreans.

What I mean is that you don't have to choose between better and worse slaveholders. Problems really are soluble! And again, fighting for freedom by using conscription just doesn't make any sense. You can't fight for an ideal by betraying it in the process.

You seem to have an unstated collectivist assumption that each Ukrainian is his brother's keeper – and further, that we are all Ukraine's keepers. Ayn Rand explains the problems with this assumption (in general, obviously not with regard to this particular) in chapter 10 of her book The Virtue of Selfishness. People ask 'what will be done about the situation in Ukraine?' and offer, say, conscription as a 'solution', when they should first ask 'should anything be done?'. This is why Rand says the former question is really a "psychological confession[]". I do not tacitly accept the collectivist premise and, as Rand writes, it is not true that "all that remains is a discussion of the means to implement it". First, show me why each Ukrainian is his brother's keeper, then we can discuss implementations such as conscription. In the meantime, nobody will stop you if you want to help Ukrainians.

Back to your comment:

Isn’t thinking one’s already perfect and removing a way of error correction also a kind of lack of knowledge?

I suppose so, but it's 'special' in that it prevents its own correction, whereas most (all?) other mistakes don't have that property.

Well, it sounds to you like that. I don’t know why.

Probably because you're defending politicians who employ coercion through conscription.

I was saying that your argument shares a structure with the socialist one, not that you’re a socialist.

I know – I think the structure in my argument is different from what you think it is.

Why consider the Ukrainians victims if they elected and are in support of the current government?

Lysander Spooner explains here why participating in elections does not indicate support for one's government or constitution. Perhaps the most salient quote is this:

[I]n the case of individuals, their actual voting is not to be taken as proof of consent, even for the time being. On the contrary, it is to be considered that, without his consent having even been asked a man finds himself environed by a government that he cannot resist; a government that forces him to pay money, render service, and forego the exercise of many of his natural rights, under peril of weighty punishments. He sees, too, that other men practice this tyranny over him by the use of the ballot. He sees further, that, if he will but use the ballot himself, he has some chance of relieving himself from this tyranny of others, by subjecting them to his own. In short, he finds himself, without his consent, so situated that, if he use the ballot, he may become a master; if he does not use it, he must become a slave. And he has no other alternative than these two. In self-defence, he attempts the former. His case is analogous to that of a man who has been forced into battle, where he must either kill others, or be killed himself. Because, to save his own life in battle, a man takes the lives of his opponents, it is not to be inferred that the battle is one of his own choosing.

In the case of Ukraine, the battle Spooner speaks of is not just a metaphor. And that's not to mention all the Ukrainians who have not voted once. Regardless, a single man being dragged into war against his will is an injustice.

Democracy – including better democracies such as that of the United States, and worse ones such as that of Ukraine – is still tyranny. It's a tyranny that allows for some amount of error correction, and that makes it objectively and notably better than all other known forms of tyranny, but it's still a form of tyranny.

Ukraine isn’t culturally a part of the West but supporting Ukraine is supporting freedom because there are still differences between the levels of coercion in different societies and also in what they aspire to become (in this case Russia and Ukraine).

Is supporting a slave owner who is nicer to his slaves than other slave owners supporting freedom? Is it logically coherent to fuck for virginity?

You said there’s a difference between lack of knowledge and evil. I’m curious what you think it is.

I'm thinking of Sparta in chapter 10 of David Deutsch's The Beginning of Infinity. That is, evil has to do with thinking one is already perfect; destroying the means of error correction; shielding some ideas against criticism; not considering that one could be wrong about anything.

Ukraine need not be a picture of innocence for it to be the best option at the moment.

I don't know what you mean by "best option", but to be clear, when I say Ukraine is not a picture of innocence, I mean that people shouldn't blindly assume that Ukraine is part of the West; for the reasons I've explained, it's unclear to me how exactly supporting Ukraine is a fight for freedom. Due to conscription, it seems to me the opposite is the case.

Would you deny that the Declaration of Independence was a good thing because the Founding Fathers hadn’t abolished slavery at the very beginning of the US?

No, although I think those are two different issues. But if they used coercion to write the declaration, I would judge them accordingly. And weren't the few people who understood at the time that slavery is an abomination right to condemn it?

As you imply, people can't do more than act on their best theories, moral or otherwise. There's a difference between a lack of knowledge and evil. Should today's teachers be jailed? Probably not. Should they be judged for abusing children? Yes. Should they stop abusing children immediately? Yes. Or do you disagree?

Condemning Ukrainians right now as moral monsters for supporting this government [...].

I largely consider the Ukrainian populace victims. I instead condemn Ukrainian politicians for hypocritically abusing the virtue of liberty to coerce their subjects, as well as US politicians for sending my tax dollars over there against my will.

Your argument seems to me similar in structure to the socialist argument of demanding that people in developing countries have wages as high as first-world countries.

Isn't that different? Socialists wish to forcefully prevent such people from entering contracts they might otherwise enter into happily. I don't wish to prevent Ukrainians who want to fight from fighting. I don't wish to replace free trade with coercion, as socialists do. I wish to replace coercion with freedom.

I’m not an advocate for conscription nor am I trying to justify it.

Then why does it sound like you are?

None of that means that it’s immoral to support the current government in Ukraine.

It depends how that support is organized. Tax money? Immoral. Conscription? Disgusting. Voluntary help? Go for it.

As for Zelensky, the fact that he benefited politically from the war doesn’t condemn him morally.

The litmus test will be whether he has pocketed any of the billions of dollars that have been sent to Ukraine, accepted bribes, that sort of thing. I understand that organizations such as Transparency International, but also laws in Western countries, have clearly defined rules around what constitutes corruption. As I pointed out in the Twitter thread you implicitly reference below, Ukraine isn't the picture of innocence many seem to think it is. Same goes for Zelensky by extension, IMO. I'm no expert on him but I wouldn't put it past him.

It’s possible his economic policies are wrong not because, as you say, he’s trying to be a “parasite”.

His policies could indeed be wrong for all kinds of reasons. But he's not just trying to be a parasite – he is one. Unless he pays himself no salary – and maybe even then, depending on the circumstances – he's a net parasite in the sense that he's made a profit from money extorted from his subjects. Just like most other politicians but also judges, policemen, USPS mailmen (but not FedEx or UPS mailmen), etc. are net parasites.

For the policies that you’re advocating [...].

What policies am I advocating?

[J]ust as the morality of conscription doesn’t depend on it’s popularity, so doesn’t the offering help to Ukraine. [Link added]

Agreed; it depends, in part, on whether such help is coercive or not.

Maybe one day you'll be dragged by the feet to die in a war you do not wish to fight. Will you still be glad that governments are making decisions for you?

Ukraine + Russia + NATO sounds like three sides, at least. And NATO countries are helping Ukraine resist Russia.

Whether most Ukrainians align with the government I do not know. But I do know that the morality of a policy such as conscription does not depend on its popularity.

Re what I think an ideal response from a Western country would be: I'm really no expert, but I think all countries should condemn acts of aggression. In addition, Western citizens are free to help Ukrainian citizens voluntarily. I think that's about it. I certainly don't think Western tax money should be spent on the conflict. Nor am I aware of any contractual obligations any Western countries have toward Ukraine.

[I]n your refutation you are seemingly referring back to premise 2 itself [...]. If this is the case then there is still circularity.

Maybe I'm missing something, but I think it's merely a repetition. In other words, if I propose a claim a, and you propose a conflicting claim b, and I then say 'no, I still think a', that isn't circular. Granted, it may be repetitive, but I think it would only be circular if I said, directly or indirectly, 'a because a'.

In any case, I would use a different refutation. The claim that "the execution of certain inborn algorithms by certain means (e.g. by an animal brain) gives rise to conscious experience" seems to imply that there is something special about wetware such as animal brains. As DD and others have pointed out before me, that cannot be true since it's in violation of computational universality: there's nothing a computer made of metal and silicon couldn't do that one made of wetware could (and vice versa). Our computers are universal simulators (within memory and processing-power constraints).

This refutation refers to neither previously stated syllogism, and instead to a different concept altogether (computational universality), so I don't see any circularity here.
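
To gesture at what universality means in practice, here's a minimal Turing-machine simulator in TypeScript (a toy of my own, memory-bounded like any real computer). The point is that any computation wetware performs, a loop like this can perform too, regardless of substrate:

```typescript
// A tiny Turing-machine simulator: the substrate running this loop –
// metal, silicon, or wetware – is irrelevant to what it can compute.
type Rule = { write: string; move: -1 | 1; next: string };
type Program = Record<string, Record<string, Rule>>; // state -> symbol -> rule

function run(program: Program, tape: string[], state = "start", head = 0): string[] {
  while (state !== "halt") {
    const symbol = tape[head] ?? "_";
    const rule = program[state]?.[symbol];
    if (!rule) break; // no applicable rule: stop
    tape[head] = rule.write;
    head += rule.move;
    if (head < 0) { tape.unshift("_"); head = 0; } // grow tape as needed
    state = rule.next;
  }
  return tape;
}

// Example program: flip a unary string of 1s to 0s, then halt.
const flip: Program = {
  start: { "1": { write: "0", move: 1, next: "start" },
           "_": { write: "_", move: 1, next: "halt" } },
};
console.log(run(flip, ["1", "1", "1"]).join("")); // "000_"
```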

I agree that engineering projects shouldn't be attempted on people's free choices. To be very clear, I think men would benefit from focusing less on women, but I'm not prepared to tell anyone what they should and should not do (unless they employ coercion).

You wrote:

[I]ndividuals shouldn’t be making choices based on what they think the dating market ought to look like.

Let's see how this compares to other markets. Continuing with the car market, if someone is looking to buy a car but decides the market isn't favorable at the moment, isn't he right to wait until conditions improve? Or, if he decides not to participate in that market because he finds some fundamental flaws with it, isn't he right to withdraw from it? And if the answer to both questions is 'yes', how is the dating market different?

I agree that planned mass intervention would be a disaster – I'm a libertarian so I think that any such top-down attempt would be immoral anyway, let alone impossible. Instead, I was talking about slowly changing the culture from within.

Creating awareness of the issues as I've described them could be a start. Men could decide to pay less attention to beauty in women and instead value other traits more. Or they could both decide to deprioritize sex and dating in general.

I retract my previous syllogism and instead refer back to the DD quote I gave in the main article from BoI ch. 7:

My guess is that every AI is a person: a general-purpose explainer. It is conceivable that there are other levels of universality between AI and ‘universal explainer/constructor’, and perhaps separate levels for those associated attributes like consciousness. But those attributes all seem to have arrived in one jump to universality in humans, and, although we have little explanation of any of them, I know of no plausible argument that they are at different levels or can be achieved independently of each other. So I tentatively assume that they cannot.

To put this in syllogistic form:

  1. There are only two options: either creativity is necessary for consciousness (a) or it is not (b).
  2. There is "no plausible argument" that consciousness can be achieved independently from creativity, and it seems that they both "arrived in one jump to universality in humans" (link added).
  3. That leaves only (a).

Building on this syllogism, we can address animals separately (I think one of the weaknesses of my circular syllogism, and potentially the reason for its circularity, was that it did too much at once):

  1. There are only two options: either some computer is conscious (a) or it is not (b).
  2. Evidence of some behavior or idea that must have been created by the computer itself (as opposed to, say, merely having been inherited via genes or copied via rote imitation/memes) would be evidence of creativity and, therefore, consciousness.
  3. I know of no such evidence for (non-human) animal computers. That leaves only (b).

In this video, a woman comments on another woman dressing up to go to the club:

She doesn’t dress like that around the house, so it’s not like she’s doing it for herself.

#541 · on an earlier version (v1) of post ‘Mind Reading’

This particular argument first, then potentially my view on animal intelligence in general.

#540 · on an earlier version (v1) of post ‘Choosing between Theories’

In this video, the interviewer asks:

[T]here's a lot of women who say [...] 'I dress a certain way for myself' [...] My question is, if women are dressing for themselves, why do you often see women walking around in uncomfortable shoes and skimpy dresses when it's freezing cold outside?

He's implying that, if women were dressing for themselves, they'd be wearing comfortable clothes instead. But they're not, so they can't be dressing for themselves. Three women subsequently agree that women do not dress for themselves but for attention. One says:

I think a lot of women say that they dress for themselves but they're really not dressing for themselves.

Her friend agrees:

I think it's, like, subconsciously dressing for others [...].

In other words, many women lie to themselves about their reasons for dressing up. The interviewer did the proper 'mind reading' to bring that to light.

Happy new year!

Same to you.

It looks like premise 2 is doing most of the work here.

Yes.

  1. If animals are conscious, then they would correct obvious mistakes in their behavior.
  2. Animals have been observed as failing to correct obvious mistakes in their behavior (e.g. the cat failing to drink water from a tap).
  3. Therefore, animals are not conscious.

[How do] you establish premise 1 without assuming that creativity is necessary for consciousness?

I see the problem. If premise 1 itself depends on creativity being necessary for consciousness, then that means I (unwittingly) snuck that assumption into my original premise 2, when it was the conclusion I wanted to arrive at. Circular reasoning.

Thanks for pointing this out. Time for me to go back to the drawing board.

#537 · on an earlier version (v1) of post ‘Choosing between Theories’

Stating that someone is annoyed as a matter of fact is getting outside the realm of credible goodwill, though, and guessing that they’ll deny it is effectively nuking the discussion utterly. [...] The only thing it does is reveal to me that you expect me to lie, which is a breach of the trust and cooperation needed to have a good discussion.

Isn't lying a breach of trust first?

A final point, which is my own mind reading, and thus an example of what I consider an acceptable meta-comment in a rational discussion:
I wrote my master’s thesis on normative argumentation theory, which means I spent two years reading and writing about these exact questions.

I don't understand how that's mind reading.

#536 · on an earlier version (v1) of post ‘Mind Reading’

I think this can be done without us clashing on epistemology, and going forward I will focus on doing just that. If we find that this is clearly not possible without first resolving our epistemological differences then that can be our conclusion. Maybe we would then agree to pick up where we left off and focus on epistemology alone.

OK.

You have provided the following syllogism.

1) Creativity is necessary and sufficient for consciousness/sentience to arise.
2) Animals are not creative.
3) Therefore, animals are not sentient.

You've misquoted me again; as a result, the formatting is off. You can see an explanation here (that site is under development and the link may break). You can use that site to check quotes before submission (expect bugs). Or you can paste your quote into the browser's word search and, if you only get one match (the one in the textarea), it must be a misquote (that won't work in this instance because of the enumeration but it's a decent quick-glance approach in general).

This argument is fine, however it is not the argument I was looking for. I was actually looking for an argument where premise 1 of above would be the conclusion.

I suspect that an explanatory theory of consciousness will provide such an argument. I'm afraid I do not have one yet, but you seem to imply that my claim's epistemic status will increase if it's a conclusion rather than a standalone conjecture.

That cannot be true because we'd always need infinitely many new theories to accept just one new one. Imagine if Einstein had proposed GR and then people had said 'but what does it follow from?' We still don't know. Coming up with the next theory (from which GR follows, if only as an approximation) is another creative act. And if we do find that next theory, people can then always say 'well but what does that theory follow from?'.

This approach exhibits the infinite regress of justificationism, so I'm skeptical as to whether you can "provid[e] [me] with a refutation of [my] claims about animal consciousness [...] without us clashing on epistemology [...]".

All that being said, I am still interested in your plan of demonstrating circularity, and this path...

If you don’t want to try [or can't, for the moment] provide an argument [ie syllogism] in favor of this claim [...], then maybe you can instead refute the counterclaim: creativity is not necessary for consciousness.

...is still open. (You can see here that my quote is accurate.) I think your request can be rephrased in terms of breaking symmetry between the claims 'creativity is necessary for consciousness' and 'creativity is not necessary for consciousness'. I can then meet your request for my "preferred argument in syllogistic form" by breaking symmetry as follows:

  1. There are only two options: either creativity is necessary for consciousness (a) or it is not (b).
  2. I rule out (b) because I have seen no evidence of creativity in animals, which I should be seeing if they were creative, and I have seen lots of evidence of animals making mistakes which, were they conscious, they would correct (eg this cat in #107), as well as evidence of their algorithmic, ie non-creative nature (see #124, among others).
  3. That leaves only (a).

Thus there should be a way for you "to demonstrate the circularity [you] see in [my] reasoning."

NYT article about Square making it harder for small businesses during the pandemic by expanding their money-withholding practice with little warning. Publicly, however, they present themselves as caring about small businesses.

There is a petition with over 3,000 signatures on change.org to end this shady practice:

Many small business owners are fighting for survival and cannot afford for this to happen.

The petition links to https://squarevictims.org but unfortunately that site isn't working for me at the moment.

I've signed the petition to show my support.

#498 · on an earlier version (v16) of post ‘Don’t Use Square

This guy didn't end up doing anything.
He deleted my post even though it was exactly the kind of thing the fb group description asked for.

#496 · on an earlier version (v14) of post ‘Don’t Use Square

In BoI chapter 10, Deutsch has Socrates say:

SOCRATES: [...] one thing [Hermes] asked me to do was to imagine a ‘Spartan Socrates’.

But that isn't true. Previously, Socrates starts imagining a Spartan Socrates on his own and Hermes merely points it out:

HERMES: So now you are imagining some Spartan Socrates [...]

It links to The Fountainhead.

#494 · on an earlier version (v1) of post ‘True Controversial Ideas

I could conjecture all sorts of fantastical/crazy theories to solve any problem I want. These theories would meet the criteria you provided - solving previously unsolved problems - but we wouldn’t expect them to contain truth because of this right?

Right, because they don't meet other criteria (such as not being "fantastical/crazy"). We have all kinds of criteria good theories must meet. DD wrote about this in BoI.

Re induction, I have pointed out that people use 'induction' psychologically. I do not disagree that past successes can be used to convince people to adopt a theory. But that has nothing to do with induction as a process that can create knowledge.

If you're going to hold on to induction – Peirce's or someone else's – you better come up with a refutation of Hume's and Popper's work on it. I'm not interested in refuting induction for you, nor in making it work.

Regarding "[t]he source code of the universe", when I wrote "[i]n the above examples, reality is the underlying algorithm – the source code", I was debating whether I should clarify that I do NOT mean that reality is made up of source code. Looks like I was wrong not to. So, to be clear: I was merely using source code as a stand-in for reality.

The justification matters because it gives us confidence/belief in the theory, it gives us a tool to convince/reason others into believing the theory too.

Not in the scenario I've described, where you'd have no 'reason to believe' in your theory whatsoever, nor would anyone else, yet you'd be 100% correct. In addition, I quote BoI ch. 10 once more:

So the thing [justificationists] call ‘knowledge’, namely justified belief, is a chimera. It is unattainable to humans except in the form of self-deception; it is unnecessary for any good purpose; and it is undesired by the wisest among mortals.

You wrote:

You might suggest that the non-creative (algorithmic) aspects of our brain’s processing are without consciousness, but you will need to provide a good argument in favor of that claim before I can accept it.

I think your request for "a good argument in favor" is indicative of a larger problem in this discussion. You seek supportive arguments, whereas I seek refutations, and I also don't consider a 'supportive argument' a success or as causing any sort of increase in a theory's epistemic status. Your methodology is justificationist in nature, mine is Popperian/'refutationist'. The reason you should accept the claim is that you cannot find a refutation of it (if indeed you cannot find one), not that I haven't given enough arguments in favor of it.

This difference in our respective approaches may lead to an impasse in this discussion. That doesn't mean we can't learn from each other, but I follow Elliot in thinking that if you're going to have a fruitful discussion, you better make decisive, yes/no arguments. I'd love for you to offer me a brutal refutation of the claim that animals are not sentient. Conversely, I'm not interested in providing "a good argument in favor" of my claims re animal sentience – not only do I doubt that any such argument will ever convince you because there could always be more justifications, but I also don't ask for such an argument in favor of the claim that animals are sentient after all.

Your first attempt at refutation was this:

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

Notably, this isn't a deductive syllogism of the kind you requested from me. It's inductive. But in any case, that is how we then got to the example with the beads, and this first attempt doesn't work, IMO, for the reasons I've explained re induction. But you can convince me that I'm wrong by refuting Hume's and Popper's work on induction – not by giving arguments in support of your view, but by refuting theirs.

I believe your only other attempt at refutation has been the claim that my argument is circular. I don't see it. But here's the syllogism you requested:

  1. Creativity is necessary and sufficient for consciousness/sentience to arise.
  2. Animals are not creative.
  3. Therefore, animals are not sentient.

You can arrive at this syllogism by taking yours from #488 and reversing 1) and 2). (The major premise should come before the minor premise.)

As I hinted in #482, the syllogism may instead be:

  1. An ability to be critical is necessary and sufficient for consciousness/sentience to arise.
  2. Animals do not have this ability.
  3. Therefore, animals are not sentient.

(Given the links between creativity and criticism, we may eventually find these two syllogisms to be the same, but the difference in focus may be important in understanding animals and consciousness.)

Please explain how these syllogisms are circular. My current guess is that you're looking for a justification for 1), you think 3) would constitute such a justification, and so you misinterpret the syllogism as being circular.

You also wrote:

I have read [chapter 1 of Popper's Objective Knowledge]. He correctly points out the shortcomings of various forms of induction. He also attempts to solve the pragmatic problem of preference with his conception of Corroboration. Did you want to discuss anything in particular?

No. You had requested "a good answer to these problems" so you may "have a much more elegant epistemology to employ". The Popper reference was an attempt to help with that.

How can you determine that a theory solves the problem if you do not test it?

You can know that from theory.

What does calling it psychological change?

I'm distinguishing between the epistemological and the psychological, as Popper did. That distinction matters because the two fields are often after different things. For example, I've quoted Popper here as saying:

Such remarks probably won’t satisfy those who are after a psychological theory of creative thinking […]. Because what they’re after is a theory of successful research and thinking.
I believe that the demand for a theory of successful thinking cannot be satisfied. And it is not the same as a theory of creative thinking. […]

Deutsch picked up the same difference in BoI – in ch. 9 he speaks of "matters not of philosophy but of psychology – more ‘spin’ than substance". And in #252, I mentioned the difference between the logical and the psychological problems of induction.

Back to your comment:

Does it mean that the reasoning is bad and that its conclusions should not be relied on?

Not just because it's psychological, but yes, inductive reasoning is bad.

Which would mean you think it is just as rational to expect a blue bead despite the past 999 jars containing red beads?

Depending on the underlying explanation, yes.

Say you have a bead-drawing algorithm (the kind of thing you might see in a virtual casino). Given that the algorithm works as follows...

;; Always returns "red".
(defn draw-bead []
  "red")

...the 'inductive' approach would happen to be spot on.

But given that it works as follows...

;; Returns "red" or "blue" with equal probability on each draw.
(defn draw-bead' []
  (if (zero? (rand-int 2))
    "red"
    "blue"))

...the same approach would fail pretty soon – although you might find yourself very unlucky (or lucky, depending on how you look at it) and have it repeat the same color many times.

And given that it works as follows...

;; Taking beads-drawn as the 1-indexed number of the current draw:
;; returns "red" for draws 1–999, "blue" from draw 1,000 on.
(defn draw-bead'' [beads-drawn]
  (if (< beads-drawn 1000)
    "red"
    "blue"))

...the 'inductive' approach would be spot on for the first 999 draws, and then it would suddenly fail when you're more confident than ever before that it's correct (like people were with Newtonian physics).

The Popperian approach says that making predictions is only part of reasoning, and it's not the main part. Reasoning is mainly about explaining reality, which involves resolving contradictions between ideas. In the above examples, reality is the underlying algorithm – the source code. If it's hidden from you, like reality, all you have is your knowledge of which beads you've drawn in the past, and even that you only have fallibly. But you don't limit yourself to predicting which beads will be drawn in the future. You look for cases where your predictions do not come true so you can improve your idea of what the algorithm looks like, ie resolve contradictions between what you think the source code is and its return values on the one hand, and the real source code and its return values as observed so far on the other. While we typically make predictions that are in line with past observations, doing so shouldn't be mistaken for induction.
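
Here's a minimal sketch of that process in code (my own hypothetical helper, reusing the bead examples from above): conjecture a model of the hidden source code, then look for draws where the model's output contradicts what was actually observed.

(defn contradictions
  "Returns the 1-indexed draw numbers at which the model's output
  conflicts with the observed draw."
  [model observed-draws]
  (keep-indexed
    (fn [i observed]
      (when (not= (model (inc i)) observed)
        (inc i)))
    observed-draws))

;; Conjectured model of the hidden algorithm: 'every draw is red'.
(defn conjectured-model [draw-number] "red")

;; Against draw-bead'' as the real (hidden) source code, the conjecture
;; survives 999 draws and is then contradicted on the 1,000th:
(contradictions conjectured-model (map draw-bead'' (range 1 1001)))
;; => (1000)

Each contradiction is an opportunity to improve the conjectured model – that, rather than the accumulation of successful predictions, is where the epistemic action is.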

Re the last example, draw-bead'', you might ask: 'If we've only drawn 500 beads, what earthly reason would we have to suspect that the code flips after 999?' As in: we would continue to think that draw-bead is the correct solution. We wouldn't conjecture that the algorithm contains that conditional (if (< beads-drawn 1000) ...) – after all, our predictions have always come true so far, so we've had no reason to adjust our model of the algorithm to include that conditional. In other words: we wouldn't be justified in introducing the conditional; we should only change our code when a prediction fails. And I would agree. But if somebody made that change, even without justification, they'd happen to be right! Not only would they be vindicated after 500 more draws, but they'd have discovered the true source code without any justification. So how can justification possibly matter?
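
To make that vivid in code (again reusing draw-bead and draw-bead'' from above): after 500 draws, the original code and the 'unjustified' variant are observationally indistinguishable.

;; Both candidate algorithms produce identical output for the first
;; 500 draws, so those draws alone cannot decide between them:
(= (repeatedly 500 draw-bead)
   (map draw-bead'' (range 1 501)))
;; => true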

The problem is that premise 2 is what is under question. You linked me to your ‘Buggy Dogs’ post as evidence of premise 2, [...].

No, I think you may have misread me. In #476, you asked, "[w]hat sort of evidence tells us that animals are not conscious?" I responded with a link to my 'Buggy Dogs' post. To be sure, there was an aside of mine in between on how consciousness isn't the same as creativity but follows from it, but when I wrote "For specific evidence [...]", I was referring specifically to your question.

I consider premise 2 – "Creativity is required for consciousness." – to be uncontroversial for the moment. But if you have arguments why that premise cannot be true, ie refutations, I want to know.

I am aware of the flaws in my epistemology which involves induction, but I am also aware of problems in Popperian epistemology (corroboration).

As I've said in #109 re corroboration:

Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. But I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important.

That leaves the problem of induction. You also wrote:

If Popperian epistemology can give me a good answer to these problems then that would be a win for me, because I would then have a much more elegant epistemology to employ.

Popper has addressed induction thoroughly. Have you read chapter 1 of his book Objective Knowledge, titled 'Conjectural Knowledge: My Solution of the Problem of Induction'?

#490 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #492

On second thought, re when I wrote:

Square had my up-to-date email address on file. When I wrote that they sent an email “to an old email address I do not use to log in to Square anymore”, I meant that they did so despite having my new email. I may update that line for clarity.

I don't think I need to update that line. The implication is that, when I log in to Square using some email address, Square must have that email address on file. (Otherwise I wouldn't be able to log in with it.)

#487 · on an earlier version (v3) of post ‘Don’t Use Square

you didn’t update an email when you changed emails

Square had my up-to-date email address on file. When I wrote that they sent an email "to an old email address I do not use to log in to Square anymore", I meant that they did so despite having my new email. I may update that line for clarity.

any credit card company would act this way!

Square isn't a credit-card company. They're a payment processor.

Also they ask for bank statements to ensure it’s an active account

I realize that. I don't think it would have helped anyway since no amount of activity in my bank account could convince them that my client is not a fraudster. (Recall that this particular issue was caused by my client's cards being declined repeatedly.)

So it looks like they did everything within the law and it was your screw up for the beginning.

I disagree. Even if their closing my account was legitimate – and I don't think it was – that is a separate issue from them keeping my money past their self-imposed deadline without explanation. I cannot imagine that the latter is legal.

#486 · on an earlier version (v3) of post ‘Don’t Use Square

I wrote:

[T]heories that have survived lots of criticism contain mistakes and truth.

(Which you misquoted, btw, by not italicizing the 'and'. Those italics are important. Continuing with my quote:)

Your [Kieren's] question was: “Why would you continue to use GPS if not because of its past success?” That’s one of the reasons why – that I know that even if it contains mistakes, it also contains truth.

Then you said:

In this scenario, the only true parts of GPS that you are aware of are where it has been successful in the past.

One can mistakenly think that it worked in some situations and also mistakenly think that it didn't work in others. We're fallible in our interpretation of test results, too. But in any case, I wouldn't restrict my truth claims about the theory to only those applications of it that I have observed (and correctly think worked). A major 'reason to believe' – and I'm phrasing this in justificationist terms on purpose – that a theory is true, or closer to the truth, is that it solves previously unsolved problems. People can and do make such truth claims without ever testing a theory – so there can be no corroboration or (psychological) induction at play.

Regarding your adjusted GPS example about precision timing of industrial-control systems, you wrote:

I think I could convince them to operate the system based on its past success (massive sample size).

As I believe I've said before, there was a massive sample size of tests of Newton's theories over the centuries, they were all successful, and yet Newton was wrong. Do I doubt that one could convince people based on past success? As I've said: no. But that's a psychological question. Sometimes, just a few decisive negative test results undo thousands of corroborations.

[W]hen you speak of “intelligence” you are actually referring to the definition of it in terms of creativity right? Therefore this is an invalid reason because you are referencing the very thing that is under question (creativity -> consciousness).

To be clear, the Deutschian claim as I understand it is that some entity is conscious if and only if it is creative. (Though I have wondered whether it's really: some entity is conscious if and only if it is critical. But I digress.)

Since it is an 'if and only if', we can deduce a lack of creativity from a lack of consciousness, and vice versa – can we not?
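
To spell out the formal point (my notation, not Deutsch's: $C$ for 'x is conscious', $K$ for 'x is creative'):

$$C \leftrightarrow K \;\equiv\; (C \to K) \land (K \to C)$$

Contraposition on each conjunct gives $\lnot K \to \lnot C$ and $\lnot C \to \lnot K$ respectively, so the deduction goes through in both directions.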

In #481, you wrote:

I had been pursuing the role of induction/corroboration in your epistemology and I think in regards to that I have been on track.

Are you interested in being right or in finding flaws in your thinking?

#482 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #492

Have you noticed that, when I offer refutations or counterexamples, you then keep tweaking the scenarios until I'm more or less forced to agree with you?

For example:

In this scenario, [...].

and

Ok, then consider [...].

and

However, if you imagine a period [...].

and

Ok, but before we could [...].

and

Assume you don’t have any background knowledge like this [...].

It's easy to find examples of you doing this, ie making adjustments to your original point so that my refutations or counterexamples don't apply anymore. You were successful in doing this with the example of the beads because you tweaked it sufficiently.

Do you think that approach is conducive to you changing your mind if you're wrong, or to "seeing this discussion to its conclusion", as you wrote?

#479 · on an earlier version (v1) of post ‘Choosing between Theories

The distinction is between a theory that has survived all falsification attempts (tentatively true), and one which has not (known to be false). So right now the problem is deciding what to do when all you have are theories that are known to be false.

I agree. I guess serious fallibilists always consider even their best guesses to be false, or expect them eventually to be found false. But they might be going too far: sometimes we do speak the truth, if only accidentally. (But, of course, we can never know whether we have spoken the truth, as Xenophanes said, and we should remain critical.)

I don’t think this works. A conjecture that GPS will work tomorrow is arbitrary and easy to vary. I could just as easily conjecture that it will not work. From a Popperian perspective, If we still had a good, tentatively true theory explaining GPS then we could rule out one of these options, but in this hypothetical we no longer have this.

Let me try another approach: what we know of Popperian epistemology (which is quite difficult to vary) says that theories that have survived lots of criticism contain mistakes and truth. Your question was: "Why would you continue to use GPS if not because of its past success?" That's one of the reasons why – that I know that even if it contains mistakes, it also contains truth.

I agree that people might just continue using GPS out of habit.

I don't think it's habit. What I've described is a hard requirement/dependence. In this light, regarding your follow-up question:

However if [habit] were the only remaining reason for using [GPS], then wouldn’t people quickly transition away from relying on it (especially for life critical application)? I think we can both agree that this wouldn’t happen, but why?

The reason they can't is not habit but dependence, plus the fact that coming up with new solutions is usually difficult. It takes skill, time, and also luck. They may quickly begin to work on alternatives, but it might take a while before they find a viable one. In the meantime, it seems to me they have no choice but to keep using GPS. Breaking with traditions is hard.

What sort of evidence tells us that animals are not conscious? It cannot be evidence of their lack of creativity (since creativity == consciousness is what is under question).

Following Deutsch, I think it's more like: creativity leads to consciousness. As in: creativity bestows consciousness/consciousness is a side effect of creativity. I don't think they're the same.

For specific evidence, see (I may have linked to some of these before):

On the topic of animal sentience more generally, I recommend my ‘Animal-Sentience FAQ’.

#477 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #482

Ok, but before we could adjust our theories there would be a period of time where all we have are theories that we know to be wrong.

As fallibilists, isn't that already the case, ~all the time?

I would continue to use and rely on GPS during this period and I imagine you would too.

Yes.

Why would you continue to use GPS if not because of its past success?

I think it'd be more like: a conjecture that GPS and GR can still solve some of the problems I need solved. To put it in your terms: there's no 'reason to believe' that GPS or GR are wrong in their entirety – they're wrong, but they contain truth. The true parts may still be useful.

Another reason to keep using GPS in such a scenario is tradition/dependency: lots of people rely on it, and removing it would cause chaos, so you have no choice but to keep using it. (In short: dependency management, avoiding revolutions.)

It's a lot like in software development, where introducing breaking changes should be done with care and ripping out entire pieces of software without replacement should generally be avoided. If my macOS is found to have a bug, I generally (though there are some exceptions) will not (or simply cannot) stop using macOS. If possible, I'll avoid the bug until it is fixed (ie a successor theory is found) or, if the bug is bad and pressing enough, I'll try to switch, if only temporarily, to another OS that isn't known to have this problem. In such cases, my thinking isn't 'my OS has worked in the past so it will work in the future' – if, say, I'm not confident in Apple's abilities, I may conclude that the OS won't be fixed in the future. My reason for continued use is my dependency on the OS and my theories around the nature of the bug (that it doesn't wreck the OS entirely, that the OS is still safe to use overall, etc).

[I]f you imagine a period between knowing that Newton’s gravity is incorrect and before GR was discovered, then you can ask a similar question to the one I asked above. Why continue using Newtonian physics during this period?

You can extend my previous answer to this question. In short: Newtonian physics still contained truth, and people needed to keep building bridges.

By the way, I think historically there was such a period, but I'd have to look into it further.

In the same way I would bet that the last jar contains red beads, I would bet that other animals have consciousness. This is a reason why I have the belief/expectation that animals are conscious, which conflicts with your restriction to consciousness requiring creativity.

But for the beads we assumed no other (background) knowledge, whereas with animals we have lots of evidence even if we can't see the figurative beads (ie look inside animals' heads). If there were no such evidence nor any theoretical background, so that the situation with judging animal minds really were analogous to the example with the beads, I might agree with you about animals being conscious.

It irritates me a little when Popperians react strongly to seeing the word “justification”. Popper rightfully rejects justification as far as it means to prove something as infallibly true, but the word also has a more everyday meaning. When I say “justify your claim”, I don’t mean “Prove absolutely and without error that your claim is true”, I just mean “Provide reasons why I should think your claim is any good”. Here “reasons” can be those that a Popperian restricts themselves to using.

That's fair.

But remember in this hypothetical we have found that both GR and quantum physics are wrong. Therefore, we no longer have good explanations for why GPS is working right?

We'd adjust our explanations to account for why it only works in certain cases, or for why GR (despite being wrong) still explains GPS but not certain other things. We've done this with Newtonian physics: we understand why it only works as an approximation and when it's still acceptable to use. From BoI ch. 5:

Newton’s predictions are indeed excellent in the context of bridge-building, and only slightly inadequate when running the Global Positioning System, but they are hopelessly wrong when explaining a pulsar or a quasar – or the universe as a whole. To get all those right, one needs Einstein’s radically different explanations.

Sometimes we don't yet know why an explanation doesn't work in some area, only that it doesn't, until we find its successor – which, per Popper, will explain where and why its predecessor failed. But that's for the negative cases. In a case where a theory does work, like Newtonian physics for bridge-building, then yes, continue using it – I don't see the problem. On the contrary, Newtonian physics may even have an advantage over relativity in legitimate applications, where, say, ease of use outweighs the fact that (I'm making this up) the 15th decimal place in the result is wrong, and you only need three decimal places anyway. Likewise, I'm not aware of anyone having found that GR does not work for GPS.

Back to your comment:

It sounds like you would bet that the last jar also contains red (even if only because of your psychology)?

Yes. As I have written: "[k]nowing nothing else I probably would bet on the next jar containing only red beads [...]".

These quotes give a general statement of Popper’s views, but it’s his comments on corroboration that Salmon was reacting to (the stem of this discussion).

We have also talked a bunch about justification, which Salmon invokes, too. Like when he writes "[w]hat I want to see is how corroboration could justify such a preference." I had taken the position that justification is always impossible and never desirable – but Popper is more nuanced than that and makes room for some form of justification (while being careful about how he phrases it). (I think I've 'inherited' this mistake from Deutsch. FWIW, when Deutsch borrows ideas from Popper (and maybe others), there's sometimes a reduction in quality, as I've written about here and here. I think fans of Deutsch should read those articles.) Since Popper accommodates justification a little bit, maybe I was wrong to reject it wholesale, and so maybe there's some compatibility between Salmon and Popper.

#473 · on an earlier version (v1) of post ‘Choosing between Theories

This discussion may be difficult if you take months to respond. Can you commit to responding within, say, a week?

By the way, the first quote in your last comment is a misquote of me. The first line (starting with "Because your") should be a nested quote since you originally wrote that, not me.

#471 · on an earlier version (v1) of post ‘Choosing between Theories

Adding some more info on whether Plato went by 'Aristocles', as Deutsch calls him in BoI ch. 10. The Wikipedia article I referenced in this footnote says that Plato didn't go by 'Aristocles'. I translate freely from the article (original German at the end of this comment):

Also, a tradition according to which Plato originally bore his grandfather's name, Aristocles, is a fabrication [...].

The corresponding source/footnote reads (slightly modified for the purpose of translation into English) "James A. Notopoulos: The Name of Plato. In: Classical Philology. vol. 34, 1939, p. 135–145, here: 141–143; Alice Swift Riginos: Platonica. Leiden 1976, p. 35, 38."

Auch eine Überlieferung, wonach Platon ursprünglich den Namen seines Großvaters Aristokles trug, ist eine [...] Erfindung.

lol, I wrote "all the other room get". It should be 'rooms' (plural).

[I]f progress is to be made, some of the opportunities and some of the discoveries will be inconceivable in advance.

– BoI ch. 9

Not only some but all of the discoveries will be inconceivable in advance, or else they wouldn't be discoveries.