Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Dennis Hackethal’s Comments

Identity verified

Germany, my home country, has started specifically targeting unvaccinated people: https://www.independent.co.uk/news/world/europe/germany-covid-vaccinations-christmas-transport-b1960065.html

Germany has agreed a series of tougher Covid-19 measures which stipulate that only those who have been vaccinated, test negative or have recovered from the virus can use public transport or go in to work from next week.

You won't be able to go in to work. They're taking away people's ability to make a living.

As if that weren't disgusting enough, ironically, their conditions imply that those who self-isolated successfully all this time and didn't catch the virus are treated worse than those who have had it. The former group are being punished for following the government's instructions. So why would anyone follow the government's instructions in the future?

“We are facing a serious emergency situation,” said Lothar Wieler, the head of the Robert Koch Institute, which monitors public health in Germany.

Haven't we heard this before?

“We’re looking at a horrible Christmas break if we don’t act now. We’ve never been as worried as we are right now. The outlook is bleak, extremely bleak. Anyone who can’t see how serious it is is making a big mistake.”

And this?

The normally cautious and mild-mannered [...]

Wow, it must be serious!

[...] Mr Wieler said that the government had made mistakes by reopening too much of the economy too soon after an earlier lockdown in the spring, and that the country’s vaccination rate of around 67 per cent was simply too low to slow the spread of the virus.

“We’ve got to stop giving those who are unvaccinated chances to avoid getting it with tests,” he said [...].

As I quoted at the beginning of this comment, currently you can still go to work without being vaccinated as long as you get a negative test – which is a hassle and discriminatory in and of itself – but it sounds like Wieler wants to take that away, too. Which means if you don't wish to get the vaccine, you either won't be able to go to work or you will have to actively try to catch the virus so that you can recover and then go back to work.

So here's one of the country's leading health 'experts' punishing 'wrongthink' by forcing people to catch the virus.

Fuck you, Wieler.

[...] Don’t you agree that a theory requires such reasons in its favour to be rationally deemed as a good theory?

If I claim you cannot enter your bedroom because there is a tiger in there, you will naturally ask for reasons why I think that. Perhaps I will describe what I heard or show you pictures of the tiger. I would call these reasons that justify my claim. Don’t you think such reasons are required for me to persuade you in such a situation?

Depending on the details of the situation, that may well be the case, but it's the inverse that matters: it's that the absence of such reasons would cause me to dismiss the claim that there's a tiger in my room. Whether I then consider the presence of such 'reasons' a justification, or whether they satisfy me, is just psychological.

For example, if there really is a tiger in my room, then if I listen closely, I should hear growling or some other noises. At least eventually – maybe the tiger is currently sleeping. If I knocked on the door or agitated the tiger somehow I should be able to hear it.

Now, if I do hear growling, that does not mean there really is a tiger in the room. It could be a recording, for example. Maybe it's a prank. It could be any number of things. As Deutsch likes to say, there's no limit to the size of error we can make.

Call failed refutations a reason in favor of a theory if you like – I think what's important is that we have a critical attitude toward our theories.

I think Popper explicitly presents the social sciences as a domain where corroboration is necessary. It is a science where we know the theories are not true, but instead approximations, or useful instruments for predicting social behaviours, wellbeing etc. [emphasis added]

That doesn't sound like Popper. It sounds like instrumentalism. But if you have a quote, I may change my mind. (Note the analogy to your tiger example here: I'm not asking for a reason your claim is true – it's that, if your claim is true, then it should be possible to provide such a quote, whereas if it is false, it should be impossible to provide such a quote.)

Q: I have criticism X of corroboration. How do you respond?
A: My idea of corroboration has survived all criticism.
His theory surviving previous criticism is irrelevant to deciding whether it survives this particular criticism right?

Yes. But I don't think Popper would have given that answer A because he knew that past performance is no indication of future performance. He instead would have addressed criticism X directly, presumably.

For example, what if a level of self referential modelling within a program conjures up consciousness?

If I had a nickel for every time I’ve heard this…

Do you have a refutation for this sort of idea and all similar variations? Perhaps in your FAQ?

I've written a little bit about self-referential stuff in my book. I think discussing this bit further would take us down a mostly unrelated tangent but I do recommend reading the book in general.

Re the beads, I think your variation of my example needlessly breaks with consciousness being private, but yes, it does contain the same problem. Do you see it?

Herd immunity is a collectivist's wet dream.

Kathy Hochul is the governor of New York.

New York's lawmakers should be less eager to pass new laws. From Ayn Rand's The Virtue of Selfishness, chapter 'Man's Rights':

A collectivist tyranny dare not enslave a country by an outright confiscation of its values, material or moral. It has to be done by a process of internal corruption. Just as in the material realm the plundering of a country’s wealth is accomplished by inflating the currency—so today one may witness the process of inflation being applied to the realm of rights. The process entails such a growth of newly promulgated “rights” that people do not notice the fact that the meaning of the concept is being reversed. Just as bad money drives out good money, so these “printing-press rights” negate authentic rights.

This is the kind of legislation the country can do without:

Kathy Hochul signs bill today requiring utility companies use their customers’ preferred pronouns. So brave! pic.twitter.com/mzi74tq9lm

— Libs of Tik Tok (@libsoftiktok) November 17, 2021

It seems that the best time to judge whether a regulation is necessary is before it's implemented, i.e. before an unknowable number of dependencies exist.

In a previous comment, I wrote:

I think people mistake flexible behavior – such as path finding, which can vary depending on the path – for intelligent/conscious behavior.

I've since found two instances of flexible behavior, one in a cat, the other in a machine, and I suspect most would consider the former evidence of consciousness, but not the latter.

First, here's a cat with flexible behavior. Source

There are several times where the cat pauses and reconsiders which way to go. That results in flexible behavior, but is no evidence of consciousness, because it may just as well be preprogrammed. Changing behavior as new information comes in is entirely preprogrammable. For example, building a video-game character which follows another character around, even as the second one changes paths, is trivially easy to do nowadays. I've done so, and you can read a tutorial on how to do that here. Note that I was able to write the tutorial without knowing how consciousness works.
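To illustrate (this is a deliberately crude sketch of my own, not the tutorial's actual code), the core of such follow behavior fits in a few lines: each tick, re-read the target's position and take one step toward it.

```python
# Preprogrammed "flexible" behavior: the follower adapts to new information
# (the target's changing position) every tick, yet nothing here is conscious.

def step_toward(follower, target, speed=1.0):
    dx, dy = target[0] - follower[0], target[1] - follower[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:          # close enough: arrive exactly
        return target
    return (follower[0] + speed * dx / dist,
            follower[1] + speed * dy / dist)

# The target can change course at any time; the follower adjusts each tick.
pos = (0.0, 0.0)
for target in [(5.0, 0.0), (5.0, 5.0), (0.0, 5.0)]:  # target keeps moving
    for _ in range(20):
        pos = step_toward(pos, target)
```

The behavior looks adaptive from the outside, but it's one fixed rule applied over and over.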

Next, consider this machine, whose behavior ~nobody will consider evidence of consciousness, even though I think it does something very similar. Source

Let's put aside the fact that this machine has very different hardware from the cat's, and that it's built to do something else – namely to balance balls. The key similarity despite these differences is that the machine also displays flexible behavior. It has to recalibrate constantly while balancing the ball.

So flexible behavior can't be sufficient for an entity to be conscious. And I don't see why a person who's had all limbs removed – i.e., can't move – and is deaf, blind, and mute, couldn't be conscious. In which case flexible behavior – or any behavior, for that matter – can't be necessary for being conscious, either, since that person wouldn't display any behavior whatsoever while still being conscious.

Update 2021-11-12: Improved a couple of sentences shortly after publication.

I wrote in my previous comment:

[Imprinting] was discovered by Austrian luminary animal researcher Konrad Lorenz.

That's false. From Wikipedia:

[Imprinting] was first reported in domestic chickens, by Sir Thomas More in 1516 as described in his treatise Utopia, 350 years earlier than by the 19th-century amateur biologist Douglas Spalding. It was rediscovered by the early ethologist Oskar Heinroth, and studied extensively and popularized by his disciple Konrad Lorenz working with greylag geese.

Here are some key quotes from Austrian biologist Hans Hass' book The Human Animal, as quoted on Elliot Temple's blog. Brackets are mine, not Temple's.

The digger wasp, for instance, seems to display highly intelligent brood-tending behavior. Having dug a nest, it flies off in search of a caterpillar, overpowers and kills it, drags it into the nest, and lays eggs on it. The emerging young are thereby provided with the nourishment they need and find protection in the nest, which the wasp seals. Interrupt the sequence of part-actions, however, and it soon becomes clear that no form of intelligence is at work here [emphasis mine]. Returning to its hole with the caterpillar, the wasp first deposits it in the entrance and inspects the interior, then reappears at the entrance, head foremost, and drags its quarry inside. If, while the wasp is inspecting its hole, the caterpillar is removed and deposited some distance away, the wasp will continue to search until it has rediscovered the caterpillar and then will drag it to the entrance again, whereupon the whole cycle[ – ]depositing, inspecting, etc. – begins all over again. Take away the caterpillar ten or twenty times, and the wasp will still deposit it at the entrance and embark on a tour of the hole, with which it is [or should be] thoroughly familiar by this time. The insect continues to be guided by the same commands, in computer fashion [emphasis mine], and evidently finds it hard to make any change in the overall sequence. Only after thirty or forty repetitions will the wasp finally drag the caterpillar into its nest without further inspection.

Hass then makes the mistake of attributing intelligence ("learning") where nothing but a simple path-finding-and-storing algorithm may just as well be at work:

Yet the digger wasp shows a great aptitude for learning where other procedures are concerned. While in flight, it memorizes the route which it must take on the ground when returning to the nest with its prey – a very considerable feat of learning. On the other hand, the burial of its prey is an instinctive action and, thus, strongly programmed. The wasp is almost incapable of influencing or altering this part of its behavior by learning, because it is controlled by an innate and extremely incorrigible mechanism.

I think people mistake flexible behavior – such as path finding, which can vary depending on the path – for intelligent/conscious behavior. Scientist Walter Veit made a similar (maybe the same, IIRC) mistake in a recent discussion with me and others.

There's also this buggy food-storing behavior in squirrels:

Once stimulated, whole cycles of action can proceed by themselves. In the squirrel, food storing consists of the following part-actions: scraping away soil, depositing the nut, tamping it down with the muzzle, covering it over, and pressing down the soil. A squirrel reared indoors will still perform these actions in full, even in the absence of soil. It carries the nut into a corner, where it starts to dig, deposits the nut in the (nonexistent) hole, rams it home with its muzzle (even though it merely rolls away in the process), covers up the imaginary hole, and presses down the nonexistent soil. And the squirrel still does all these things even when scrupulous care has been taken to ensure that it has never set eyes on a nut before or been given an opportunity to dig or conceal objects.

In other words, this algorithm is inborn and squirrels will execute it mindlessly and uncritically, bugs and all, when certain conditions are met.
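As a toy model of my own (not Hass'), such a fixed-action pattern can be written as a loop that runs each part-action unconditionally, with no check on whether the action achieved anything in the environment:

```python
# Once triggered, the whole cycle runs to completion -- even indoors,
# where there is no soil and the nut just rolls away.

FOOD_STORING = ["scrape soil", "deposit nut", "tamp with muzzle",
                "cover hole", "press down soil"]

def run_fixed_action_pattern(actions, environment):
    performed = []
    for action in actions:
        performed.append(action)   # executed unconditionally...
        environment.get(action)    # ...whether or not the environment cooperates
    return performed

# An empty environment (no soil, no hole) changes nothing about the sequence:
log = run_fixed_action_pattern(FOOD_STORING, environment={})
```

No step depends on the previous step having succeeded, which is exactly the bug the indoor squirrel exhibits.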

Toads' mating behavior is buggy, too:

The toad reacts just as unselectively at mating time when faced with the task of finding a mate. The male leaps indiscriminately at any moving body and embraces it. Should the object of its attentions be another male toad, the latter emits a rapid series of cries, whereupon the former releases its hold. The mating-minded toad sooner or later encounters a female, whose spawn it fertilizes, but it has no innate "image" of a prospective mate. Waggle your finger in front of a male toad and it will mount and embrace it in exactly the same manner.

Embracing anything that moves is reminiscent of imprinting, which was discovered by Austrian luminary animal researcher Konrad Lorenz. He found that goslings will follow around ('identify as their mother') the first moving object they see after hatching. If that's their mother, they will follow her, but they will also follow a person. A primitive movement-detection algorithm suffices here.
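To sketch how primitive such a movement-detection algorithm can be (this is my own illustration, not a model of actual goose neurology): lock onto the first object observed to change position, whatever it happens to be.

```python
# Toy imprinting: whatever moves first after 'hatching' becomes 'mother' --
# be it the actual mother, a person, or a boot.

def imprint(frames):
    """frames: list of {object_name: position} observations over time.
    Returns the name of the first object seen to change position."""
    previous = frames[0]
    for frame in frames[1:]:
        for name, position in frame.items():
            if name in previous and previous[name] != position:
                return name  # imprinted: follow this object from now on
        previous = frame
    return None

frames = [{"goose": (0, 0), "boot": (5, 5)},
          {"goose": (0, 0), "boot": (5, 6)},   # the boot moves first...
          {"goose": (1, 0), "boot": (5, 7)}]
mother = imprint(frames)  # ...so the gosling 'identifies' the boot as mother
```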

Lastly, turkeys' brood-tending behavior is buggy to destructive levels:

How little such reactions are associated with intelligence was shown by experiments with turkeys. To the turkey hen, the characteristic cheeping of turkey chicks is the key stimulus which arouses brood-tending behavior. Conceal a loudspeaker which emits this cheeping sound inside a stuffed polecat – one of the turkey's natural foes – and the turkey hen will take it protectively under her wing. Deprive the turkey hen of her hearing, on the other hand, and she will kill her own young because the appropriate key stimulus fails to reach her IRM.

Parasitic birds can abuse such uncritical attitudes by laying eggs in other birds' nests so as to avoid the burden of child rearing. Some may interpret this as 'cunning' on the part of the parasite, but it should come as no surprise that genes coding for slightly more parasitic behavior managed to spread through the gene pool. And that behavior can, again, be executed mindlessly, in robot fashion.

Writing this comment I'm getting the feeling that 'mindlessly' may be the same as 'uncritically'...

#124 · on post ‘Buggy Dogs’ · Referenced in post ‘Sleepwalking’ and in comments #352, #482, #533, #535

Here's another. A cat holding and kicking something that isn't there.

Saying one's animal is 'broken' is a meme. People use robot-adjacent vocabulary without realizing their pets really are robots.

Video source

EDIT: Maybe the cat is holding something that's too small to be kicked (or seen).

There's this video of a buggy cat.

This video is both evidence that the cat is consciously trying to jump without realizing it's too small ('keewwwt') and that its jumping algorithm and/or height-estimation algorithm is buggy, depending on how you look at it.

Video credit

#121 · on post ‘Evidence Is Ambiguous’ · Referenced in comments #122, #477

In The Beginning of Infinity ch. 7, David Deutsch writes about how people over-attribute intelligence to animals that can recognize themselves in the mirror:

[S]ome abilities of humans that are commonly included in that constellation associated with general-purpose intelligence do not belong in it. One of them is self-awareness – as evidenced by such tests as recognizing oneself in a mirror. Some people are unaccountably impressed when various animals are shown to have that ability. But there is nothing mysterious about it: a simple pattern-recognition program would confer it on a computer.

(I personally wouldn't call that awareness, but his argument stands.) I have written software that allows MacBook Pros and iPhones to recognize themselves in the mirror. You can try it out. Your MacBook Pro/iPhone does not suddenly become conscious upon visiting that website.
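The actual software does more than this, but the basic point – that 'self-recognition' reduces to pattern matching – can be shown with a deliberately trivial sketch of my own:

```python
# 'Mirror self-recognition' as mere pattern matching: a device 'recognizes
# itself' if the pattern in the camera feed matches a stored template of its
# own appearance. No awareness is involved at any point.

SELF_TEMPLATE = "macbook-pro-silhouette"  # hypothetical stored self-pattern

def recognizes_self(camera_pattern, template=SELF_TEMPLATE):
    return camera_pattern == template

in_mirror = recognizes_self("macbook-pro-silhouette")   # matches the template
other = recognizes_self("cat-silhouette")               # does not match
```

If passing this test impressed us, we'd have to say the function above is self-aware, which is absurd.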

Deutsch then applies this argument to other areas which, in my terminology, are evidence of smarts but not intelligence:

The same is true of tool use, the use of language for signalling (though not for conversation in the Turing-test sense), and various emotional responses (though not the associated qualia). At the present state of the field, a useful rule of thumb is: if it can already be programmed, it has nothing to do with intelligence in Turing’s sense.

Popperians come across as if they are allergic to the words “justification”, “support”, etc.

Yes, because we take seriously what Deutsch wrote in ch. 10 of BoI:

So the thing [justificationists] call ‘knowledge’, namely justified belief, is a chimera. It is unattainable to humans except in the form of self-deception; it is unnecessary for any good purpose; and it is undesired by the wisest among mortals.

(Upon further reflection, I don't like the last part-sentence, as there's a bit of intimidation going on there.)

Back to your comment:

[...] I don’t think his use of the word “justified” is grounds for dismissing his argument.

It is, for the reason I have mentioned: he thinks that what isn’t justified isn’t rational. The view that beliefs should be justified isn't justified. So it doesn't pass the 'mirror test', as Logan Chipkin calls it.

Corroboration lets us know which of our yet-to-be-falsified theories to prefer for practical predictions. So there seems to be a gap that needs filling.

What's interesting is that I've never run into a situation where I wished I had corroboration to help me break symmetry. I also rarely run into situations where multiple viable yet conflicting theories are left over. The only situation I can remember off the top of my head is thinking that non-creative processes might give rise to consciousness, so I couldn't yet break symmetry in favor of the notion that only creative processes do, but then I found a refutation to that claim (I consider it a refutation – others might not).

But yeah, either way, maybe what Elliot's written fills the gap. I have yet to read it.

Regarding Salmon's remark about being critical of critical methods:

But [Popper's] answer is inappropriate in this context because our aim is precisely to subject his philosophical views, in the best Popperian spirit, to severe criticism.

To be clear, you think Salmon's saying that Popper presupposes the thing we wish to be critical of, namely being critical?

Perhaps it is the alternative/replacement to corroboration, but if it is relying on Elliot’s yes/no philosophy then I don’t think it will provide enough.

Why not? (I haven't studied yes/no philosophy.)

For example, what if a level of self referential modelling within a program conjures up consciousness?

If I had a nickel for every time I've heard this...

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

If somebody pointed out that this isn’t logically valid reasoning, would you consider that a candidate refutation of your background knowledge (as you suggested as a way forward)?

Yes, if the criticism is general enough to refute the other couple variations of this sort of reasoning that leads me to believe other animals are conscious.

Consider this variation. Say everyone owns an urn with colored beads in it, and say you can look only at the beads in your own urn (since consciousness is private, as you called it), and it is common knowledge that everyone owns urns:

‘Beads are present both in my urn and others’ urns.
My urn contains red beads in particular.
Therefore, red beads have a fair chance of being present in other people's urns.’

See the problem?

I think a definite way forward is to dig into the reasoning that leads people to believe other animals are conscious.

Maybe later, as this discussion is already branching out too much, which makes it harder to address criticisms and make progress. I suggest focusing only on the bead example for now. We can always get to why people believe that animals are conscious later.

#119 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #137

This video is interesting. When I first saw it I struggled to explain it for a few seconds:

pic.twitter.com/hdHpn1vR1J

— No Context Animals (@AnimalNoContext) September 29, 2021

Many commenters think the dog's behavior is a sign that it's extremely smart, even cunning. Some other commenters realized it was a trick. I pointed this out, too:

How could this meat robot do that without being conscious? (Watch till the end first)

I suspect the video is a trick: the dog was trained to go over this exact sequence dozens of times with human guidance until it was good enough its owner could make the video to fool people. https://twitter.com/AnimalNoContext/status/1443031090348838913

— Dennis Hackethal (@dchackethal) September 29, 2021

I scanned dozens of comments, including foreign-language ones, and even those who realized it was a trick didn't state how the trick worked.

Then I was told by many that what I was saying was obvious, even accused of ruining the fun for others. But judging by some of the comments on the video above, I don't think it's obvious. As I wrote in the previous comment, people are eager to over-attribute intelligence to animals. And of course, several people think the video is oh-so adorable.

People are too eager to attribute intelligence to animals. For example:

A magpie tries to smother a fire. A lot smarter than many people who don't understand why it's important to smother a fire. https://t.co/ldgsdJC9ZD

— Carlos E. Perez (@IntuitMachine) October 29, 2021

There are a couple of big mistakes in Carlos' tweet. First, even if you think animals are conscious, no animal is smarter than people. Second, when you watch the video, you can see that the magpie is just gathering sticks, presumably to build a nest. It may well have no idea that there's a fire or that fires can be put out, let alone how to do that.

The video title is erroneous, too: "A magpie takes out a fire". No, it doesn't. It just gathers sticks. You can see a pile of sticks it has gathered on the right-hand side, and at 0:06 you can see it adding a stick to that pile. And at the end of the video you can still see a fair amount of smoke so I think the fire is still burning, meaning the video doesn't provide evidence that the magpie actually extinguishes the fire as the title claims.

Let's assume, for the sake of argument, that the bird really is trying to put out the fire. Why couldn't that be preprogrammed genetically and then executed mindlessly by the bird? If you can't say, then you don't know that it's intelligent behavior.

In addition to over-attributing intelligence to animals, people don't take them seriously. When animals display overt bugs, people just shrug it off as 'cute'.

I noticed that you've discussed with Elliot underneath his 'Rationally Resolving Conflicts of Ideas' article quite a bit, so maybe you're already familiar with some of the linked essays.

#110 · on an earlier version (v1) of post ‘Choosing between Theories’

I think it is a little harsh to dismiss the paper from one sentence [...]

We need not worry about people's sensibilities when deciding whether to continue reading their papers. Imagine someone publishes a book called 'How to Do Basic Arithmetic' and then claims somewhere on the first few pages that 2 + 2 = 5. You'd put the book down.

That said, I now think I was mistaken, and I did read Salmon's text through page 122, as you suggested (a bit further actually):

If, however, we make observations and perform tests, but no negative instance is found, all we can say deductively is that the generalisation in question has not been refuted.

Yes.

In particular, positive instances do not provide confirmation or inductive support for any such unrefuted generalisation.

Yes.

At this stage, I claim, we have no basis for rational prediction. Taken in themselves, our observation reports refer to past events, and consequently they have no predictive content. They say nothing about future events.

OK, this is basically Hume's statement of the problem of induction. But Salmon is wrong to conclude that we have "no basis for rational prediction". If he's looking for justification, he's simply mistaken that that's needed (or possible).

If he's claiming that prediction is not possible at this stage, he's mistaken about how theories work. One needs a theory ("generalisation") before one can perform any tests. If the theory didn't make any prediction before testing, how would you know what to compare your test results against? A theory alone suffices to make predictions. If you roughly know, from theory, how the earth moves, you can and will predict that the sun will rise tomorrow, even if you have never observed a sunrise before.

Lastly, if, on the other hand, he's claiming that one cannot know whether the theory will continue to make true (or false) predictions in the future – meaning one cannot make reliable predictions about the theory's predictions – then he's correct to claim that, but wrong to assert that there's a problem with that. This is only a problem for someone who's looking for reliable knowledge, which cannot exist.

My aim is to emphasise that, even if we are entirely justified in letting such considerations determine our theoretical preferences, it is by no means obvious that we are justified in using them as the basis for our preferences among generalisations which are to be used for prediction in the practical decision-making context.

Evidence of him being a justificationist.

Conjectures, hypotheses, theories, generalisations—call them what you will—do have predictive content.

This convinced me that by "generalisation" he means 'conjecture' or 'theory'.

What I want to see is how corroboration could justify such a preference.

Even more evidence of him being a justificationist. Immediately afterwards, he says:

Unless we can find a satisfactory answer to that question, it appears to me that we have no viable theory of rational prediction [...]

He's saying, in effect, that what isn't justified isn't rational. This is a bad mistake (and an age-old one at that).

But if every method is equally lacking in rational justification, then there is no method which can be said to furnish a rational basis for prediction, for any prediction will be just as unfounded rationally as any other.

At this point, he's basically stuck. He's trying to force Popperian epistemology into a justificationist/inductivist straitjacket and then wonders why that can't work. He also comes dangerously close to relativism.

We do have reasons for – or rather, means of – preferring some methods over others, namely by elimination through criticism. For example, you wouldn't flip a coin (his example) to decide on a theory, because by the same method a conflicting theory could be 'shown' to be true as well. And the same theory could be 'shown' to be false shortly after, and then flip back and forth. So that can't work, because we know – also from theory – that reality doesn't flip like that. And you can't choose the method of sorting theories alphabetically (also his example) because then their truthiness would depend on spelling, and reality doesn't care about how we spell things. Importantly, justificationism can't work because it leads to an infinite regress, and we know – again from theory – to reject infinite regresses.

If you keep eliminating methods this way, pretty soon you are left with very few, maybe only one, way of choosing whether to tentatively consider a theory true and whether to act on it. I think that's why Popper put such emphasis on criticism: it's not just theories we can criticize, but also our methods of evaluating theories (which are themselves theories), our preferences for doing so (ditto), etc.

Related to that, Deutsch writes in ch. 13 of The Beginning of Infinity:

During the course of a creative process, one is not struggling to distinguish between countless different explanations of nearly equal merit; typically, one is struggling to create even one good explanation, and, having succeeded, one is glad to be rid of the rest.

I think you could reformulate this quote as follows so it applies to the issue at hand: 'During the course of a creative process, one is not struggling to distinguish between countless different methods of nearly equal merit for judging conflicting theories; typically, one is struggling to create even one good method, and, having succeeded, one is glad to be rid of the rest.'

In light of that, after I read on a bit, I found that Salmon quotes Popper on p. 123 as saying:

Thus the rational decision is always: adopt critical methods which have themselves withstood severe criticism [...]

This is precisely the conclusion at which I have arrived independently above.

I will say: Salmon is right to point out that there are problems with Popper's concept of corroboration. Others have written about that. But I think you can retain much of Popper's epistemology just fine without accepting that concept. It's not that important.

An article that may interest you is this one by Elliot, which collects several different articles on the topic of how to resolve conflicts between ideas rationally. (I have not read the linked articles yet apart from the one I mention below.) Note that this is slightly different from Salmon's problem of rational prediction in particular – and I think he's mistaken in his focus on prediction over explanation – but it seems to me that once you have rationally chosen an idea, you can rationally make predictions using that idea.

There's also this article by Elliot, which you may wish to read first, in which he writes:

The idea of a critical preference is aimed to solve the pragmatic problem: how should we proceed while there is a pending conflict between non-refuted theories?

Which sounds right up your alley since it's about the problem of practical decision-making as referenced by Salmon.

I plan to read these articles myself, and if you like, it could be fun and fruitful to compare notes and maybe discuss further afterwards.

Regarding the calculator stuff, I think it's notable that you commented on your experiences involving calculators quite a bit (the word 'experience' and variants thereof appear five times in your most recent comment). In particular, you wrote:

You are right that all the experiences I have in regards to calculators are explained without reference to it being conscious [...]

But that's not what I said. I made no claims about your experiences (claims about something subjective/psychological), only about how calculators work (claims about something objective/epistemological).

In addition to calculators, there's also the issue with Lamarckism I mentioned, which is an important factor in breaking symmetry in favor of the idea that execution-only information processing, to which animals seem to be constrained, cannot create new knowledge.

Then you wrote:

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

If somebody pointed out that this isn't logically valid reasoning, would you consider that a candidate refutation of your background knowledge (as you suggested as a way forward)?

[T]he idea that creativity is required for consciousness [...] does not explain why uncreative animals are conscious [...]

Well, you can hardly criticize a theory for not doing something it's not meant to do!

#109 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comments #490, #588, #589

Here's a video by Instagram user iamkylo_ of a cat 'drinking' from a faucet.

Not only does the cat have no idea what it's doing or that that's not working, it doesn't correct the error either. Nobody's home and the lights aren't even on.

Toward the end, as one commenter points out, it even swallows the non-existent water in its mouth. That makes me think swallowing in cats just happens at certain intervals while in a state of 'drinking', not based on how much water is in the mouth. (But the commenter just describes this behavior as "[a]dorable ❤️❤️", as expected. As of 2021-10-27, none of the commenters interpret this video as evidence that the cat isn't conscious.)

For those who have Instagram, here are two other videos of the same cat exhibiting the same bug:

https://www.instagram.com/p/CQl4XW4AzdP/ and https://www.instagram.com/p/CRlOBe2jWj3/

These are interesting because they start with the cat doing it right, then getting into the erroneous state (again without correction).

#107 · on post ‘Buggy Dogs’ · Referenced in comments #477, #533, #535

I think that the argument provided by Salmon articulates well the reason why I do not adopt a Popperian epistemology. If you want to refute the criticism laid out by Salmon, then I think you will need to read the first 8 pages - up to page 122 where a neat summary of the criticism is provided.

I may do that if I am wrong about Salmon misrepresenting Popper's account of scientific knowledge. If I'm not wrong about that, Salmon's misrepresentation seems grave enough that it's reasonable to expect not much of value to be gathered from his text. So – am I wrong?

What Elliot and yourself refer to as breaking symmetry I would describe as ‘providing reasons in favour of one claim over another’, would you agree?

Although this can sometimes, in effect, be what one ends up doing, I think the approach is a critical one, with the goal of eliminating one of the conflicting theories, not elevating the other in some way by providing support for it.

DD does not appear to elaborate on reason 1, but I don’t currently have my copy of BoI with me to verify this.

I believe you're correct.

if consciousness arises from all information processing, even things like calculators must be conscious. But our best explanations of how calculators work, which are very good and part of our background knowledge in this case, don’t invoke consciousness, so we should conclude that calculators are not conscious. Therefore, it cannot be true that all information processing results in consciousness.

I do not accept this fact, because I do not accept that we know that calculators are not conscious just because our best theories do not invoke consciousness.

This is a variation on Deutsch's criterion of reality. From The Beginning of Infinity, chapter 1:

[W]e should conclude that a particular thing is real if and only if it figures in our best explanation of something.

We need some way to determine, tentatively, whether calculators are conscious. Going off of whether our best explanations tell us they are is a good way, I think. And no matter which way we choose, we can always say 'but they still might be conscious' – but then we never break the symmetry. In other words: yes, it's always possible to be mistaken about how to break the symmetry, but one has to try one way or another. I think the fact that our best explanations of calculators – which are fantastic since we have invented them and know how to build and control them – don't mention consciousness is an almost irrevocably fatal blow to the idea that calculators are conscious, only to be reconsidered if our explanations of calculators change accordingly.

Additionally, there are no big unknowns in our understanding of how calculators work, neither their hardware nor software. With the brain that's different – when it comes to the brain's hardware (well, wetware), in addition to being a universal computer, it seems to have all kinds of special-purpose information-processing hardware built in and connected to it (like eyes), some of which we don't understand well yet. But those are not important for consciousness, and we do understand universal computers well, be they made of wetware or hardware.

Then you wrote:

Our best theories explaining how human brains work (neuroscience) do not invoke consciousness (except as something to be explained), but we do not conclude that we are not conscious.

Well, the parenthetical "(except as something to be explained)" makes all the difference here. Our explanations of calculators don't have that gaping hole. (Though technically that gaping hole lies not in our explanations of brain hardware but brain software. So, to be clear, and for the comparison to work, when I speak of explanations of calculators, I really mean explanations of their software. For calculators we have great explanations for both their hardware and their software. For the human brain as a universal computer we have great explanations, while for some of its software, especially creativity and consciousness, we do not.)

All that said, I believe your condition of providing "some fact about the world that the claim ‘creativity is required for consciousness’ explains so well that it would be implausible to think otherwise" is still met.

#105 · on an earlier version (v1) of post ‘Choosing between Theories’ · Referenced in comment #588

Luke,

[M]ore specifically the thesis is on the identity conditions of persons as overlapping with the subject’s disposition for conscious experiences [...]

Yeah, that's the kind of unnecessarily complicated academic lingo people will use in their theses.

[A] curiosity which comes about from [it being perfectly feasible to posit the existence of an entity which responds to stimuli in a way that we would expect of a conscious human without it actually having consciousness] is that no kind of behaviour is satisfactory to give us epistemic closure on a claim about the consciousness of another entity. What this also means is that there is no necessary causal relation going from a consciouss experience to a mental state or type of behaviour [...]

How does the part "what this also means" follow?

You assume that I am too because we are physiologically similar [...]

No, it's because we both run software in our brains that makes us conscious. Physiology cannot matter due to computation being substrate independent. See also this entry in my FAQ on animal sentience.

The problem I have with your argument is that in the same way that seemingly complex behaviours does not a conscious entity make, neither does the lack of complex behavioural responses act as evidence towards the entity not being conscious.

I don't believe I said that, but you seem to be implying that I did. Do you have a quote? (You follow this up by referring to errors in animal behavior, which I did write about, but I don't believe I claimed that the presence of errors indicates a lack of complexity.)

For example, in the same way that we can imagine an AI which feigns consciousness, we can also imagine one which is more intelligent than a conscious human without being consciousness.

I don't think so. Following David Deutsch, I believe intelligence is something you either have or don't have – it's a binary thing, not a matter of degree. And, also following Deutsch, nothing can be more intelligent than people – what people refer to as 'superintelligence' can't exist because what we might call the 'intelligence repertoire' of people is already universal. Lastly, if consciousness really does result from intelligence, then any entity that's intelligent would also be conscious – it couldn't be intelligent without also being conscious.

If, on the other hand, you're referring to the smarts of an entity – which can exist in degrees – then yes, we can imagine an entity that's smarter than humans without being conscious. But I don't think this presents a conflict for me. It's just that smarts and intelligence are orthogonal, as I have written.

If we argue that so-and-so animal behaviour can be adequately explained without appeal to conscious experience, then why can’t we apply this argument to other humans?

While I agree that behavior can't definitely tell us either way – no evidence can – I think humans are so far off the mark in their creativity compared to animals, as Elliot Temple once pointed out to me, that I'm not worried that humans are potentially not really creative or conscious. Humans are markedly different from animals in what they have achieved. Many people like to dehumanize the human race by claiming humans are not conscious or creative or don't have free will – I'm not one of those people.

This is where I believe we land in a bit of an impass until neurobiologists figure out more about what parts of the brain appear to be responsible for consciousness.

Don't hold your breath, as neurobiology is pretty much useless in this regard.

In case "Fear and Intimidation" sounds like an unnecessary repetition: fear keeps people caring for animals, intimidation recruits people.

So Geometric Unity is obscure by its very nature.

Not to people who study that stuff, as you said yourself. But again, he doesn't address those people.

It’s pretty clear if you’ve ever heard Weinstein interviewed about this that Geometric Unity is not actually intended to be a work of entertainment. He literally thinks his theory is true [...].

Works of entertainment can be true and clear. There's no conflict there.

As explained in the previous paragraph, there’s just not a way to make stuff like this non-obscure to laypeople, and surely he knows that.

Him knowing that is a prerequisite for his capitalizing on it to impress his fans.

Speaking of what's "pretty clear if you’ve ever heard Weinstein interviewed" – about other topics, too – he uses obscurantist language in verbal discussions as well. For example, when asked about Bitcoin's "most interesting property", he responded:

The amazing thing about the blockchain and bitcoin was that it [sic] emerged to show us that we could have a locally enforced conservation law that mimicked physical reality and allow us to have a locally determined medium of exchange [...].

What the fuck is he talking about?

His obscurantism isn't hard to find. This is the first interview I picked off of YouTube at random, and this quote is from the first time he talks. People will have no idea what he just said, but they'll think they heard something profound they're just not smart enough to understand. One commenter wrote:

Well I understood about 1 percent of that

While somebody else wrote:

Eric Weinstein is God tier in some of his answers [...].

So impressing people this way actually works. In another interview, the second one I picked at random, the interviewer praises him in front of the live audience:

In addition to being one of the most brilliant economists on our globe, and also being what many consider the Einstein of our generation [...].

LOL.

LMF,

As I wrote in a previous comment:

[Weinstein] calls his paper a “work of entertainment”. Hence it is aimed at a general audience, specifically the audience of his podcast, most of whom he knows won’t understand him. I think “obscurantism” captures it aptly.

You wrote:

The sentence you quoted about a “four-dimensional manifold with a chosen orientation and a unique spin structure” isn’t actually obscurantism: That’s just how mathematicians talk. His statement has a perfectly precise meaning, which anyone with a background in graduate-level differential geometry [...] would understand.

First, I made a mistake: I originally wrote Weinstein is interested in "impressing his peers" (emphasis added). He's not – he's interested in impressing his fans, most of whom aren't his peers, which is exactly the problem. I'm claiming that he's using math lingo his target audience doesn't understand to impress them. Target audiences for "work[s] of entertainment" consist mostly of laymen, whom he specifically addresses, and of only a few mathematicians or theoretical physicists.

However, I didn't claim that mathematicians don't talk like that, or that what he says is vague. (Academic obscurantism is often vague/hard to pin down but I didn't claim that in this case.) If he were addressing his actual peers (not wannabe peers like his fans), I wouldn't have a problem with it.

You'd still be able to tell that what Weinstein wrote is obscurantist if you weren't a theoretical physicist. In fact, you might be better suited to tell if you weren't because it would be more jarring to you.

You misquoted Weinstein btw.

2021-09-27: I did some light editing of this post shortly after publishing it.

I think consciousness may not feature explicitly in code in the sense that you could 'read it off', but code that is conscious when run would be novel in an unexpected way. I don't expect it to look just like any old program people have written before.

When I wrote

// see any consciousness in this code??

I meant to point out that that code looks just like any other: it's the same old mindless execution of pre-existing knowledge.

Regarding the linking theory to see which parts of the code 'light up' with consciousness – I really like that phrasing by the way – I expect once we understand consciousness we will know what to look for in code to tell, without running it, whether it (or parts of it) will be conscious when run. In other words, I'd guess a theory of consciousness would come with such a linking theory.

#95 · on an earlier version (v1) of post ‘Animal-Sentience FAQ’

To be clear, I don't argue that

animals are conscious [...] because why would the consciousness have evolved, if it didn’t play some functional role in animal programming?

because this is based on the misconception that evolution is always adaptive and constitutes progress/fulfills some function. There are plenty of examples in the biosphere of 'adaptations' not fulfilling any apparent purpose or even being plain disadvantageous.

Having said that, in the particular case of creativity, I think consciousness arises as an emergent byproduct of creativity, and creativity is hugely advantageous. For one thing, any genetic mutation that reduces a gene's ability to do its 'job' (meaning most mutations) can be 'fixed' at runtime by creativity.

But I think you know that I'm not arguing that animals are conscious, so if by 'you' you mean a hypothetical 'someone': well, they'd be wrong to argue that, for the reasons I just said.

[human] programs are either creative or uncreative, and we can (even if we do not yet know exactly the details) write both types of program without consciousness

I'm not sure we know that either way. As a conjecture, I have an inkling that it is false.

[...] maybe you were thinking [consciousness] is not needed for creativity per se [...]

Again, I view the chain of causation the other way round. Do you have a refutation of that view?

Because if consciousness does not play some functional role for people, why did it persist over evolutionary time?

See above.

If I could apply that same criterion of dismissing consciousness in animals for its lack of necessity to their programming [...]

To be clear to others who read this: I'm not arguing from necessity. The quote continues:

[...] to dismiss consciousness in humans, I suppose I would end up concluding that consciousness is not a matter of software

Why/how does that follow?

#93 · on an earlier version (v1) of post ‘Animal-Sentience FAQ’

Adam, you provided a blockquote without a source. Where's that quote from? Or was it instead meant to be emphasized text which you wrote?

To address what you wrote:

We don’t understand whether consciousness is needed for creativity.

I think it's the other way round: creativity is needed for consciousness. The latter arises only from the former.

So why are we conscious—to the degree that we are—when we are conscious? Conversely, why are we unconscious—to the degree that we are—when we are unconscious?

It seems to have to do with automation, among other things. Once you've automated riding your bike you're not conscious of all the minute movements you make, just the overall experience. Whereas when you first learn to ride a bike you're aware of the smallest movements because you need to correct lots of errors in them.

We also seem to be aware only of ideas which have spread sufficiently through our minds.

I speculate more about why we are conscious of some things and not others in the referenced post 'The Neo-Darwinian Theory of the Mind'.

Another thing you may find interesting is fleeting properties of computer programs, which I have been thinking about lately. What's promising about them is that they don't exist before runtime, meaning they need to be (and can be!) created first. They don't already exist just by virtue of the program existing, which seems to be common for properties of present-day programs. You can read more about them in my article 'What Makes Creative Computer Programs Different from Non-creative Ones?'.

#91 · on an earlier version (v1) of post ‘Animal-Sentience FAQ’

@Shane: Reminds me of https://www.youtube.com/watch?v=9SGV3ctLlu4

To think that Lennon could have had virtually any pussy he wanted. Yet he chose that one. Makes you think, doesn't it?

By the way, and to be clear, thanks to the use of setTimeout it doesn't matter if your 'recursive' call is a tail call. It could be called anywhere in the function and it shouldn't make a difference. Just know that the rest of your function runs first.

hasen, I wrote that humans have the same bug in other situations and that we do realize it.

[...] the dog feels no shame about it’s instinctive movements, and has no incentive to stop.

Exactly. It should have an incentive to stop – it's not touching water! It should understand that incentive and adjust its behavior accordingly (without going through dozens of iterations of reinforcement 'learning', i.e. what you call "housetraining"). If a person tried to swim above water you'd say: Dude, what the fuck are you doing. But if an animal does it people rush to defend it. I don't get it.

Humans also have instincts that make them act in ways that are not sensible or proportional to the situation, and they can only overcome them with high motivation for it - for example the fear of snakes, fear of spiders etc.

Yes. See the example of me thinking my face is wet. But as I said, humans deal with that very differently than dogs, and somewhere in that difference, I think, lies the difference between sentient and non-sentient.

I was informed by a cat owner that cats have the same 'swimming' bug dogs have.


#74 · on an earlier version (v1) of post ‘Snake’

Aaron Stupple suggests adding Bayesianism to the list, which arguably falls under inductivism, but deserves an honorable mention due to its recent popularity.

Thatchaphol Saranurak has pointed out to me that computer programs, despite their deterministic nature, are not always entirely predictable. There's the halting problem, of course.

As a result, I think my focus on predictability was wrong. That's not where creativity and computer programs clash. I have crossed out the corresponding parts.

Notwithstanding, I think my conclusion that there is a problem to be solved still holds, only that it now focuses solely on determinism, not on predictability. In short, the problem is: computer programs are predetermined; creativity is not predetermined; yet creativity is also a computer program. And the "emergent approach" is still the only way I see out of that conundrum.

#72 · on an earlier version (v1) of post ‘Two Guesses About Creativity’

Another way to think of the connection between the prevailing conception of physics and programming is this:

Computations are physical processes (Deutsch) and as such they must be deterministic, like all other physical processes.

So it's not just that the two prevailing conceptions are analogous as I wrote in the original post above—it's that computations are just one specific kind of physical process, and that is why computations are always deterministic.

#71 · on an earlier version (v1) of post ‘Two Guesses About Creativity’

Others have mentioned that there is quantum computation. To be clear, that's not what I have in mind when I speak of different modes of computation. I'm looking for something that doesn't adhere to the prevailing conception in programming. I'd guess that quantum computation adheres to it, too, but I'm no expert on quantum computation. Also, IIRC, the set of computable algorithms (i.e. runnable programs), which includes creativity, is exactly the same for both classical and quantum computers.

#70 · on an earlier version (v1) of post ‘Two Guesses About Creativity’

[...] and just thought that I could try and provide a better relationship with kids who struggle??

The in-text uptalk... ugh. This is a sentence, but she ends it in two question marks.

I see that David uses the phrase "taking ideas seriously" in his book The Fabric of Reality, and I read that years ago, before I read any of Rand's books—maybe that's what inspired me to use the phrase originally!

Technical papers are aimed other physicists, not the general public.

I don't think Weinstein is addressing physicists. In a footnote on the first page, he writes:

The Author is not a physicist and is no longer an active academician, but is an Entertainer and host of The Portal podcast. This work of entertainment is a draft of work in progress [...]

He calls his paper a "work of entertainment". Hence it is aimed at a general audience, specifically the audience of his podcast, most of whom he knows won't understand him. I think "obscurantism" captures it aptly.

Following a suggestion, I have changed the following passage

Then there’s the issue that nobody has ever given a moral explanation for why it would be okay to employ coercion against peaceful people.

to say "non-refuted moral explanation" instead. As suggested, "there has been plenty of moral philosophy dealing with that, starting with Hobbes and Rousseau. Whether the theories are satisfying or not, they do exist."

Likewise, the following sentence that used to say

Arguments usually concentrate on certain outcomes that may seem desirable, but it is never explained why coercing yourself there is okay.

has been changed to

Coercion does not solve problems—it just steamrolls over one side of the argument.

I just realized that Weinstein published his paper on April 1st, so... I hope it's not just an April Fools' joke 😂

The simplification of "to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals" to the clearer "to think about consciousness in all animals" reminds me of the following passage from Richard Feynman's Surely You're Joking, Mr. Feynman!:

There was this sociologist who had written a paper for us all to read ahead of time. I started to read the damn thing, and my eyes were coming out: I couldn’t make head nor tail of it! I figured it was because I hadn’t read any of the books on the list. I had this uneasy feeling of “I’m not adequate,” until finally I said to myself “I’m gonna stop, and read one sentence slowly so I can figure out what the hell it means.”

So I stopped—at random—and read the next sentence very carefully. I can’t remember it precisely, but it was very close to this: “The individual member of the social community often receives his information via visual, symbolic channels.” I went back and forth over it, and translated. You know what it means? “People read.”

Then I went over the next sentence, and realized that I could translate that one also. Then it became a kind of empty business: “Sometimes people read; sometimes people listen to the radio,” and so on, but written in such a fancy way that I couldn’t understand it at first, and when I finally deciphered it, there was nothing to it.

Same for the Cambridge Declaration above. There is nothing to it. Maybe a good term for this phenomenon is "academic obscurantism."

How is it evil to ask a person to contribute 50 million back to society if they have billions?

First of all, it's not about asking them. That implies that they'd get a chance to respond with "no, I'm not going to do that." In reality, if wealth tax is instated, they won't get that chance—it's an initiation of force.

Second, "contribute [...] back to society" (emphasis mine) implies that they haven't contributed anything yet, when in reality they've contributed lots in wages and taxes. They didn't take anything to get rich—society is not a zero-sum game.

Third, regarding "if they have billions": it doesn't matter how much money they have. Extracting money from someone against their will is theft, be they a poor person or a billionaire. And theft, as all aggressive coercion, is evil.

One only becomes a billionaire by “stealing” in the first place.

Is that really what you think? Can you think of other ways people make lots of money?

#62 · on an earlier version (v1) of post ‘Wealth Tax Is Evil’

Following a suggestion, I want to point out that some situations do not require explicit consent. Instead, consent can sometimes be implied. For example, enthusiastic participation in an activity such as sex can reasonably be understood as consent. In other words, explicit asking and granting of consent is not always necessary for something to be consent.

What is necessary is:

  1. An ability to say "no" beforehand
  2. An ability to change one's mind and say "no" as it's happening

One has neither option when it comes to taxation. And my point that a lack of resistance to force does not imply consent stands.

Isn't it a bit ironic that the Mises Institute has a copyright notice on page 3?

Copyright © 2008 Ludwig von Mises Institute

Looking at this again, I noticed that I mix having and skipping parentheses in Ruby for method invocations. It would probably be better to settle on one approach (maybe parentheses because those are never ambiguous) and then use it consistently.
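To illustrate the kind of ambiguity I mean (the `double` method here is made up for the example), omitting parentheses can leave it unclear where an argument list ends:

```ruby
# Hypothetical method for illustration.
def double(x)
  x * 2
end

# Without parentheses, `double 3 + 1` parses as double(3 + 1) -- the
# whole expression becomes the argument, which may surprise a reader.
# Parentheses make the intent explicit either way:
double(3 + 1) # => 8
double(3) + 1 # => 7
```

That's why consistently using parentheses seems like the safer default.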