Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Dennis Hackethal’s Comments
Germany, my home country, has started specifically targeting unvaccinated people: https://www.independent.co.uk/news/world/europe/germany-covid-vaccinations-christmas-transport-b1960065.html
You won't be able to go in to work. They're taking away people's ability to make a living.
As if that weren't disgusting enough, ironically, their conditions imply that those who self-isolated successfully all this time and didn't catch the virus are treated worse than those who have had it. The former group are being punished for following the government's instructions. So why would anyone follow the government's instructions in the future?
Haven't we heard this before?
And this?
Wow it must be serious!
As I quoted at the beginning of this comment, currently you can still go to work without being vaccinated as long as you get a negative test – which is a hassle and discriminatory in and of itself – but it sounds like Wieler wants to take that away, too. Which means if you don't wish to get the vaccine, you either won't be able to go to work or you will have to actively try to catch the virus so that you can recover and then go back to work.
So here's one of the country's leading health 'experts' punishing 'wrongthink' by forcing people to catch the virus.
Fuck you, Wieler.
Depending on the details of the situation, that may well be the case, but it's the inverse that matters: it's that the absence of such reasons would cause me to dismiss the claim that there's a tiger in my room. Whether I then consider the presence of such 'reasons' a justification, or whether they satisfy me, is just psychological.
For example, if there really is a tiger in my room, then if I listen closely, I should hear growling or some other noises. At least eventually – maybe the tiger is currently sleeping. If I knocked on the door or agitated the tiger somehow I should be able to hear it.
Now, if I do hear growling, that does not mean there really is a tiger in the room. It could be a recording, for example. Maybe it's a prank. It could be any number of things. As Deutsch likes to say, there's no limit to the size of error we can make.
Call failed refutations a reason in favor of a theory if you like – I think what's important is that we have a critical attitude toward our theories.
That doesn't sound like Popper. It sounds like instrumentalism. But if you have a quote, I may change my mind. (Note the analogy to your tiger example here: I'm not asking for a reason your claim is true – it's that, if your claim is true, then it should be possible to provide such a quote, whereas if it is false, it should be impossible to provide such a quote.)
Yes. But I don't think Popper would have given that answer A because he knew that past performance is no indication of future performance. He instead would have addressed criticism X directly, presumably.
I've written a little bit about self-referential stuff in my book. I think discussing this bit further would take us down a mostly unrelated tangent but I do recommend reading the book in general.
Re the beads, I think your variation of my example needlessly breaks with consciousness being private, but yes it does contain the same problem. Do you see it?
Herd immunity is a collectivist's wet dream.
Kathy Hochul is the governor of New York.
New York's lawmakers should be less eager to pass new laws. From Ayn Rand's The Virtue of Selfishness, chapter 'Man's Rights':
This is the kind of legislation the country can do without:
It seems that the best time to judge whether a regulation is necessary is before it's implemented, i.e. before an unknowable number of dependencies exist.
In a previous comment, I wrote:
I've since found two instances of flexible behavior, one in a cat, the other in a machine, and I suspect most would consider the former evidence of consciousness, but not the latter.
First, here's a cat with flexible behavior. Source
There are several times where the cat pauses and reconsiders which way to go. That results in flexible behavior, but is no evidence of consciousness, because it may just as well be preprogrammed. Changing behavior as new information comes in is entirely preprogrammable. For example, building a video-game character which follows another character around, even as the second one changes paths, is trivially easy to do nowadays. I've done so, and you can read a tutorial on how to do that here. Note that I was able to write the tutorial without knowing how consciousness works.
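The claim that following behavior is trivially preprogrammable can be illustrated with a minimal sketch (this is not the tutorial's actual code; the function and variable names are mine):

```javascript
// Each frame, move the follower a fixed step toward the target.
// If the target changes paths between frames, the follower adjusts
// course automatically – flexible behavior, fully preprogrammed.
function followStep(follower, target, speed) {
  const dx = target.x - follower.x;
  const dy = target.y - follower.y;
  const dist = Math.hypot(dx, dy);
  if (dist <= speed) return { x: target.x, y: target.y }; // arrived
  // Normalize the direction vector and advance by `speed`.
  return {
    x: follower.x + (dx / dist) * speed,
    y: follower.y + (dy / dist) * speed,
  };
}

let follower = { x: 0, y: 0 };
let target = { x: 10, y: 0 };
follower = followStep(follower, target, 1); // moves right
target = { x: follower.x, y: 10 };          // target changes paths
follower = followStep(follower, target, 1); // follower now moves up
```

No representation of consciousness appears anywhere in this logic; it's a few lines of vector arithmetic run in a loop.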
Next, consider this machine, whose behavior ~nobody will consider evidence of consciousness, even though I think it does something very similar. Source
Let's put aside the fact that this machine has very different hardware from the cat's, and that it's built to do something else – namely to balance balls. The key similarity despite these differences is that the machine also displays flexible behavior. It has to recalibrate constantly while balancing the ball.
So flexible behavior can't be sufficient for an entity to be conscious. And I don't see why a person who's had all limbs removed – i.e., can't move – and is deaf, blind, and mute, couldn't be conscious. In which case flexible behavior – or any behavior, for that matter – can't be necessary for being conscious, either, since that person wouldn't display any behavior whatsoever while still being conscious.
Update 2021-11-12: Improved a couple of sentences shortly after publication.
I wrote in my previous comment:
That's false. From Wikipedia:
Here are some key quotes from Austrian biologist Hans Hass' book The Human Animal, as quoted on Elliot Temple's blog. Brackets are mine, not Temple's.
Hass then makes the mistake of attributing intelligence ("learning") where nothing but a simple path-finding-and-storing algorithm may just as well be at work:
I think people mistake flexible behavior – such as path finding, which can vary depending on the path – for intelligent/conscious behavior. Scientist Walter Veit made a similar (maybe the same, IIRC) mistake in a recent discussion with me and others.
There's also this buggy food-storing behavior in squirrels:
In other words, this algorithm is inborn and squirrels will execute it mindlessly and uncritically, bugs and all, when certain conditions are met.
Toads' mating behavior is buggy, too:
Embracing anything that moves is reminiscent of imprinting, which was discovered by the luminary Austrian animal researcher Konrad Lorenz. He found that goslings will follow around ('identify as their mother') the first moving object they see after hatching. If that's their mother, they will follow her, but they will also follow a person. A primitive movement-detection algorithm suffices here.
Lastly, turkeys' brood-tending behavior is buggy to destructive levels:
Parasitic birds can abuse such uncritical attitudes by laying eggs in other birds' nests so as to avoid the burden of child rearing. Some may interpret this as 'cunning' on the part of the parasite, but it should come as no surprise that genes coding for slightly more parasitic behavior managed to spread through the gene pool. And that behavior can, again, be executed mindlessly, in robot fashion.
Writing this comment I'm getting the feeling that 'mindlessly' may be the same as 'uncritically'...
Here's another. A cat holding and kicking something that isn't there.
Saying one's animal is 'broken' is a meme. People use robot-adjacent vocabulary without realizing their pets really are robots.
Video source
EDIT: Maybe the cat is holding something that's too small to be kicked (or seen).
There's this video of a buggy cat.
This video is both evidence that the cat is consciously trying to jump without realizing it's too small ('keewwwt') and that its jumping algorithm and/or height-estimation algorithm is buggy, depending on how you look at it.
Video credit
In The Beginning of Infinity ch. 7, David Deutsch writes about how people over-attribute intelligence to animals that can recognize themselves in the mirror:
(I personally wouldn't call that awareness, but his argument stands.) I have written software that allows MacBook Pros and iPhones to recognize themselves in the mirror. You can try it out. Your MacBook Pro/iPhone does not suddenly become conscious upon visiting that website.
Deutsch then applies this argument to other areas which, in my terminology, are evidence of smarts but not intelligence:
Yes, because we take seriously what Deutsch wrote in ch. 10 of BoI:
(Upon further reflection, I don't like the last part-sentence, as there's a bit of intimidation going on there.)
Back to your comment:
It is, for the reason I have mentioned: he thinks that what isn’t justified isn’t rational. The view that beliefs should be justified isn't justified. So it doesn't pass the 'mirror test', as Logan Chipkin calls it.
What's interesting is that I've never run into a situation where I wished I had corroboration to help me break symmetry. I also don't seem to run into situations much where multiple viable yet conflicting theories are left over. The only situation I can remember off the top of my head is thinking that non-creative processes might give rise to consciousness, so I couldn't yet break symmetry in favor of the notion that only creative processes do, but then I found a refutation to that claim (I consider it a refutation – others might not).
But yea either way maybe what Elliot's written fills the gap. I have yet to read it.
Regarding Salmon's remark about being critical of critical methods:
To be clear, you think Salmon's saying that Popper presupposes the thing we wish to be critical of, namely being critical?
Why not? (I haven't studied yes/no philosophy.)
If I had a nickel for every time I've heard this...
Consider this variation. Say everyone owns an urn with colored beads in it, and say you can look only at the beads in your own urn (since consciousness is private, as you called it), and it is common knowledge that everyone owns urns:
‘Beads are present both in my urn and others’ urns.
My urn contains red beads in particular.
Therefore, red beads have a fair chance of being present in other people's urns.’
See the problem?
Maybe later, as this discussion is already branching out too much, which makes it harder to address criticisms and make progress. I suggest focusing only on the bead example for now. We can always get to why people believe that animals are conscious later.
This video is interesting. When I first saw it I struggled to explain it for a few seconds:
As many commenters think, the dog's behavior is a sign that it's extremely smart, even cunning. Some other commenters realized it was a trick. I pointed this out, too:
I scanned dozens of comments, including foreign-language ones, and even those who realized it was a trick didn't state how the trick worked.
Then I was told by many that what I was saying was obvious, even accused of ruining the fun for others. But judging by some of the comments on the video above, I don't think it's obvious. As I wrote in the previous comment, people are eager to over-attribute intelligence to animals. And of course, several people think the video is oh-so adorable.
People are too eager to attribute intelligence to animals. For example:
There are a couple of big mistakes in Carlos' tweet. First, even if you think animals are conscious, no animal is smarter than people. Second, when you watch the video, the magpie is just gathering sticks, presumably to build a nest. It may well have no idea that there's a fire or that fires can be put out, let alone how to do that.
The video title is erroneous, too: "A magpie takes out a fire". No, it doesn't. It just gathers sticks. You can see a pile of sticks it has gathered on the right-hand side, and at 0:06 you can see it adding a stick to that pile. And at the end of the video you can still see a fair amount of smoke so I think the fire is still burning, meaning the video doesn't provide evidence that the magpie actually extinguishes the fire as the title claims.
Let's suppose, for the sake of argument, that the bird really is trying to put out the fire. Why couldn't that be preprogrammed genetically and then executed mindlessly by the bird? If you can't say, then you don't know that it's intelligent behavior.
In addition to over-attributing intelligence to animals, people don't take them seriously. When animals display overt bugs, people just shrug it off as 'cute'.
I noticed that you've discussed with Elliot underneath his 'Rationally Resolving Conflicts of Ideas' article quite a bit, so maybe you're already familiar with some of the linked essays.
We need not worry about people's sensibilities when deciding whether to continue reading their papers. Imagine someone publishes a book called 'How to Do Basic Arithmetic' and then claims somewhere on the first few pages that 2 + 2 = 5. You'd put the book down.
That said, I now think I was mistaken, and I did read Salmon's text through page 122, as you suggested (a bit further actually):
Yes.
Yes.
OK, this is basically Hume's statement of the problem of induction. But Salmon is wrong to conclude that we have "no basis for rational prediction". If he's looking for justification, he's simply mistaken that that's needed (or possible). If he's claiming that prediction is not possible at this stage, he's mistaken about how theories work.

One needs a theory ("generalisation") before one can perform any tests. If the theory didn't make any prediction before testing, how would you know what to compare your test results against? A theory alone suffices to make predictions. If you roughly know, from theory, how the earth moves, you can and will predict that the sun will rise tomorrow, even if you have never observed a sunrise before.

Lastly, if, on the other hand, he's claiming that one cannot know whether the theory will continue to make true (or false) predictions in the future – meaning one cannot make reliable predictions about the theory's predictions – then he's correct to claim that, but wrong to assert that there's a problem with that. This is only a problem for someone who's looking for reliable knowledge, which cannot exist.
Evidence of him being a justificationist.
This convinced me that by "generalisation" he means 'conjecture' or 'theory'.
Even more evidence of him being a justificationist. Immediately afterwards, he says:
He's saying, in effect, that what isn't justified isn't rational. This is a bad mistake (and an age-old one at that).
At this point, he's basically stuck. He's trying to force Popperian epistemology into a justificationist/inductivist straitjacket and then wonders why that can't work. He also comes dangerously close to relativism.
We do have reasons for – or rather, means of – preferring some methods over others, namely by elimination through criticism. For example, you wouldn't flip a coin (his example) to decide on a theory, because by the same method a conflicting theory could be 'shown' to be true as well. And the same theory could be 'shown' to be false shortly after, and then flip back and forth. So that can't work, because we know – also from theory – that reality doesn't flip like that. And you can't choose the method of sorting theories alphabetically (also his example) because then their truthiness would depend on spelling, and reality doesn't care about how we spell things. Importantly, justificationism can't work because it leads to an infinite regress, and we know – again from theory – to reject infinite regresses.
If you keep eliminating methods this way, pretty soon you are left with very few, maybe only one, way of choosing whether to tentatively consider a theory true and whether to act on it. I think that's why Popper put such emphasis on criticism: it's not just theories we can criticize, but also our methods of evaluating theories (which are themselves theories), our preferences for doing so (ditto), etc.
Related to that, Deutsch writes in ch. 13 of The Beginning of Infinity:
I think you could reformulate this quote as follows so it applies to the issue at hand: 'During the course of a creative process, one is not struggling to distinguish between countless different methods of nearly equal merit for judging conflicting theories; typically, one is struggling to create even one good method, and, having succeeded, one is glad to be rid of the rest.'
In light of that, after I read on a bit, I found that Salmon quotes Popper on p. 123 as saying:
This is precisely the conclusion at which I have arrived independently above.
I will say: Salmon is right to point out that there are problems with Popper's concept of corroboration. Others have written about that. But I think you can retain much of Popper's epistemology just fine without accepting that concept. It's not that important.
An article that may interest you is this one by Elliot, which collects several different articles on the topic of how to resolve conflicts between ideas rationally. (I have not read the linked articles yet apart from the one I mention below.) Note that this is slightly different from Salmon's problem of rational prediction in particular – and I think he's mistaken in his focus on prediction over explanation – but it seems to me that once you have rationally chosen an idea, you can rationally make predictions using that idea.
There's also this article by Elliot, which you may wish to read first, in which he writes:
Which sounds right up your alley since it's about the problem of practical decision-making as referenced by Salmon.
I plan to read these articles myself, and if you like, it could be fun and fruitful to compare notes and maybe discuss further afterwards.
Regarding the calculator stuff, I think it's notable that you commented on your experiences involving calculators quite a bit (the word 'experience' and variants thereof appear five times in your most recent comment). In particular, you wrote:
But that's not what I said. I made no claims about your experiences (claims about something subjective/psychological), only about how calculators work (claims about something objective/epistemological).
In addition to calculators, there's also the issue with Lamarckism I mentioned, which is an important factor in breaking symmetry in favor of the idea that execution-only information processing, to which animals seem to be constrained, cannot create new knowledge.
Then you wrote:
If somebody pointed out that this isn't logically valid reasoning, would you consider that a candidate refutation of your background knowledge (as you suggested as a way forward)?
Well, you can hardly criticize a theory for not doing something it's not meant to do!
Here's a video by Instagram user iamkylo_ of a cat 'drinking' from a faucet.
Not only does the cat have no idea what it's doing or that that's not working, it doesn't correct the error either. Nobody's home and the lights aren't even on.
Toward the end, as one commenter points out, it even swallows the non-existent water in its mouth. That makes me think swallowing in cats just happens at certain intervals while in a state of 'drinking', not based on how much water is in the mouth. (But the commenter just describes this behavior as "[a]dorable ❤️❤️", as expected. As of 2021-10-27, none of the commenters interpret this video as evidence that the cat isn't conscious.)
For those who have Instagram, here are two other videos of the same cat exhibiting the same bug:
https://www.instagram.com/p/CQl4XW4AzdP/ and https://www.instagram.com/p/CRlOBe2jWj3/
These are interesting because they start with the cat doing it right, then getting into the erroneous state (again without correction).
I may do that if I am wrong about Salmon misrepresenting Popper's account of scientific knowledge. If I'm not wrong about that, Salmon's misrepresentation seems grave enough that it's reasonable to expect not much of value to be gathered from his text. So – am I wrong?
Although this can sometimes, in effect, be what one ends up doing, I think the approach is a critical one, with the goal of eliminating one of the conflicting theories, not elevating the other in some way by providing support for it.
I believe you're correct.
This is a variation on Deutsch's criterion of reality. From The Beginning of Infinity, chapter 1:
We need some way to determine, tentatively, whether calculators are conscious. Going off of whether our best explanations tell us they are is a good way, I think. And no matter which way we choose, we can always say 'but they still might be conscious' – but then we never break the symmetry. In other words: yes, it's always possible to be mistaken about how to break the symmetry, but one has to try one way or another. I think the fact that our best explanations of calculators – which are fantastic since we have invented them and know how to build and control them – don't mention consciousness is an almost irrevocably fatal blow to the idea that calculators are conscious, only to be reconsidered if our explanations of calculators change accordingly.
Additionally, there are no big unknowns in our understanding of how calculators work, neither their hardware nor software. With the brain that's different – when it comes to the brain's hardware (well, wetware), in addition to being a universal computer, it seems to have all kinds of special-purpose information-processing hardware built in and connected to it (like eyes), some of which we don't understand well yet. But those are not important for consciousness, and we do understand universal computers well, be they made of wetware or hardware.
Then you wrote:
Well, the parenthetical "(except as something to be explained)" makes all the difference here. Our explanations of calculators don't have that gaping hole. (Though technically that gaping hole lies not in our explanations of brain hardware but brain software. So, to be clear, and for the comparison to work, when I speak of explanations of calculators, I really mean explanations of their software. For calculators we have great explanations for both their hardware and their software. For the human brain as a universal computer we have great explanations, while for some of its software, especially creativity and consciousness, we do not.)
All that said, I believe your condition of providing "some fact about the world that the claim ‘creativity is required for consciousness’ explains so well that it would be implausible to think otherwise" is still met.
Luke,
Yeah, that's the kind of unnecessarily complicated academic lingo people will use in their theses.
How does the part "what this also means" follow?
No, it's because we both run software in our brains that makes us conscious. Physiology cannot matter due to computation being substrate independent. See also this entry in my FAQ on animal sentience.
I don't believe I said that, but you seem to be implying that I did. Do you have a quote? (You follow this up by referring to errors in animal behavior, which I did write about, but I don't believe I claimed that the presence of errors indicates a lack of complexity.)
I don't think so. Following David Deutsch, I believe intelligence is something you either have or don't have – it's a binary thing, not a matter of degree. And, also following Deutsch, nothing can be more intelligent than people – what people refer to as 'superintelligence' can't exist because what we might call the 'intelligence repertoire' of people is already universal. Lastly, if consciousness really does result from intelligence, then any entity that's intelligent would also be conscious – it couldn't be intelligent without also being conscious.
If, on the other hand, you're referring to the smarts of an entity – which can exist in degrees – then yes, we can imagine an entity that's smarter than humans without being conscious. But I don't think this presents a conflict for me. It's just that smarts and intelligence are orthogonal, as I have written.
While I agree that behavior can't definitely tell us either way – no evidence can – I think humans are so far off the mark in their creativity compared to animals, as Elliot Temple once pointed out to me, that I'm not worried that humans are potentially not really creative or conscious. Humans are markedly different from animals in what they have achieved. Many people like to dehumanize the human race by claiming humans are not conscious or creative or don't have free will – I'm not one of those people.
Don't hold your breath, as neurobiology is pretty much useless in this regard.
In case "Fear and Intimidation" sounds like an unnecessary repetition: fear keeps people caring for animals, intimidation recruits people.
Not to people who study that stuff, as you said yourself. But again, he doesn't address those people.
Works of entertainment can be true and clear. There's no conflict there.
Him knowing that is a prerequisite for his capitalizing on it to impress his fans.
Speaking of what's "pretty clear if you’ve ever heard Weinstein interviewed" – about other topics, too – he uses obscurantist language in verbal discussions as well. For example, when asked about Bitcoin's "most interesting property", he responded:
What the fuck is he talking about?
His obscurantism isn't hard to find. This is the first interview I picked off of YouTube at random, and this quote is from the first time he talks. People will have no idea what he just said, but they'll think they heard something profound they're just not smart enough to understand. One commenter wrote:
While somebody else wrote:
So impressing people this way actually works. In another interview, the second one I picked at random, the interviewer praises him in front of the live audience:
LOL.
LMF,
As I wrote in a previous comment:
You wrote:
First, I made a mistake: I originally wrote Weinstein is interested in "impressing his peers" (emphasis added). He's not – he's interested in impressing his fans, most of whom aren't his peers, which is exactly the problem. I'm claiming that he's using math lingo his target audience doesn't understand to impress them. Target audiences for "work[s] of entertainment" consist mostly of laymen, whom he specifically addresses, and of only a few mathematicians or theoretical physicists.
However, I didn't claim that mathematicians don't talk like that, or that what he says is vague. (Academic obscurantism is often vague/hard to pin down, but I didn't claim that in this case.) If he were addressing his actual peers (not wannabe peers like his fans), I wouldn't have a problem with it.
You'd still be able to tell that what Weinstein wrote is obscurantist if you weren't a theoretical physicist. In fact, you might be better suited to tell if you weren't because it would be more jarring to you.
You misquoted Weinstein btw.
2021-09-27: I did some light editing of this post shortly after publishing it.
I think consciousness may not feature explicitly in code in the sense that you could 'read it off', but code that is conscious when run would be novel in an unexpected way. I don't expect it to look just like any old program people have written before.
When I wrote
I meant to point out that that code looks just like any other: it's the same old mindless execution of pre-existing knowledge.
Regarding the linking theory to see which parts of the code 'light up' with consciousness – I really like that phrasing by the way – I expect once we understand consciousness we will know what to look for in code to tell, without running it, whether it (or parts of it) will be conscious when run. In other words, I'd guess a theory of consciousness would come with such a linking theory.
To be clear, I don't argue that
because this is based on the misconception that evolution is always adaptive and constitutes progress/fulfills some function. There are plenty of examples in the biosphere of 'adaptations' not fulfilling any apparent purpose or even being plain disadvantageous.
Having said that, in the particular case of creativity, I think consciousness arises as an emergent byproduct of creativity, and creativity is hugely advantageous. For one thing, any genetic mutation that reduces a gene's ability to do its 'job' (meaning most mutations) can be 'fixed' at runtime by creativity.
But I think you know that I'm not arguing that animals are conscious, so if by 'you' you mean a hypothetical 'someone': well, they'd be wrong to argue that, for the reasons I just said.
I'm not sure we know that either way. As a conjecture, I have an inkling that it is false.
Again, I view the chain of causation the other way round. Do you have a refutation of that view?
See above.
To be clear to others who read this: I'm not arguing from necessity. The quote continues:
Why/how does that follow?
Adam, you provided a blockquote without a source. Where's that quote from? Or was it instead meant to be emphasized text which you wrote?
To address what you wrote:
I think it's the other way round: creativity is needed for consciousness. The latter arises only from the former.
It seems to have to do with automation, among other things. Once you've automated riding your bike you're not conscious of all the minute movements you make, just the overall experience. Whereas when you first learn to ride a bike you're aware of the smallest movements because you need to correct lots of errors in them.
We also seem to be aware only of ideas which have spread sufficiently through our minds.
I speculate more about why we are conscious of some things and not others in the referenced post 'The Neo-Darwinian Theory of the Mind'.
Another thing you may find interesting is fleeting properties of computer programs, which I have been thinking about lately. What's promising about them is that they don't exist before runtime, meaning they need to be (and can be!) created first. They don't already exist just by virtue of the program existing, which seems to be common for properties of present-day programs. You can read more about them in my article 'What Makes Creative Computer Programs Different from Non-creative Ones?'.
@Shane: Reminds me of https://www.youtube.com/watch?v=9SGV3ctLlu4
To think that Lennon could have had virtually any pussy he wanted. Yet he chose that one. Makes you think, doesn't it?
Another attempt at intimidation in the context of bullfighting: https://twitter.com/clairelouwhoo/status/1433533542307295262
By the way, and to be clear, thanks to the use of setTimeout it doesn't matter if your 'recursive' call is a tail call. It could be called anywhere in the function and it shouldn't make a difference. Just know that the rest of your function runs first.

hasen, I wrote that humans have the same bug in other situations and that we do realize it.
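Back to the setTimeout point – a minimal sketch of what happens (the countdown example and names are mine):

```javascript
// A 'recursive' call via setTimeout. Note it is deliberately NOT in
// tail position: code after it still runs before any deferred call.
const order = [];

function countdown(n) {
  if (n === 0) return;
  setTimeout(() => countdown(n - 1), 0); // schedules the next call
  order.push(n); // the rest of the function runs first
}

countdown(3);
// Synchronously, only the first frame has run: order is [3].
// The deferred calls run later, one per event-loop turn, so the
// call stack never grows – no tail-call optimization required.
```

Each setTimeout callback starts on a fresh stack, which is why the position of the 'recursive' call within the function makes no difference to stack depth.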
Exactly. It should have an incentive to stop – it's not touching water! It should understand that incentive and adjust its behavior accordingly (without going through dozens of iterations of reinforcement 'learning', i.e. what you call "housetraining"). If a person tried to swim above water you'd say: Dude, what the fuck are you doing. But if an animal does it people rush to defend it. I don't get it.
Yes. See the example of me thinking my face is wet. But as I said, humans deal with that very differently than dogs, and somewhere in that difference, I think, lies the difference between sentient and non-sentient.
I was informed by a cat owner that cats have the same 'swimming' bug dogs have.
Code here: https://gist.github.com/dchacke/0d17dc266475cda7090e4c69164428f8
550...
Aaron Stupple suggests adding Bayesianism to the list, which arguably falls under inductivism, but deserves an honorable mention due to its recent popularity.
Thatchaphol Saranurak has pointed out to me that computer programs, despite their deterministic nature, are not always entirely predictable. There's the halting problem, of course.
As a result, I think my focus on predictability was wrong. That's not where creativity and computer programs clash. I have crossed out the corresponding parts.
Notwithstanding, I think my conclusion that there is a problem to be solved still holds, except that it now focuses solely on determinism, not predictability. In short, the problem is: computer programs are predetermined; creativity is not predetermined; yet creativity is also a computer program. And the "emergent approach" is still the only way I see out of that conundrum.
Another way to think of the connection between the prevailing conception of physics and programming is this:
Computations are physical processes (Deutsch) and as such they must be deterministic, like all other physical processes.
So it's not just that the two prevailing conceptions are analogous as I wrote in the original post above—it's that computations are just one specific kind of physical process, and that is why computations are always deterministic.
Others have mentioned that there is quantum computation. To be clear, that's not what I have in mind when I speak of different modes of computation. I'm looking for something that doesn't adhere to the prevailing conception in programming. I'd guess that quantum computation adheres to it, too, but I'm no expert on quantum computation. Also, IIRC, the set of computable algorithms (i.e. runnable programs), which includes creativity, is exactly the same for both classical and quantum computers.
The in-text uptalk... ugh. This is a sentence, but she ends it in two question marks.
I see that David uses the phrase "taking ideas seriously" in his book The Fabric of Reality, and I read that years ago, before I read any of Rand's books—maybe that's what inspired me to use the phrase originally!
I don't think Weinstein is addressing physicists. In a footnote on the first page, he writes:
He calls his paper a "work of entertainment". Hence it is aimed at a general audience, specifically the audience of his podcast, most of whom he knows won't understand him. I think "obscurantism" captures it aptly.
Following a suggestion, I have changed the following passage
to say "non-refuted moral explanation" instead. As suggested, "there has been plenty of moral philosophy dealing with that, starting with Hobbes and Rousseau. Whether the theories are satisfying or not, they do exist."
Likewise, the following sentence that used to say
has been changed to
I just realized that Weinstein published his paper on April 1st, so... I hope it's not just an April Fools' joke 😂
The simplification of "to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals" to the clearer "to think about consciousness in all animals" reminds me of the following passage from Richard Feynman's Surely You're Joking, Mr. Feynman!:
Same for the Cambridge Declaration above. There is nothing to it. Maybe a good term for this phenomenon is "academic obscurantism."
First of all, it's not about asking them. That implies that they'd get a chance to respond with "no, I'm not going to do that." In reality, if a wealth tax is instituted, they won't get that chance—it's an initiation of force.
Second, "contribute [...] back to society" (emphasis mine) implies that they haven't contributed anything yet, when in reality they've contributed lots in wages and taxes. They didn't take anything to get rich—society is not a zero-sum game.
Third, regarding "if they have billions": it doesn't matter how much money they have. Extracting money from someone against their will is theft, be they a poor person or a billionaire. And theft, like all aggressive coercion, is evil.
Is that really what you think? Can you think of other ways people make lots of money?
Following a suggestion, I want to point out that some situations do not require explicit consent. Instead, consent can sometimes be implied. For example, enthusiastic participation in an activity such as sex can reasonably be understood as consent. In other words, explicit asking and granting of consent is not always necessary for something to be consent.
What is necessary is:
One has neither option when it comes to taxation. And my point that a lack of resistance to force does not imply consent stands.
Isn't it a bit ironic that the Mises Institute has a copyright notice on page 3?
Looking at this again, I noticed that I mix using and omitting parentheses for method invocations in Ruby. It would probably be better to settle on one approach (perhaps parentheses, since those are never ambiguous) and then use it consistently.
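To illustrate the style point (hypothetical method names, not the code from the gist): mixing the two styles forces the reader to switch modes, whereas parenthesizing every call is uniform and never ambiguous about where arguments end.

```ruby
# Hypothetical helper, just to demonstrate invocation style.
def transform(str, with:)
  str.public_send(with)
end

greeting = "hello"

# Mixed style (what I want to avoid):
#   puts greeting.upcase
#   result = transform greeting, with: :reverse

# Consistent style, parentheses everywhere:
puts(greeting.upcase)
result = transform(greeting, with: :reverse)
# result is "olleh"
```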