Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Dennis Hackethal’s Comments
Thanks.
I think you mean creativity (or, more precisely, the genes coding for it) wouldn’t be favored and so on.
It’s true that most genes (in humans at least) don’t code for learnable knowledge, but creativity can make up for some physical shortcomings, too: if your genes give you a faulty leg, say, you can use your creativity to make a cane.
Having said that, you make a fair point. I’ve revised the article to be more specific about which mutations creativity can make up for. (I think one could control the metabolic processes of one’s liver by creating and taking the right medicine, but early humans obviously didn’t have that knowledge.)
Skipping some, you write:
Teslo spoke of “30,000-35,000 genes”, not 20,000.
Regarding the rest of your comment, epistemology also predicts that humans have more junk in their genes (in the neo-Darwinian sense) than any other species. And I could see missing or faulty protein synthesis leading to behavioral errors which creativity can then make up for. But I’ve edited the post to reflect the distinction you mention.
This cat has the exact same meowing pattern on three occasions: https://www.instagram.com/reel/C-41ee_JXup/
The video even calls the cat an ‘NPC’ (non-player character, ie dumb video-game AI, which often makes the exact same utterances):
It’s a joke but shouldn’t be.
Here’s Naval pandering to mystics:
Gross. I’ve gotten ‘guru’ vibes (metaphor!) from Naval before. I don’t know why Deutsch associates with someone like that.
Not a physicist but I doubt they cover all of physical reality in the sense that they’re some ‘ultimate’ explanation. Even our best theories are always going to have shortcomings. That includes theories we come up with after the unification you mention. There’s never a guarantee that tomorrow we won’t find some new aspect of physical reality which our best theories do not yet cover.
Consciousness is always the result of a physical process. But that in itself doesn’t explain consciousness. Any viable explanation of consciousness will let us program it on a computer.
AS has since misquoted again: http://www.quote-checker.com/diffs/atlas-society-misquotes-cameron-winklevoss
Diffs for two of the mentioned misquotes can be found here:
Arguably, the link between explaining and controlling reality is also an objectivist insight.
In short, you're suggesting the reason for his incomprehensible style of communication is not obscurantism but social incompetence?
Keep your response short.
Uncompetative,
Re #627. I like when people catch misquotes, but you technically misquoted Witten yourself. You replaced a hyphen with a space. Not sure how that happened – did you not copy/paste the quote?
Re #629. You wrote:
He's going to have to find some solution if he wants to address laymen. He could explain the terms. For example, as a software engineer, when I speak to laymen about programming, I either explain the terms or use analogies they will understand. I can 'dumb things down' just enough without compromising on accuracy. And if I do want to talk about more advanced programming topics, I don't address laymen. Because they're laymen.
Popper's and Feynman's books are great examples of how to speak to laymen on complex issues without compromising on quality.
Not sure how big an obstacle this could present to someone like Weinstein.
Re #630. You wrote:
Friends can be fans. And not all of the hundreds of thousands of views and listens he gets online are from the Portal community. Even if all of the people in the Portal community were smart enough to parse his statements, most others in the general populace aren't.
Public intellectuals shouldn't rely on others to parse their statements for them. They are responsible for making themselves intelligible. Consider this quote by Ayn Rand:
You made that same moral threat when you accused me of being on the intellectual level of a toddler ("I am sure you can find a Cocomelon video which is more your speed."). That kind of threat doesn't impress me, but it does many others, and it's exactly the kind of tactic Weinstein and his fans, including you, evidently rely upon to spread his ideas.
Re #631. You wrote:
Not a lawyer but I'm not sure someone else could copyright text for him. Or maybe I don't know enough about academia. Regardless, the goal you mention is compatible with intentional obscurantism.
I have redacted the remainder of #631 because if you're going to make claims that potentially harm people's reputation you better provide a source for each claim.
Re #632. You wrote:
I've read the added context and I think it does little to aid in understanding him. It also makes a new point, which only adds to the complexity of what he's saying.
It's been a while since I quoted that passage, so I don't remember what was going through my mind at the time, but I'm a conscientious quoter. I don't leave out stuff to misrepresent people.
Her comment that her cat isn't touching her clothes is false, though: she said her cat kneads the air when she picks it up, so her cat is still touching her clothes, or at least her skin, which is also soft (if not with its paws then with other parts of its body).
She thought she disproved my point because the cat kneads the air, ie its paws aren't touching her clothes.
The only point that potentially disproved mine was that the cat 'kneads' hardwood and tiles – assuming it does so when not touching anything soft with any part of its body.
sopheannn wrote:
This isn't bad. Like, it still doesn't make sense for cats to 'knead' things that can't be kneaded, but what she suggests may refute my original claim that soft materials trigger kneading because they're reminiscent of a mother's belly.
Or maybe her cat is particularly buggy. In any case, something like my explanation will be true – there's some automatic trigger of kneading that cats execute uncritically, ie like robots.
The reason the original video reached for a humanizing explanation along the lines of cats feeling “safe” or “content” is that the creators don't consider that cats are robots. Uncritically kneading things that can't be kneaded (air, hardwood, tile) is still robotic behavior.
It just occurred to me that crypto-fallibilists are like 'vegans' who eat meat once in a while but then lie to themselves and still think they're vegans.
It's fine to try being vegan and fail at it. But don't lie to yourself about your failure just so you can keep that unearned title. Maybe try being vegan on Mondays only, and once you're pretty good at that, add Fridays, and so on. But as a matter of simple logic, you're not a vegan unless you don't consume any animal products, on any day of the week.
It's the same with fallibilism, only harder, because changing one's ways of thinking is harder than changing one's diet. You're not a fallibilist if you won't consider that someone else could be right and you wrong even once. It then takes time to earn the title of 'fallibilist' again. You don't lose it forever over a single mistake, or even a dozen mistakes, but you do have to try anew after each mistake.
But it's not about titles, that's surface-level stuff. It's about logic. A vegan is someone who never eats animal products. A fallibilist is someone who is always willing to consider that he could be wrong.
Like veganism, fallibilism, by definition, is indivisible and can't make room for any compromises. This indivisibility leaves no room for lies or evasions, and I guess that that is why some of those cryptos who want the unearned title of 'fallibilist' deride my stance as 'purity testing'.
You figured right; let's conclude the discussion with an impasse due to insufficient interest on both sides (though for different reasons).
Why have you stopped discussing?
I don't think it matters cuz I'm not currently trying to counter-refute my own refutation :)
My original refutation?
Which calculators don't have, right? In which case calculators aren't conscious after all.
#599. It conflicts because my refutation showed, by invoking modus tollens etc., that calculators are not conscious.
I'm not saying he was. James Taggart wasn't obligated to agree to a contract, nor is Deutsch obligated to write a textbook on quantum physics. That's not the point.
PS: Thinking more about this, overall, I can see that your claim that 'calculators are conscious after all; our best explanations of them just didn't need to mention consciousness' conflicts with my refutation, but why should we break symmetry in favor of the former and not the latter? It seems to me that it's not really a counter-refutation unless it explains that, too.
First, please explain when consciousness must have an effect on information processing and when it can't. Otherwise it's too hand-wavy to work as a counter-refutation. (And even then such an explanation is only necessary but maybe not sufficient; I'll have to think more about it.)
Second, you seem to be saying that the only reason consciousness would figure into our best explanations of calculators is if consciousness had an effect on information processing. Why must that be the case? Why couldn't consciousness figure into our best explanations of calculators for other reasons, despite not having any effect?
Third, you say that "[c]onsciousness is created as a result of information processing." All information processing?
Of course, if you divvy them up into those explanations that have something to do with consciousness and those that do not, then only some of them are going to change. But for animals/calculators as a whole, the explanations would change. (Imagine how much our explanations of humans would change if we learned that humans are not conscious! We wouldn't then say 'but explanations of our muscles remained the same'.) In a related context, you wanted me to "consider all of our current best theories, not a subset", ie have a holistic picture – now you want me to consider only subsets of theories about calculators.
The fact that none of our explanations of calculators say anything about consciousness is the very reason you should think they're not conscious. And again, the four concepts from our background knowledge taken together do show that calculators really aren't conscious, and then they do conflict with your theory of consciousness because it predicts that calculators are conscious.
Again, I suggest phrasing things in more absolute terms such as 'must', 'cannot' etc. If calculators may be conscious it's too easy to evade criticism. If you're going to constrain it, specify under which conditions calculators must be conscious and why and under which conditions they cannot be conscious and why not.
But yes, I believe I understand your point here: you're saying our explanations of the operation of calculators, their hardware, and so on would not change. However, I do think that, if we had a working theory of consciousness that implied that even calculators are conscious, people would get busy trying to understand what it is about calculators that makes them conscious and then amend our explanations of calculators as a whole accordingly, at the very least by adding an implicit reference to such a working theory of consciousness.
I think any explanation of calculators as a whole would need to be amended to include how their functionality gives rise to consciousness, at least by implicitly referencing your theory of consciousness. I don't know in detail what our current explanation of calculators looks like – I don't manufacture calculators; I would just refer to higher-level concepts such as basic arithmetic on a programming level – but I do know that that explanation doesn't currently speak of consciousness, or else it would be commonly thought that calculators are conscious. I also don't know in detail what the amendment would look like since I don't know how consciousness works. Of course, tautologically, the sub-explanations that don't have to do with consciousness aren't going to need to change, not even by implicit reference. (Note that this is another reason I don't find neuroscience promising when it comes to the brain; presumably, explanations of the brain's hardware wouldn't change.)
Please provide a counter-refutation to my refutation. Otherwise, I think it's likely that we're going to reach an impasse. If you're not sure yet how to refute it, asking questions about it or steelmanning it could be a good way forward.
PS: Re when you wrote:
To be clear, it's not just that existing explanations do not imply calculator consciousness (although that would be enough of a challenge) – together, the four pieces of background knowledge I've referenced rule out calculator consciousness.
Agreed. That just means there's a conflict; an opportunity to break symmetry.
Correct, I do not. But, per Popper, new theories should explain, at least implicitly, why their predecessors are wrong. Which is what I've suggested as one of the four attack vectors: you could explain why our explanations of calculators are wrong; why they don't imply the absence of consciousness (which you seem to attempt below anyway in your remark about calculator operations). That way, you would break symmetry in favor of the prediction that calculators are (or at least might be) conscious, and then modus tollens doesn't rule out anymore that non-creative algorithms create consciousness.
If you're repeating that our explanations (not just operations) of calculators need not change if calculators are conscious, then I repeat that you then also shouldn't think our explanations of animals should change if they are (or aren't) conscious. But it seems that you do want them to change in the case of animals. And you also want them to change in the case of the human brain (where you don't restrict yourself just to operations but want neuroscience to explain how those operations result in consciousness, and such an explanation would then form part of the explanation of the brain). So, if only for consistency – but also in an attempt to understand reality – you should want them to change when it comes to calculators, too.
It sounds like you have a somewhat instrumentalist view of explanations (when it suits you), which leads you to reduce explanations of calculators to a description of their operations only. But that isn't a valid way around my application of the modus tollens.
Why consider a refuted explanation?
The quote from BoI chapter 1 goes:
Note the singular "explanation". And a refuted explanation can't be good anymore – the following quote (also from chapter 1) is about science in particular but you can easily imagine how it applies to symmetry breaking in general:
With that said, back to your comment:
Our current best theories do not include refuted ones. So I don't think my application of the criterion is incorrect.
A refuted theory does not deserve consideration until the refutation is counter-refuted. Four potential targets to attack my refutation are:
If you refute any one of them, my refutation is invalid, and then your theory regains the status 'non-refuted' and can thus be reconsidered.
With that in mind, when you wrote a bit further up...
...even if non-refuted, that theory conflicts with explanations of calculators, which presumably form part of your background knowledge (unless you successfully refute them as per point 1 in the above list). If they do, you'd first want to break symmetry there.
By the way, a better way to state the criterion, ie in a non-justificationist way, IMO, is to simply say 'something is real if and only if it figures in a non-refuted explanation of something' – that phrasing also happens to leave room for multiple non-refuted explanations.
No. That sounds like a justificationist perversion of Popperian epistemology because it would involve 'weighing' conjectures somehow based on how 'good' they are. DD explains in BoI ch. 13 why that's a bad idea. (Ironically – and, IIRC, Elliot points this out somewhere – that means DD himself is a justificationist since he wants to weigh and choose explanations based on how "good" ("hard to vary") they are, as opposed to choosing based on whether they are refuted vs. non-refuted, which I believe would be Elliot's approach, ie the binary approach, which I am advocating.)
This I agree with, and I see now why you thought my argument was circular. Some other explanation is needed: either from our background knowledge or a new one. Either way it can't be one of the conflicting theories. But I'm not choosing one of those (see below). Note also that Popperian epistemology gives us a process of elimination that leaves us, ideally, with one non-refuted conjecture, which need not be the 'best' (depending on how you weigh).
That isn't my argument; I think there's been a misunderstanding. Here's how I'd change your description of our discussion:
D: ‘Creative algorithms cause consciousness’ is the ~~best~~ only explanation because there are no plausible alternatives.

K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.

D: That would mean that calculators and NPCs are conscious, which we know they are not because ~~we tentatively assume that only creative algorithms can cause consciousness~~ our best explanations of calculators and NPCs do not invoke consciousness, so, per DD's criterion of reality, they really aren't conscious.

I invoke these explanations to show that the claim that "'[n]on-creative algorithms cause consciousness' is a plausible alternative" must be false, by modus tollens, since that claim makes a prediction about calculators that isn't true. Hence the claim is eliminated, I think, while DD's claim – that creativity causes consciousness – is still standing.
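Schematically, that modus tollens step looks like this (P and Q are just my own labels for the claims above):

```latex
% P: 'non-creative algorithms cause consciousness'
% Q: 'calculators (which run non-creative algorithms) are conscious'
\[
  P \rightarrow Q, \qquad \neg Q, \qquad \therefore\ \neg P
\]
```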
As you can see, the circularity you spoke of is not there as I do not reference the theory under question as a symmetry breaker. Instead, I refer to our best explanations of calculators and NPCs as well as the criterion of reality and the modus tollens, all four of which form part of my background knowledge.
Maybe I'm talking out of my ass here, but I don't think you understand it well enough (see the beginning of this comment). You'd be proceeding with what you think is a Popperian 'framework' but is actually justificationism in Popperian clothing.
I’d like to honor your request to avoid the epistemology discussion (though you've already continued it with your comments around how not to break symmetry), but I don't currently see how to avoid it. Perhaps a way forward is a discussion around the criterion of reality in particular and understanding better our apparent disagreement around that criterion? If you have other ideas, I'm open to them, too.
For clarity, I think there are two possibilities: that calculators are conscious either follows from our best explanation of consciousness, or it follows from our best explanation of calculators.
This is the standard Popperian approach: we assume, tentatively, that a conjecture is true until it is refuted, even if that conjecture is currently "under question". I don't see how that leads to circularity. Since all our conjectures are always tentative in this way, they're always "under question"/open to revision anyway.
Sure.
An alternate, if lesser, resolution is that we simply have different epistemologies; that it's going to be difficult for us to come to a resolution on the question of animal consciousness until we resolve the epistemological difference. That's not surprising since the question of animal consciousness is directly influenced by epistemological considerations. But we still got to understand each other's (and our own) viewpoints better, which, as Popper would say, is more than enough.
Presumably for the same reason you want the word 'consciousness' to appear in the explanation for how animals work.
Calculators are math machines, animals are gene-spreading machines (per Dawkins). You ask whether you'd be able to calculate your taxes better if consciousness figured in our best explanations of how calculators work, but you don't ask whether animals would be able to spread their genes better if consciousness figured in our best explanations of how they work. And yet, presumably, your answer to the latter question would be 'yes' whereas your implied answer to the former is 'no'. How does that fit together?
My guess is they'd both change, but at least our best explanations of calculators would have a big unknown ('why are they conscious?'), and that unknown would form at least an implicit part of such explanations. That would be an improvement at least in the sense that there'd be a pointer toward an open problem and more progress.
If it doesn't leave a trace, that means even the brain's hardware remains unchanged. So what good is neuroscience?
Your statement is vague; it leaves room for evasions when you encounter criticism. I think it would be better to phrase it in decisive, more attackable terms, such as 'consciousness can never leave a trace' or at least 'consciousness only leaves a trace when...'.
The statement 'consciousness can never leave a trace', for example, sounds false because if someone experiences pain, say, they usually want to fix that, and then do stuff that helps fix that (eg move out of an uncomfortable position into a more comfortable one). At which point there's a trace even though the experience is totally private.
Otherwise it's like saying, in OOP terms: private methods on a class never cause any side effects. Which isn't true.
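To illustrate (a minimal Clojure sketch, where defn- is the analogue of a private method; all names are made up):

```clojure
;; A private helper – not callable from outside this namespace –
;; that nevertheless causes an observable side effect.
(defn- soothe-pain! []
  (println "shifting into a more comfortable position")) ; visible trace

(defn react-to-discomfort []
  (soothe-pain!) ; the private part still leaves a public trace
  :done)
```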
Now it sounds like you've adopted (and applied) the criterion?!
When something may as well have been done mindlessly, it cannot be evidence of consciousness. So I don't need to substantiate. We need some behavior which must have been the result of consciousness. You disagree that consciousness necessarily has any behavioral impact, but that makes things more difficult for you because then you can't point at any animal behavior and say that must have been the result of consciousness. In which case animals may as well not be conscious. Or anything at all may as well be conscious, including rocks, planets, and so on.
How could you come to know that if not through explanations?
No. Just the introduction of the word 'consciousness', effectual or not, into our explanations of calculators would be a change. In which case the use of DD's criterion of reality would be appropriate after all, thereby negating #589.
Why should sufficient complexity give rise to consciousness?
Again, video-game NPCs do this stuff all the time and our best explanations of them do not invoke consciousness. So they're not conscious. (I know I'm repeating myself but more on the criterion of reality below.)
And presumably of the modeling of the modeling of the modeling...? Sounds like an infinite regress. If it isn't, how many levels are required for consciousness?
Calculators, NPCs, criterion of reality.
It seems to me we have two disagreements, each on a different level. On a basic level, it seems to me we need to break symmetry between the claims 'sufficiently complex modeling of oneself and one's surroundings gives rise to consciousness' and 'creativity gives rise to consciousness'. On a more general level, we have an epistemological disagreement re the criterion of reality and whether its use is appropriate in this context. (I think I have shown at the beginning of this comment that it is.)
Do you think that's an accurate summary of the disagreement? It seems to me that, to break symmetry between the two claims, it would be helpful to find a resolution re the criterion of reality first (cuz if we don't have some criterion for what's real that we are willing to follow without exception we can always ignore criticism as invoking something that isn't real).
How could they not? Discovering that calculators are conscious would be remarkable. The fact that our explanations fail to predict the consciousness of calculators would be a problem we'd want to solve. We'd want to know how it is that calculators are conscious and update our explanations of them accordingly.
Why should that require or give rise to consciousness? Aren't you just describing homeostasis? The simplest of organisms have homeostasis – organisms which you presumably do not think are conscious.
It is, of course, true that certain neural states give rise to consciousness, but the reason neuronal correlates, or any other explanations relying on the brain, are ruled out as fundamental is computational universality: computers not made of neurons can also be conscious if programmed correctly. Therefore, such explanations can at best be parochially true. Neurons do somehow give rise to consciousness, but the fact that they're neurons is incidental. It's the program that matters.
Consider this alternative explanation for why the cursor moves along: because the user pressed the right-arrow key, and the program is configured to move the cursor to the right anytime that happens. While your explanation on the low, CPU level is the kind of explanation that may well be technically correct, I think mine is not just correct but also operates on a more appropriate level of emergence. This becomes important once we entertain other kinds of computers that don't have a von Neumann architecture (which, it seems to me, the brain does not!). We also lose an understanding of causality when we go too low: it's not really the register in the CPU that moves the cursor along, it's the program. Recall DD's analysis in BoI ch. 5 of Hofstadter's program that instructs certain dominos to fall or not to fall.
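As a sketch of that program-level explanation (hypothetical Clojure; the state shape and names are my invention):

```clojure
;; The program is configured to move the cursor to the right
;; whenever the right-arrow key is pressed.
(defn handle-key [state key]
  (if (= key :arrow-right)
    (update state :cursor-pos inc) ; one position to the right
    state))
```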
I'm guessing you have read BoI ch. 5. Do you have refutations of it? Or of the CBC interview with DD I linked to?
Only of certain, special algorithms – and we don't yet know what distinguishes them from conventional ones (presumably the distinguishing factor is creativity and/or ephemeral properties).
For conventional algorithms, I agree.
Once we rule out the destruction of knowledge, yes.
I think it can't be "Consciousness doesn’t figure into our best explanations of creative algorithms" because consciousness has to live in one of 1) creative or 2) non-creative algorithms. For the reasons I've explained, I don't think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness. Unless we're both wrong that consciousness is real and it lives in neither category!
Yes. As I said at the beginning, our best explanations of how calculators work don't refer to consciousness. So whatever information processing they do does not, to our current best understanding, result in consciousness.
This is, again, an application of DD's criterion of reality. You don't have a refutation of it, yet you don't want to apply it, which then leads to situations where you "can't rule either option out".
We could be wrong, of course. And one day we may realize that. But until then, we have to take our best existing explanations seriously. I think the underlying issue is that you don't think something is knowledge unless it is certain.
Having an "internal world", including thoughts and particularly feelings, arguably presupposes consciousness. In which case your argument sounds circular.
Why is it so hard for you to quote me properly? The first sentence makes it sound like I forgot a word at the beginning but I didn't. The proper way to quote me would have been to write '[T]he Popperian view...' and so on.
I wrote in #109 that "Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. [...] I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important."
Wouldn't it be more analogous, from your POV, to say that we don't understand how the software works, and that our starting point is the hardware? Cuz that's what neuroscientists are doing.
In any case, it seems to me that, in popular culture, we understand more about the brain as hardware than about the mind as software. But, contrary to what I think you're suggesting, I came up with the neo-Darwinian theory of the mind I have mentioned previously. And I did so without studying the brain ~at all, simply by making guesses about the mind and criticizing those guesses. Even though this theory is by no means complete, it has not been refuted and has been very fruitful, and it has enabled me to solve other, related problems I did not anticipate (which is a good sign!).
I see – I should have placed emphasis, in my mind, on when you wrote "algorithmic" in what you had written previously; I had missed that.
I think you'd want to rule either one out as false, not as improbable. I rule out that algorithmic processes (with one exception, see below) could lead to consciousness because the mere, mindless execution of pre-existing knowledge (which is represented by those algorithms) precludes consciousness (or else it wouldn't be mindless). The destruction of knowledge can just be done mindlessly, too. So the only option that's left is the creation of knowledge. Which brings us back to creativity.
To be clear, whatever program gives rise to consciousness must itself be executable mindlessly, too (or else it wouldn't give rise to but depend on consciousness). So there is one exception, and to that extent we're in agreement. But there's something different about that program – something our current best explanations of information processing don't take into account yet.
To tackle this problem, the most promising approach to consciousness that I am aware of is the study of ephemeral properties of computer programs. Can you think of any such properties? I have found that to be surprisingly difficult!
I want to clarify for others reading this discussion what I mean by 'algorithmic'. Whatever software gives rise to consciousness is still an 'algorithm' in the sense that a Turing machine could run it. By 'algorithmic' I instead mean something that doesn't require reflection, introspection, knowledge creation, wonder – that kind of thing. Just something that can be done mindlessly. 'Robotic' is another word for it.
That's basically been Deutsch's and my claim all along – where you and I seem to disagree is whether all information processing results in consciousness or just some (and, in the latter case, which kinds). You had previously argued that all kinds might – now you're saying maybe only one does. Which is it?
Surely not. Your consciousness has causal power, does it not? It's at least causing you to write comments on this blog.
You just switched from "modelling of the external world" to the much more general "mental model". Thoughts and feelings aren't part of a model of the world around you. Also, consider whether a human brain in a vat would still be conscious. It couldn't do any modeling of the external world, but I think it would still be conscious. Don't you?
I forget who said this and the exact wording, but at most such correlations could corroborate the view that psychophysical parallelism is indeed very parallel. More generally – and we're getting back to core epistemological disagreements here – the Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status 'true' to the theory remains unchanged.
I think neuroscience is generally a bad approach to the question of how consciousness works because neuroscience operates on the wrong level of emergence. The level is too low. You wouldn't study computer hardware to understand how a word processor works. We need explanations on the appropriate level of emergence. I doubt colorful pictures of the brain can help us here; I'd disregard the brain and focus on the mind. Consciousness is an epistemological subject, not a neuroscientific one. Neuroscience has also led to such nonsense as this and this. It surely has value when it comes to understanding the brain's hardware, including medical use cases, but when it comes to the mind I think it's severely limited.
Translation: something in the brain causes consciousness. Clearly. How does that tell us anything new?
I think the answer to my question is 'no, the explanation of the source code for NPCs and Roombas does not refer to consciousness'. Note also that people have been able to program such NPCs and Roombas without first having to know how consciousness works. It's possible programmers accidentally made them conscious, but that would lead to unintended behavior in the NPCs. Programmers would seek to understand and probably get rid of this behavior as they demand absolute obedience. Also, usually, explanations come before major discoveries.
Doesn't that just amount to saying: 'There's some algorithm in the brain that makes it conscious, and if an NPC runs the same algorithm, it's also conscious'?
I find that easy to agree with, but you haven't explained why that algorithm should involve modeling the external world. In #581, you wrote you "find it plausible that the brains [sic] modelling of the external world could be an important part of of [sic] this." But why?
Better yet, see if you can explain why whatever algorithm produces consciousness must have to do with modeling the external world, ie cannot be anything else. Without using 'induction'. That would be convincing.
Your answer is littered with inductivism and the strength of your beliefs. I wasn't asking how likely your theories are, how strongly you believe in them, or anything else about your psychology. I was asking whether, in objective reality, Roombas and video-game NPCs are conscious. They either are or they aren't.
If you looked at the source code of a video-game NPC, would your explanation of how the code works refer to consciousness?
That's a common claim, let's look into it. Roombas also model the external world, as do many NPCs in video games. Are they conscious?
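To be concrete about what 'modeling the external world' can amount to (a hypothetical Clojure sketch; the map shape and names are made up), it can be mere mechanical bookkeeping:

```clojure
;; A Roomba-style world model: maps grid cells to whether an obstacle
;; was sensed there, updated mindlessly on every sensor reading.
(defn update-world-model [model {:keys [cell obstacle?]}]
  (assoc model cell obstacle?))
```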
I don't hate anybody, and neither should you.
If you're asking how people can learn to enjoy being part of a Socratic dialog, I refer you to what I wrote about slowly exposing oneself to criticism, not seeking to evangelize, and having modest expectations.
Use your real name if you want to discuss further.
In BoI chapter 17, Deutsch writes:
Deutsch's view is that static societies are ultra dogmatic; they suppress critical thinking as much as possible. Therefore, they cannot adapt; that's why they must ultimately fail.
Popper writes here (on p. 8; bold emphasis added):
Deutsch gives no credit to Popper for the discovery that societies which lack adaptability will fail. Arguably, this is the central thesis of chapter 17.
As usual, Popper is more nuanced than Deutsch when Popper writes "almost of necessity" as opposed to Deutsch's "must eventually".
h/t to Martin Thaulow for providing the Popper quote.
Maybe I'm missing something, but I think it's merely a repetition. In other words, if I propose a claim a, and you propose a conflicting claim b, and I then say 'no, I still think a', that isn't circular. Granted, it may be repetitive, but I think it would only be circular if I said, directly or indirectly, 'a because a'.
In any case, I would use a different refutation. The claim that "the execution of certain inborn algorithms by certain means (e.g. by an animal brain) gives rise to conscious experience" seems to imply that there is something special about wetware such as animal brains. As DD and others have pointed out before me, that cannot be true since it's in violation of computational universality: there's nothing a computer made of metal and silicon couldn't do that one made of wetware could (and vice versa). Our computers are universal simulators (within memory and processing-power constraints).
This refutation refers to neither of the previously stated syllogisms but to a different concept altogether (computational universality), so I don't see any circularity here.
I agree that engineering projects shouldn't be attempted on people's free choices. To be very clear, I think men would benefit from focusing less on women, but I'm not prepared to tell anyone what they should and should not do (unless they employ coercion).
You wrote:
Let's see how this compares to other markets. Continuing with the car market, if someone is looking to buy a car but decides the market isn't favorable at the moment, isn't he right to wait until conditions improve? Or, if he decides not to participate in that market because he finds some fundamental flaws with it, isn't he right to withdraw from it? And if the answer to both questions is 'yes', how is the dating market different?
I agree that planned mass intervention would be a disaster – I'm a libertarian so I think that any such top-down attempt would be immoral anyway, let alone impossible. Instead, I was talking about slowly changing the culture from within.
Creating awareness of the issues as I've described them could be a start. Men could decide to pay less attention to beauty in women and instead value other traits more. Or they could both decide to deprioritize sex and dating in general.
I dismiss my previous syllogism and instead refer back to the DD quote I gave in the main article from BoI ch. 7:
To put this in syllogistic form:
Building on this syllogism, we can address animals separately (I think one of the weaknesses of my circular syllogism, and potentially the reason for its circularity, was that it did too much at once):
This particular argument first, then potentially my view on animal intelligence in general.
Same to you.
Yes.
I see the problem. If premise 1 itself depends on creativity being necessary for consciousness, then that means I (unwittingly) snuck that assumption into my original premise 2, when it was the conclusion I wanted to arrive at. Circular reasoning.
Thanks for pointing this out. Time for me to go back to the drawing board.
OK.
You've misquoted me again; as a result, the formatting is off. You can see an explanation here (that site is under development and the link may break). You can use that site to check quotes before submission (expect bugs). Or you can paste your quote into the browser's word search and, if you only get one match (the one in the textarea), it must be a misquote (that won't work in this instance because of the enumeration but it's a decent quick-glance approach in general).
I suspect that an explanatory theory of consciousness will provide such an argument. I'm afraid I do not have one yet, but you seem to imply that my claim's epistemic status will increase if it's a conclusion rather than a standalone conjecture.
That cannot be true because we'd always need infinitely many new theories to accept just one new one. Imagine if Einstein had proposed GR and then people had said 'but what does it follow from?' We still don't know. Coming up with the next theory (from which GR follows, if only as an approximation) is another creative act. And if we do find that next theory, people can then always say 'well but what does that theory follow from?'.
This approach exhibits the infinite regress of justificationism, so I'm skeptical as to whether you can "provid[e] [me] with a refutation of [my] claims about animal consciousness [...] without us clashing on epistemology [...]".
All that being said, I am still interested in your plan of demonstrating circularity, and this path...
...is still open. (You can see here that my quote is accurate.) I think your request can be rephrased in terms of breaking symmetry between the claims 'creativity is necessary for consciousness' and 'creativity is not necessary for consciousness'. I can then meet your request for my "preferred argument in syllogistic form" by breaking symmetry as follows:
Thus there should be a way for you "to demonstrate the circularity [you] see in [my] reasoning."
NYT article about Square making it harder for small businesses during the pandemic by increasing their money-withholding practice with little warning. But publicly they present themselves as caring about small businesses.
There is a petition with over 3,000 signatures on change.org to end this shady practice:
The petition links to https://squarevictims.org but unfortunately that site isn't working for me at the moment.
I've signed the petition to show my support.
This guy didn't end up doing anything.
He deleted my post even though it was exactly the kind of thing the fb group description asked for.
In BoI chapter 10, Deutsch has Socrates say:
But that isn't true. Previously, Socrates starts imagining a Spartan Socrates on his own and Hermes merely points it out:
It links to The Fountainhead.
Right, because they don't meet other criteria (such as not being "fantastical/crazy"). We have all kinds of criteria good theories must meet. DD wrote about this in BoI.
Re induction, I have pointed out that people use 'induction' psychologically. I do not disagree that past successes can be used to convince people to adopt a theory. That doesn't refer to induction as a process that can create knowledge.
If you're going to hold on to induction – Peirce's or someone else's – you better come up with a refutation of Hume's and Popper's work on it. I'm not interested in refuting induction for you, nor in making it work.
Regarding "[t]he source code of the universe", when I wrote "[i]n the above examples, reality is the underlying algorithm – the source code", I was debating whether I should clarify that I do NOT mean that reality is made up of source code. Looks like I was wrong not to. So, to be clear: I was merely using source code as a stand-in for reality.
Not in the scenario I've described, where you'd have no 'reason to believe' in your theory whatsoever, nor would anyone else, yet you'd be 100% correct. In addition, I quote BoI ch. 10 once more:
You wrote:
I think your request for "a good argument in favor" is indicative of a larger problem in this discussion. You seek supportive arguments, whereas I seek refutations, and I also don't consider a 'supportive argument' a success or as causing any sort of increase in a theory's epistemic status. Your methodology is justificationist in nature, mine is Popperian/'refutationist'. The reason you should accept the claim is that you cannot find a refutation of it (if indeed you cannot find one), not that I haven't given enough arguments in favor of it.
This difference in our respective approaches may lead to an impasse in this discussion. That doesn't mean we can't learn from each other, but I follow Elliot in thinking that if you're going to have a fruitful discussion, you better make decisive, yes/no arguments. I'd love for you to offer me a brutal refutation of the claim that animals are not sentient. Conversely, I'm not interested in providing "a good argument in favor" of my claims re animal sentience – not only do I doubt that any such argument will ever convince you because there could always be more justifications, but I also don't ask for such an argument in favor of the claim that animals are sentient after all.
Your first attempt at refutation was this:
Notably, this isn't a deductive syllogism of the kind you requested from me. It's inductive. But in any case, that is how we then got to the example with the beads, and this first attempt doesn't work, IMO, for the reasons I've explained re induction. But you can convince me that I'm wrong by refuting Hume's and Popper's work on induction – not by giving arguments in support of your view, but by refuting theirs.
I believe your only other attempt at refutation has been the claim that my argument is circular. I don't see it. But here's the syllogism you requested:
You can arrive at this syllogism by taking yours from #488 and reversing 1) and 2). (The major premise should come before the minor premise.)
As I hinted in #482, the syllogism may instead be:
(Given the links between creativity and criticism, we may eventually find these two syllogisms to be the same, but the difference in focus may be important in understanding animals and consciousness.)
Please explain how these syllogisms are circular. My current guess is that you're looking for a justification for 1), you think 3) would constitute such a justification, and so you misinterpret the syllogism as being circular.
You also wrote:
No. You had requested "a good answer to these problems" so you may "have a much more elegant epistemology to employ". The Popper reference was an attempt to help with that.
You can know that from theory.
I'm distinguishing between the epistemological and the psychological, as Popper did. That distinction matters because the two fields are often after different things. For example, I've quoted Popper here as saying:
Deutsch picked up the same difference in BoI – in ch. 9 he speaks of "matters not of philosophy but of psychology – more ‘spin’ than substance". And in #252, I mentioned the difference between the logical and the psychological problems of induction.
Back to your comment:
Not just because it's psychological, but yes, inductive reasoning is bad.
Depending on the underlying explanation, yes.
Say you have a bead-drawing algorithm (the kind of thing you might see in a virtual casino). Given that the algorithm works as follows...
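Something along these lines, say (a hypothetical sketch in Clojure – the function body and the color are my invention):

```clojure
;; Draws the same color every single time.
(defn draw-bead []
  :blue)
```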
...the 'inductive' approach would happen to be spot on.
But given that it works as follows...
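For instance (again a hypothetical Clojure sketch – rand-nth picks a random element from a collection):

```clojure
;; Draws a color at random on every call.
(defn draw-bead' []
  (rand-nth [:blue :red :green]))
```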
...the same approach would fail pretty soon – although you might find yourself very unlucky (or lucky, depending on how you look at it) and have it repeat the same color many times.
And given that it works as follows...
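For example (a hypothetical sketch – I'm passing the draw count in as a parameter for simplicity):

```clojure
;; Draws the same color for the first 999 draws, then flips.
(defn draw-bead'' [beads-drawn]
  (if (< beads-drawn 1000)
    :blue
    :red))
```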
...the 'inductive' approach would be spot on for the first 999 draws, and then it would suddenly fail when you're more confident than ever before that it's correct (like people were with Newtonian physics).
The Popperian approach says that making predictions is only part of reasoning, and it's not the main part. Reasoning is mainly about explaining reality, which involves resolving contradictions between ideas. In the above examples, reality is the underlying algorithm – the source code. If it's hidden from you, like reality, all you have is your knowledge of which beads you've drawn in the past, and even that you only have fallibly. But you don't limit yourself to predicting which beads will be drawn in the future. You look for cases where your predictions do not come true so you can improve your idea of what the algorithm looks like, i.e., resolve contradictions between what you think the source code is and its return values on the one hand, and the real source code and its so-far observed return values on the other. While we typically make predictions that are in line with past observations, doing so shouldn't be mistaken for induction.
Re the last example, draw-bead'', you might ask: 'If we've only drawn 500 beads, what earthly reason would we have to suspect that the code flips after 999?' As in: we would continue to think that draw-bead is the correct solution. We wouldn't conjecture that the algorithm contains that conditional (if (< beads-drawn 1000) ...) – after all, our predictions have always come true so far, so we've had no reason to adjust our model of the algorithm to include that conditional. In other words: we wouldn't be justified in introducing the conditional; we should only change our code when a prediction fails. And I would agree. But if somebody made that change, even without justification, they'd happen to be right! Not only would they be vindicated after 500 more draws, but they'd have discovered the true source code without any justification. So how can justification possibly matter?

No, I think you may have misread me. In #476, you asked, "[w]hat sort of evidence tells us that animals are not conscious?" I responded with a link to my 'Buggy Dogs' post. To be sure, there was an aside of mine in between on how consciousness isn't the same as creativity but follows from it, but when I wrote "For specific evidence [...]", I was referring specifically to your question.
I consider premise 2 – "Creativity is required for consciousness." – to be uncontroversial for the moment. But if you have arguments why that premise cannot be true, ie refutations, I want to know.
As I've said in #109 re corroboration:
That leaves the problem of induction. You also wrote:
Popper has addressed induction thoroughly. Have you read chapter 1 of his book Objective Knowledge, titled 'Conjectural Knowledge: My Solution of the Problem of Induction'?
On second thought, re when I wrote:
I don't think I need to update that line. The implication is that, when I log in to Square using some email address, Square must have that email address on file. (Otherwise I wouldn't be able to log in with it.)
Square had my up-to-date email address on file. When I wrote that they sent an email "to an old email address I do not use to log in to Square anymore", I meant that they did so despite having my new email. I may update that line for clarity.
Square isn't a credit-card company. They're a payment processor.
I realize that. I don't think it would have helped anyway since no amount of activity in my bank account could convince them that my client is not a fraudster. (Recall that this particular issue was caused by my client's cards being declined repeatedly.)
I disagree. Even if their closing my account was legitimate – and I don't think it was – that is a separate issue from them keeping my money past their self-imposed deadline without explanation. I cannot imagine that the latter is legal.
I wrote:
(Which you misquoted, btw, by not italicizing the 'and'. Those italics are important. Continuing with my quote:)
Then you said:
One can mistakenly think that it worked in some situations and also mistakenly think that it didn't work in others. We're fallible in our interpretation of test results, too. But in any case, I wouldn't restrict my truth claims about the theory to only those applications of it that I have observed (and correctly think worked). A major 'reason to believe' – and I'm phrasing this in justificationist terms on purpose – that a theory is true, or closer to the truth, is that it solves previously unsolved problems. People can and do make such truth claims without ever testing a theory – so there can be no corroboration or (psychological) induction at play.
Regarding your adjusted GPS example about precision timing of industrial-control systems, you wrote:
As I believe I've said before, there was a massive sample size of tests of Newton's theories over the centuries, they were all successful, and yet Newton was wrong. Do I doubt that one could convince people based on past success? As I've said: no. But that's a psychological question. Sometimes, just a few decisive negative test results undo thousands of corroborations.
To be clear, the Deutschian claim as I understand it is that some entity is conscious if and only if it is creative. (Though I have wondered whether it's really: some entity is conscious if and only if it is critical. But I digress.)
Since it is an 'if and only if', we can deduce a lack of creativity from a lack of consciousness, and vice versa – can we not?
In #481, you wrote:
Are you interested in being right or in finding flaws in your thinking?