Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I do, you can subscribe via RSS – without a Twitter account.
The quote is also evidence that Musk is definitely a reductionist, because he throws together concepts like AI (which is software) and the limbic system and cortices (which are hardware), which doesn't make sense.
In the wannabe-AGI-slaveholder department Musk is heavily influenced by Bostrom.
Do you see how much pessimistic prophecy suddenly found its way into what was otherwise an optimistic celebration of engineering progress? Existential threats? Society-wide planning? Etc.
"And so I think it's going to be important from an existential-threat standpoint to achieve a good AI symbiosis, and that's what I think might be the most important thing that a device like this achieves."
"...and having that symbiosis be good such that the future of the world is controlled by the combined will of the people of Earth. That's obviously going to be the future that we want, presumably, if it's the sum of our collective will..."
"On a species-level basis I think it's going to be important for us to figure out how we coexist with advanced artificial intelligence [and] achieving some kind of AI symbiosis where you have an AI extension of yourself, like a tertiary layer above the limbic system and cortex.."
Okay the live stream is over so I can now rewind to the corresponding passage and quote him properly:
Musk's approaches to AGI "safety" and social "planning" are oddly totalitarian and pessimistic sounding. Knowledge of engineering won't help with that. He needs more knowledge of philosophy. Disconcerting given how much influence he has in terms of being able to spread ideas.
Third, summing the will of everyone with the link into the "will of society" doesn't work without introducing all kinds of inconsistencies and paradoxes. See Balinski and Young's theorem as explained in BoI chapter 13. There can be no coherent "will of the people."
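A minimal sketch (my own illustration with made-up numbers, not from the tweet or from BoI) of one such inconsistency, the "Alabama paradox" in largest-remainder apportionment: adding seats to the legislature can cost a group a seat even though no preferences changed.

    from math import floor

    def hamilton(populations, seats):
        # Largest-remainder (Hamilton) apportionment: give each group the floor of
        # its quota, then hand leftover seats to the largest fractional remainders.
        total = sum(populations.values())
        quotas = {g: p * seats / total for g, p in populations.items()}
        alloc = {g: floor(q) for g, q in quotas.items()}
        leftover = seats - sum(alloc.values())
        for g in sorted(quotas, key=lambda g: quotas[g] - alloc[g], reverse=True)[:leftover]:
            alloc[g] += 1
        return alloc

    pops = {"A": 6, "B": 6, "C": 2}
    print(hamilton(pops, 10))  # {'A': 4, 'B': 4, 'C': 2}
    print(hamilton(pops, 11))  # {'A': 5, 'B': 5, 'C': 1} -- C loses a seat as the house grows

Roughly, Balinski and Young proved that no apportionment rule can avoid all such paradoxes while always staying within quota, which is one reason summing individual wills into a single "will of the people" breaks down.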
AGI "control problems" are, as I have explained elsewhere, cynical, slaveholder-like, mind-control games. Don't control AGIs, let them be. They're people like you and me.
First, an AI "extension" of oneself (if it is AGI) doesn't make sense. An AGI is a person. You can't be more of a person through some extension. You either have a second person sharing your brain for resources, or you don't. And you either are a person, or you're not.
There may be mistakes in this summary because he said it quickly and I couldn't write it all down in time. BUT if accurate, there are lots of problems with what he said.
Musk just claimed that the biggest thing the link could achieve is (paraphrasing) "symbiosis" with an AI extension of oneself in a tertiary layer of the limbic system and "controlled" AI, and summing together the will of every person into an emergent "will of society."
Yes, I've heard about that "low-level," hardware-based approach. I think it's deficient (let alone dehumanizing).
Maybe then neuronal patterns somehow help you refute your conjectured explanations. But the conjecture has to happen first.
In other words: you can't just "read off" the explanation for consciousness from neuronal patterns. You have to conjecture bold explanations.
Popper decades ahead once again. Paraphrasing from a conversation with Konrad Lorenz (fallibly, from memory): If neuronal patterns were shown to correspond directly to qualia, all you could claim is a tight parallelism, but that says nothing about the mental states themselves.
Employee claims that with the link they could now observe neuronal patterns that associate with mental phenomena (qualia?)
Judging by these statements, it seems to me they are reductionist (maybe without explicitly realizing it), which will put a pretty hard limit on how much they can understand about the mind.
Musk claims link can shed light on consciousness. Employee expands and claims the "hard problem will vanish very quickly."
Prophecy.
I'm not saying they shouldn't test on animals. I'm saying animal welfare is not an issue.
Employees now signaling how well they take care of the animals. 🙄 "We don't force things on them as much as possible."
The "as much as possible" doing a lot of work there. For, how could the pigs possibly consent to surgery?
Musk says link could serve to create backups of memories and let you upload and download them.
As I've hinted at, in addition to hiring a neuroscientist, I think they would benefit from hiring a Popperian neurophilosopher (i.e. epistemologist) if they really want to tackle big problems like qualia.
Musk just said that eventually they want to get to being able to type using your thoughts. For now they want to help quadriplegic people.
Calling the pigs happy reminds me of the stuff in BoI chapter 12 about mistaking a proxy for the underlying phenomenon.
A brain implant can't solve your problems — that requires creativity. Unless they run an AGI on that implant, which would mean you'd have another person physically inside your head. Don't think that's what they're going for.
See David Deutsch's conjecture that unhappiness is a result of being chronically baulked in one's attempts to solve problems.
Musk claims interfacing with the hypothalamus could cure depression and anxiety. I find that highly doubtful. These phenomena are best explained on the level of ideas.
Happiness and other qualia are philosophical problems and properties (or perhaps pieces) of software, not hardware. Do they make that distinction? Are they reductionist? Have they thought about these questions?
Have Musk and the team given thought to the mind-body problem? Do they want to work on the mind in addition to the brain? How can they make predictions about the link's effect on human happiness without an investigation into what happiness is? Etc.
Musk says the main purpose of the presentation is to recruit the best people. No prior experience with brains required. They want to hire in the following domains. What's missing most notably from this list? PHILOSOPHY. https://t.co/XohBOpOww3
Link would connect to phone through an app. Range roughly 15 to 30 feet.
Musk claims link can predict movements of pig on treadmill fairly accurately.
The link in the pig that has one is making beeping sounds for incoming neural signals.
At most you could say (for now) that the pig's hardware is functioning properly after link removal.
First pig does not have the link. Second pig used to but doesn't anymore. He claims that second pig is "healthy and happy."
Big epistemological mistake: making a prophecy about human happiness without explaining happiness or similarities and differences between pigs and humans.
I'm glad they tested on animals before testing on humans, which is both safe and ethical.
Musk is presenting demo with live pigs, some of which have the implant.
Musk claims brain does not bleed during procedure as wires are inserted.
Device installed in hole in skull and replaces that portion of the skull.
Device implantable in outpatient procedure in < 1hr without general anesthesia.
If I just understood him correctly, the device he's presenting could play music in your head.
Musk claims Neuralink can fix the problems below. From what I can tell so far, Neuralink creates hardware solutions. Therefore, I find it doubtful that it could help with depression and anxiety. Memory seems to me a hybrid software/hardware problem. The rest are conceivable. https://t.co/QHHvqUfEY3
Broadly speaking, Neuralink wants to solve brain and spine problems.
I'll be live-tweeting my thoughts and Popperian comments on the Neuralink keynote happening now: youtube.com/watch?v=DVvmgj…
👇
One does have to wonder how much more and faster progress Neuralink could be making with better epistemology.
That's a great pic, because you can see both phenomena at once!
And yes, this all makes sense now — appreciate the explanations.
How do you make sure short-term decisions like these do not lead to inconsistencies with the BB story line? It must be hard to think everything through every episode.
Yes. (Though, to be clear, in humans, inborn algorithms other than creativity play a very small role in good explanations of human behavior and mental states. It’s mostly about the ideas they create during their lifetime.)
Therefore, we tentatively conclude that animal consciousness is not real.
Also, recall David’s criterion of reality: something is real if it plays a role in our best explanations of something. All animal behavior is perfectly explicable through inborn algorithms. Consciousness does not play a role in those explanations.
There is an explanation linking creativity and consciousness. And an explanation of the genetic mutation that gave rise to both, but is missing in animals.
We don’t have a good theory of consciousness, but we have good, non-refuted theories according to which animals aren’t conscious. Important to distinguish there.
You don’t need to know how to play the piano to detect a flaw in a pianist’s performance.
Yet your husband’s view that the Nazis were aggressors in part because of their genes is surprisingly close to this mistaken view, is it not? I am referring to this public discussion: youtu.be/hYzU-DoEV6k
The plot thickens. If true, highly relevant and eye-opening as to how the communist party in China has been covertly pressuring Western politicians into executing lockdown measures. twitter.com/MichaelPSenger…
David Deutsch offers a compelling, hopeful and inspiring vision for society in his book “The Beginning of Infinity.”
Forget @jack’s disastrous donation of $10m to aggressor @DrIbram. Peanuts, it would seem: BBC now allocating £100m to finance self-censorship and, presumably, active discrimination against those who fit “the wrong script,” meaning white colleagues and collaborators. twitter.com/BBC/status/127…
RT @ChipkinLogan:
My story about Constructor Theory has been published with Gizmodo - gizmodo.com/a-meta-theory-…
@RosePastore @gizmodo @DrBri…
@infexm1 @ExmuslimsOrg
Interesting thought. To me it seemed to indicate she knew exactly what he was getting at but didn't want to admit it.
I didn’t say that was the purpose. That’s the way he wants to achieve his purpose. And surely you would agree that it’s ironic if his goal really is to reduce racism?
A better way to reduce racism is not to put so much emphasis on race and not to stir up hatred like Kendi does
@ckshowalter @giantcat9 @_Islamicat
Jabril has left for Somalia
By discriminating against white people for being white? Do you see the irony in that?
And today they're writing articles arguing that telegrams are racist.
Does this quote sound to you like he wants to help heal the recovering wounds of the past, or tear them open and put salt in them for political gain?
A quote from his recent book:
"The only remedy to racist discrimination is antiracist discrimination. The only remedy to past discrimination is present discrimination. The only remedy to present discrimination is future discrimination."
$10M toward hate, division, and coercion. twitter.com/jack/status/12…
Remember when "the lungs of the Earth" were burning and everyone thought the world was ending?
Was Leonardo DiCaprio able to fix it or what happened?
I’ve thought about it. But then tomorrow substack might support BLM. May be better to go fully self-hosted.
Are you familiar with the concept of universality? Explanatory and computational in particular?
When I worked at General Assembly (bootcamp web-dev course), they also had a check mark in every row.
@Yankuniz @CNC3P0 @SamHarrisOrg
If someone told you to obey some demand you disagree with—e.g., they force you to go outside and mingle with old people, thereby increasing their exposure to the virus—how would that make you feel? Would you not consider resistance your moral duty?
@berndj @ModelsofMind @CNC3P0 @SamHarrisOrg
If we are mistaken—and we almost always are—coercion must entrench our mistakes.
@berndj @ModelsofMind @CNC3P0 @SamHarrisOrg
That’s not to mention that we can be mistaken about which ideas are true or false, better or worse, etc.
@berndj @ModelsofMind @CNC3P0 @SamHarrisOrg
I didn't say all opinions have equal merit (they don't).
I was arguing against coercion. It's not okay to coerce someone even if they have ridiculously false (seemingly or not) ideas.
Oooh that’s a very good idea.
How come there’s no shadow of the plane itself at the tip? Too small/blurry maybe?
It moved with the plane and got longer over time. https://t.co/BxrPxUucdl
I once saw a single, thin, black line on the surface of the Pacific Ocean. What might that have been?
Maybe one of the most exciting silver linings of Popperian epistemology being underrated is that, recent progress notwithstanding, it's still largely underdeveloped—many great discoveries remain to be made and you can actively shape the field.
It's a good time to be a Popperian.
Wasn't meant as a refutation, sir, only as a contribution.
Only problem is, you can’t have one without the other.
@_Islamicat @JarvisDupont @TitaniaMcGrath @TheBabylonBee
Is sad day for free speech but great day for catliphate.
I'm not sure there is a "best" way, but I suggest focusing on problems you want to solve and pursuing what's fun and interesting to you. That's what I did when I started. In case it helps, I wrote a bit about the topic here: medium.com/swlh/anyone-ca…
That, despite being false, the claim is so widespread?
RT @jdnoc:
The reason my product generates $35k MRR is because my product generates $1,000,000 MRR combined for my customers (1000 customer…