Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I will, you can subscribe via RSS – without a Twitter account. Rationale
Ah yes, my mistake.
The problem remains, however, as oftentimes the opposite argument is made: that simplifications are made to words people use often (e.g., so that spelling gets easier/shorter, etc.).
A solution to a moral problem either works or it doesn’t. And the more moral problems one solves, the more one has progressed morally. That’s what I mean by “morals are objective.”
I don't think so. Morals are objective, and solutions to moral problems either work or they don't.
Besides, ethics as "the negotiation of conflicts of interest under conditions of shared purpose" sounds impressive (maybe) but it's vacuous.
When I was working on my last ebook, I found unsplash.com to be immensely helpful in finding beautiful, free-to-use, high-resolution images of all sorts. May even save you $$ if you can find a photo good enough for your cover so you don't need to hire a cover designer.
Simpler: ethics is the study of moral problems.
Better yet, it solves some of your own problems. If you’re both the creator and consumer, you can find and improve its flaws more easily, and your product will be so much better for it 💪
Custom implementation of 3 * 4 in JavaScript:
Array.from(Array(3).keys()).map(() => 4).reduce((acc, curr) => acc + curr);
Same in Berlin:
reduce(+ repeat(3 4))
Which do you prefer?
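For readers puzzling over the JavaScript one-liner above, here is the same expression expanded with comments (a sketch of how it evaluates, not new functionality):

```javascript
// 3 * 4, built from scratch:
const product = Array.from(Array(3).keys()) // Array(3) is a sparse array of length 3;
                                            // .keys() yields 0, 1, 2, so we get [0, 1, 2]
  .map(() => 4)                             // replace every element with 4: [4, 4, 4]
  .reduce((acc, curr) => acc + curr);       // sum them up: 4 + 4 + 4

console.log(product); // 12
```

The Berlin version expresses the same idea more directly: repeat 4 three times, then fold the result with +.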
🙏
That's not to mention that wellbeing cannot be maximized because it can always get better.
I don't think slavery was abolished to maximize the well-being of society. It was abolished because it is abhorrent. It was an instance of error correction—in this case a (grave) moral error—not an instance of optimization.
I'm afraid such a simulation is impossible because the main influencing factor—knowledge creation—is unpredictable in principle.
(It isn't really a problem anyhow!)
Well, it's like I said: through conjecture and criticism we can solve problems of all kinds—be they scientific, moral, or otherwise. This interplay between conjecture and criticism is at the heart of Popperian epistemology, and it has enough reach to solve the is-ought problem.
Ah, I agree that science won't solve ethical problems, at least not most of the time. Moral problems are philosophical in nature, not scientific, yes.
Then presumably you think that abolishing slavery was no moral progress—because there is no such thing as moral progress?
I know it. I favor Popperian epistemology. The famous "you can't derive an ought from an is" is not a problem after all: as David Deutsch once said, we're not after deriving, we're after explaining! And explaining is done via conjecture, both for morals and otherwise.
I'm familiar with the work on "AI safety." Applying that stuff to narrow AI is morally ok, but applying it to AGI—who, by definition, are people (conscious, creative, etc)—is very sinister. Turns people into slaveholders.
Intelligence is the ability to solve problems. Including moral ones. These things are not orthogonal.
@HeuristicAndy
Boredom is your mind's way of telling you that something isn't for you. If someone thinks coding is dreadfully boring, they shouldn't do it.
Re GPT-3: Probably not.
When you put all of these things together: low bar to entry, good pay, beginner-friendliness, low cost, ability to make $$$ fast—how could you not recommend someone enduring hardship learn to code? It's their ticket out!
Seventh, the pay is good. Really good, even for beginners. I've never heard of a junior developer struggling to make ends meet. Your skills are too valuable for that to ever happen.
Related to that and sixth, because programming skills are incredibly valuable, you will rarely find yourself jobless, and if you do, it won't be for long. You can always find some job as a programmer to pay the bills.
Fifth, there are waaay more jobs than developers out there. Writing halfway decent JavaScript code will make you $$$ within a few months of writing your first line. Getting that first paycheck thanks to a self-taught skill is one of the most empowering things you can experience.
Fourth, progress is rapid and transparent. Odds are whatever platform you're interested in working on gets improved at least every couple of months. That makes your life easier and increases your productivity. What other industry can say that?
Third, the industry is mostly open. You can find millions of lines of code for free, ask questions about it, tweak it, run it, tweak it again, contribute. The support is unlike any other and especially friendly to beginners who show they want to contribute and offer value.
That's part of what's so great about this industry: what matters is skills, not degrees. Talkers don't get far. Only qualified doers do.
I have been a software engineer for just short of ten years, and never once has anyone asked me for any degrees. All my clients/employers ever wanted to know was whether I could solve a problem. Whenever I demonstrated I could, they hired me.
Second, you do not need a degree. In fact, I suggest you don't get one. You can learn everything you need online for free. And you can learn it in a matter of months. Compare that to a four-year degree and thousands of dollars of debt. It's a no-brainer.
First of all, coding is relatively easy. There, I said it. Granted, it's not the easiest thing in the world, but it's easier than many other career paths, esp. those involving manual labor, long hours at night, or work outside. The notion that coding requires genius is misleading folklore.
Detractors allege that telling someone who's in a bad spot to "learn to code" trivializes their hardship. But that's not at all the case: learning to code really is the best way to escape hardship. Why?
Actually, suggesting someone learn to code is great career advice.
A thread 🧵
What the let's-worry-about-AI folks don't know is: moral knowledge grows by the same logic as all other knowledge, and increased processing speeds make possible increased error correction of moral knowledge, too. So, we should give AGIs as much processing speed as possible!
Fairly smart, but not intelligent at all.
Yes, highly relevant for today's discussions with social-justice warriors.
Yes, automation frees people up to be more creative. By not having to execute tasks mindlessly, they can solve problems creatively.
"Machine speeds rather than human speeds"—the important constraining factor is going to be the performance characteristics of software, not hardware, as an ungainly and slow piece of software runs slowly even on a fast computer.
@SpaceX @NASA @AstroBehnken @Astro_Doug @Space_Station
Very nice. Congrats.
Saw this interesting question on Reddit: if Corona hadn't happened, how would the past ~4 months have played out differently for you?
I would have been able to go to the gym. And I probably would have eaten out more.
How would things have been different for you?
Exactly. Maybe you'll also like what Popper said: that we are all equal in our infinite ignorance.
@FranklinAmoo @JulienSLauret @TDataScience
Unless GPT-3 brings us closer to understanding how the human mind works, it's not a step toward AGI. Nor can AGI be achieved through incremental steps—it requires something wholly new and qualitatively different.
By definition AGI and human intelligence are equivalent.
Agreed, because AGI cannot be achieved in steps—it requires something wholly new and qualitatively different. We won't be able to build it unless we understand how the human mind works.
Yes, the principle of optimism is probably one of the most important principles we know. Also his conjecture that problems are soluble and inevitable.
Increased intelligence goes along with increased error correction, including the correction of moral errors and stability-related errors.
Regulation employs coercion and subdues creativity (which is powered by error correction) and therefore makes society less stable.
@Ronald_vanLoon @DeepCaked @demishassabis @goodfellow_ian
Good stuff. Would be cool if they could make the voice sound younger, too, to go along with it.
Re the first one: I would guess the word "colleague" is used more frequently than the word "monolog."
Actually I don't know that you're a sir, maybe you're a ma'am, but you know what I mean.
In my head a quiet voice went, "huh, that could sound like 'fish'," but thinking I had to give the right answer, I pronounced it "go-tea."
It's funny you mention that one. My high-school English teacher showed us that one. She simply wrote "Ghoti" on the blackboard and asked us how we would pronounce it.
The differences between British and American English are inconsistent.
It's monologue/monolog, dialogue/dialog—but why not colleague/colleag?
Similarly, it's metre/meter, centre/center—but why not table/tabel?
“You can travel from anywhere on Earth to every other place on Earth in less than an hour if you go through space.”
Hopefully very, very soon! twitter.com/HumanProgress/…
Also interesting how some of the very people who are fascinated by its capabilities are automatically worried about its implications, too.
Instead of saying, “wow, this is neat. I can use this!” they say, “wow, this is neat. But most people are going to lose their jobs!”
I like the self-delete. What scenes do you think that one and the dying one would have been used for?
@thenumber8008 @RichardDawkins
I like “idiomemes” and “idemes.” Not bad ideas!
A good example of how well-meant positive rights turn into hellish nightmares.
They don't derive knowledge. They create it afresh.
RT @PessimistsArc:
1897: Guy asks Washington D.C. for permission to use his new fangled horseless carriage.
Their response? Banning all ho…
@lostintaut @RichardDawkins
Oh cool, wasn't familiar with "protologism."
"Intrameme" could work, but ideally the word would be as simple as "meme." Something single-syllable, maybe rhyming with meme...
@lostintaut @RichardDawkins
One last thing: "inter" means "between" or "across," so "inter" may confuse people into thinking that we're talking about ideas that spread between minds.
"Intra" means "within" which fits better.
So there's a difference in meaning. E.g. intranational != international.
@lostintaut @RichardDawkins
Hmm not bad. Intra maybe better. But a mouthful in any case.
- The Communication of Authority
- The Disambiguation of Hilarity
- The Artificiality of Anxiety
- The Machiavellianism of Uncertainty
- The Latitudinarianism of Diversity
- The Schizosaccharomycetaceae of Disparity
Your algorithm has reach.
@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner
Interesting. I've had similar thoughts about humor (though not in terms of rewards).
RT @realchrisrufo:
Seattle is quickly moving forward with its plan to "abolish prisons."
I've received a trove of leaked documents from w…
Yes. I think this tells us something new about meme theory: not only does a meme have to be good at spreading between people and getting its holders to enact certain behaviors, it must first spread within minds as a meon. That's how it causes behavior in the first place.
@fwmm @micahtredding @RichardDawkins
I believe computational universality follows from the laws of physics as a conclusion (see Deutsch's work IIRC), but even if it were purely conjectural, so what? All theories are conjectural (Popper), so that in itself is not a criticism.
@fwmm @micahtredding @RichardDawkins
The abstractions I wrote about are not at all arbitrary. They are deliberately placed in the context of a larger theory that's hard to vary, and their place in it is hard to vary, too.
@Crit_Rat @ChipkinLogan
BTW, are you by chance referring to a specific theory of ideas that replicate within a mind/have you heard of that concept before? I've been trying to find evidence that maybe the neo-Darwinian theory of the mind isn't new.
@Crit_Rat @ChipkinLogan
Ok but sometimes I want to speak only of the former not the latter, so it would be good to disambiguate them.
@fwmm @micahtredding @RichardDawkins
If we understand the mind tomorrow, and build AGI on a computer, that AGI will not have anything to do with brain hardware—nor could it possibly, because it won't run on a brain, nor would the computer it runs on have to imitate the brain in any way, because both are universal already.
@fwmm @micahtredding @RichardDawkins
That our progression in computer science was bottom-up does not refute computational universality or the substrate-independence of software.
And because computation is universal, there is no different kind of computation going on in a mind.
Maybe “meons”...
To be clear, they are already part of the neo-Darwinian evolution that occurs in a mind even if they never become memes. Both meme evolution and meon evolution are neo-Darwinian.
Actually scratch that, “neon” is already a name for a chemical element.
“Neons”—what do you think? As in the “neo” in “neo-Darwinian theory of the mind.”
Title suggestion for David’s next book: “The Beginning of Infinity 2: The Middle of Infinity.”
@fwmm @micahtredding @RichardDawkins
We don’t explain word processors in terms of hardware, do we? Why should we when it comes to the mind?
@fwmm @micahtredding @RichardDawkins
Why? The mind, like all software, is substrate-independent.
If only the word weren’t already taken!
Those are one’s gym-related ideas only. ;)
@lostintaut @RichardDawkins
Looks as though that distinction is already in use:
And I’d rather not have the word “meme” in the word. A rhyme of it would be cool maybe. Or something completely new. twitter.com/micahtredding/…
@fwmm @micahtredding @RichardDawkins
I would just focus on ideas replicating as software, independent of their substrate.
After all, the whole point of AGI research is to run the program on a computer that isn’t the brain.
@fwmm @micahtredding @RichardDawkins
I would ignore the brain entirely and focus on the mind. Yes, all information processing is physical, but we have figured out how to translate abstractions into physical movements (by building and instructing computers), and the brain is a computer.
@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner
Glad you read the article and appreciate the feedback.
@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner
There are many “theories of consciousness” along those lines but I’m not particularly impressed with them. Self-referentiality has a woo-woo status somehow. May have to do with recursion being intimidating. Don’t see how that explains anything.
@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner
Yes, I’ve had similar thoughts.
Also interesting how we forget most dreams quickly after waking up. Must be short-lived replicators. Adapted to the dream state, overwhelmed in the waking state.
@tjaulow @ChipkinLogan @astupple @popper1902 @RichardDawkins @ella_hoeppner
Not sure I understand—mind elaborating on humor? You’re saying there’s an explanation of humor that says the same thing I wrote about consciousness?
I like it but too easy to confuse with memes when heard spoken.
@micahtredding @RichardDawkins
Ah. Not sure why you threw linguistics in there but yes, generally speaking, just thinking of an idea may help spread it through the mind.
Judging by my notifications it took you three min to read a fifteen min essay?
@micahtredding @RichardDawkins
What do you mean by "simulation" here?