Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Tweets
An archive of my tweets and retweets through . They may be formatted slightly differently than on Twitter. API access has since gotten prohibitively expensive – I don't know whether or when I'll be able to update this archive.
But in case I do, you can subscribe via RSS – without a Twitter account.
That reminded me of this scene of Family Guy: youtu.be/NCBvQX1TafY
I think barre is going to help you get that physique also.
It could. It’s not guaranteed to succeed, but it’s a soluble problem.
AGI would be fully operational without any IO whatsoever, btw.
An unbounded general purpose problem solver would be an AGI. I guess that error correction is a requirement for being unbounded. So “no” to the second part of your question: that would violate AGI’s universality.
Present-day computer programs are pessimistic: they are written to fulfill a specific purpose and then terminate. They cannot keep going in an unbounded fashion or correct errors the programmer made. They are designed to oppose unbounded progress.
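A minimal Clojure sketch of what I mean (a hypothetical example: the program has one fixed purpose and terminates, and it cannot recognize, let alone correct, an error the programmer made):

;; One fixed purpose: average a hard-coded list, print it, exit.
(defn average [xs]
  (/ (reduce + xs) (count xs))) ; divides by zero on an empty list –
                                ; the program can't notice or fix that

(defn -main []
  (println (average [1 2 3])) ; prints 2
  (System/exit 0))            ; terminates – no unbounded continuation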
Error correction on a software level, yes. The brain's architecture doesn't matter though, see episode 03. For it to matter, it would need to violate computational universality. soundcloud.com/dchacke/artifi…
You open a book by a scientist and on the first page they profess a belief in god.
Do you a) put the book down immediately because a scientist who believes in god doesn’t get what science is about (good explanations) or b) keep reading because that’s ad hominem?
Yes. It’s a good example of both reductionism and ignorance of computational universality.
Episode 10, “Of Books and Code”, on the similarities between non-universal printing and present day software engineering, is out. Heavily inspired by @DavidDeutschOxf. soundcloud.com/dchacke/artifi…
In addition to hardware speed, I think it depends on the thought’s performance characteristics as well. Sometimes improving that is better than increasing hardware speed.
I read joanna-bryson.blogspot.com/2014/09/artifi… I think we can use the term AGI without discounting successes in AI research. However, successes toward AGI have been nearly nil, because AI and AGI are basically opposite technologies. See: soundcloud.com/dchacke/artifi…
@ks445599 @noa_lange @DavidDeutschOxf
Could it be both? Or are they mutually exclusive?
AI research has been progressing swiftly. Progress in AGI research is comparatively slow, with the only contributions so far coming out of philosophy.
That was also my first thought.
@RealtimeAI @atShruti
I’d use my smooth talking - uh, talk - talking... uhm. Words. I’d use words.
(Stolen from Family Guy, and it’s more fun when read with Peter’s voice.)
RT @Madisonkanna:
Love this article and especially this last part of it by @rivatez
medium.com/@rivamelissate… https://t.co/htttkIw5uh
@sfiscience @seanmcarroll @KateAdamala
What about life is problem solving?
Any books from the field you’d recommend?
@HermesofReason @SamHarrisOrg @wakingup
Sometimes Sam is surprisingly optimistic.
RT @naval:
“the destruction of optimism, whether in a civilization or an individual, have been unspeakable catastrophes...we should take it…
Considering reading Dawkins to learn more about memes. Does he cover mostly transmission of ideas between people, or does he also go into what happens to ideas in a single mind (origin, competition, etc)?
@noa_lange @ks445599 @DavidDeutschOxf
Agreed that we should always assume to find conflict between any two ideas.
For some this is hard to imagine: how could knowledge of multiplication conflict with knowledge of how to hold a spoon?
Not sure I understand - can you elaborate?
That’s what I mean. You’re an empiricist. Empiricism is false. You won’t get around epistemology. Building an AGI is nothing but epistemology.
My worry about pseudo-randomness is that it’s reproducible and loops; I think it would “guide” evolution somehow, make it not blind.
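A quick Clojure sketch of the reproducibility part (seed and bound are arbitrary; the looping comes from the generator's finite period):

;; The same seed always yields the same "random" sequence.
(defn sample [seed n]
  (let [rng (java.util.Random. seed)]
    (vec (repeatedly n #(.nextInt rng 100)))))

(= (sample 42 5) (sample 42 5)) ; => true – the run replays identically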
Agreed. I wonder if any two conflicting inborn expectations will do and it will grow from there? Or does it have to be specific ones?
I’ll still entertain an explanation of how your AGI works but at this point I doubt you have one.
You need to study epistemology if you want to contribute to AGI research in any serious way. “The Beginning of Infinity” by David Deutsch is great. If you don’t read it you’ll waste your time. Alternatively (but worse) you can listen to my podcast: soundcloud.com/dchacke
Our knowledge of the world. Explanations of how the world works.
How do we think?
It's not clear to me what you mean by "symbolic schemas", but we do not create knowledge as a result of observation. That's an empiricist mistake.
Sounds like you're already working on implementing it, which means you have an explanation of how it works?
If your research is not contributing to Popperian epistemology, your efforts are futile. Unless you have something better, hence my question about how it works.
Episode 9 of the podcast on artificial creativity is out, about the problem of specification, and other problems with present day evolutionary algorithms. As always, greatly inspired by @DavidDeutschOxf.
Are you studying epistemology in general and Popperian epistemology in particular at all?
This has nothing to do with AGI. OpenAI is not working on it, despite appearances. Listen to this episode from my podcast to find out what they’re doing wrong: soundcloud.com/dchacke/artifi…
If you’ve been listening to my podcast, you’ll find errors in virtually every paragraph of this announcement. twitter.com/OpenAI/status/…
@markcannon5 @jasonio_ @bnielson01
What is a network of neurons as opposed to a neural network?
I’m in Vienna right now and thought I’d go to Popper’s old address and I was so determined to take a selfie in front of it BUT it turns out there’s scaffolding all around it and you can’t see the building at all! So the best I got is this photo of the address sign on the building https://t.co/880n9PjCvJ
Yeah that’s what the “G” is for. Moral knowledge is also knowledge and an AGI is a universal knowledge creator. If it can’t create moral knowledge, or all other kinds of knowledge, it’s not an AGI.
RT @CodeWisdom:
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” - Martin…
I’m going to go on the record as saying 1) they won’t mention Popper or reference/use his work anywhere (a mistake) 2) therefore their work will at first be overhyped and then disappointing.
But I really want to be wrong on either or both of those.
RT @DavidDeutschOxf:
AI is the opposite of AGI.
Trying to shackle an AGI's thinking is slavery.
Explained in my essay "Beyond Reward and…
One beautiful ramification of this (I think) is that an AGI would work just fine without any input or output channels. Those aren’t part of the required hardware.
@lynz_h55 @davidarredondo @ashik_shanks
If you’re getting at the problem of sources, those don’t matter. Only content matters. If you have a great insight in a dream, it doesn’t make sense to discount that insight. The real source is always the same anyway: your mind.
@lynz_h55 @davidarredondo @ashik_shanks
You’re saying you had the dream so yes, the dream is real no matter its contents. I guess you’re really asking whether the contents are real. They’re not real as in “out there in the physical world”. But they’re real as in “abstractions in your mind”.
Also remember that if you enter California you’re entering everyone California has ever been with and that’s a LOT of people.
@lynz_h55 @davidarredondo @ashik_shanks
Whatever doesn’t figure in our best explanations. Eg god, magic, etc.
To be clear, just because both are real doesn’t necessarily mean they interact; but they do. Eg software affects the physical world.
Causality = (tentatively held, conjectured) explanation
According to Deutsch, something is real if it figures in our best explanations of something, see “The Beginning of Infinity”. That’s his criterion of reality.
If you insist however, IIRC Popper took Tarski’s definition of truth (= correspondence to facts) and amended it a little by saying that whatever is part of a true theory should be considered real. Would need to check the source though, probably also somewhere in C&R.
Instead of looking for definitions I’d go with the common sense definition of reality that everyone knows.
The latter one is shorter and better (imo).
I don’t have it in front of me right now, but I think in “Conjectures and Refutations”. You can also just read the chapter “The Reality of Abstractions” in David Deutsch’s “The Beginning of Infinity”.
Ah, more Chomsky nonsense. No doubt they do interact, and Popper explained how.
AirPods could use the head’s heat as an energy source? Something for @Apple to think about.
RT @PessimistsArc:
TWITTER MICROFILM 🔎 📰
Read 1896 article on physicians blaming bicycle for lunacy below: https://t.co/XxT8whG5t5
We don’t die from that. It’s narrow AI; if it ever got dangerous (and that’s a big “if”), we could think of something it can’t yet do to overwhelm and disarm it. It’s unclear how you get from bluffing at poker to death so quickly.
Agree with 1). I’d grant more progress re 2): what we now consider “common sense” (no slavery, suffrage, universal human rights, etc) used to be very controversial. I agree however that 1) has progressed much faster than 2)!
Did you like the movie? I’ve been considering watching it.
RT @ToKTeacher:
@Azaeres
Long before the most well subscribed ideas proved themselves useful, they must first have been created in the mind…
I had skimmed it when I asked. Unless there’s some nifty CS thing that tells us every algorithm can be written using recursion, I don’t see recursion being as fundamental as you’re suggesting.
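For context, here's the kind of equivalence I'd have in mind – a loop mechanically rewritten as recursion (a Clojure sketch; the names are mine):

;; Iterative factorial via loop/recur...
(defn fact-loop [n]
  (loop [i n acc 1]
    (if (zero? i) acc (recur (dec i) (* acc i)))))

;; ...and the same algorithm as plain recursion.
(defn fact-rec [n]
  (if (zero? n) 1 (* n (fact-rec (dec n)))))

(= (fact-loop 5) (fact-rec 5) 120) ; => true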
It’s not AI. It’s a piece of glass with all its knowledge instantiated. No knowledge creation at runtime. No software, even. Continue bastardizing the term “AI” until it means nothing at all.
What do you mean by “recursion epistemology”? And what by “coded into action”?
My hope is that it’s 100% genetic so one day we can in principle read its implementation. But I can’t say for sure. I think David thinks it’s part gene, part meme.
I suggest that on the day the first AGI is born, we consider Popper its grandfather.
@ChristopherCode @DavidDeutschOxf
Oh bummer, Twitter needs syntax highlighting :)
@ChristopherCode @DavidDeutschOxf
FWIW I consider a fn as mundane as #(println "Hello, " %) to be emergent in the sense that we can explain its behavior well in terms of println without reading println's implementation. I.e. functions may not need to be passed fn parameters explicitly to be emergent.
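For instance (a minimal sketch; the name greet is mine):

;; greet's behavior is explicable purely via println's contract
;; (print the args separated by spaces, then a newline) – no need
;; to read println's implementation.
(def greet #(println "Hello, " %))

(greet "world") ; prints: Hello,  world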
@ChristopherCode @DavidDeutschOxf
I think so. At the very least, any Turing-complete language that doesn't have explicit support for higher-order functions will offer some way of modeling them anyway – see e.g. the creative ways people came up with to express higher-order functions in older versions of Java.
Just released episode 8 of the podcast on artificial creativity, again heavily inspired by @DavidDeutschOxf. This one is about the central role of problems in the creative process.
Ah, yes, makes sense - criticizing a theory's structure based on some criterion is another thing we can do.
Not following exactly. What would be an example of a different but not opposing prediction?
Likewise, if I have a moral theory that "predicts" that one should lie, it will conflict with other moral theories.
So read "theory predicts" as "one can deduce from the theory".
Yeah, I don't mean just physical theories. If I have a mathematical theory of multiplication that "predicts" that 2*2=5, then it will conflict with the prevailing one.
Reviewing chapter 1 of BoI again. I can think of only one way to find a conflict between two theories: comparing their predictions and finding at least one case where they make differing predictions about the same thing.
Are there other ways?
@dela3499 @mizroba @DavidDeutschOxf
Don’t remember much about him. His common sense definition of truth sounds right to me.
@RealtimeAI
What do you mean by “in reverse”? Don’t things in dreams still happen in order?
I found Popper’s own writings about Tarski accessible. In his autobiography IIRC. Unless you’ve already read that. :)
@DavidDeutschOxf @RealtimeAI
What I mean is, if everything in the universe can be simulated with arbitrary precision on a Turing machine, yet the Turing machine's architecture is nothing like the simulated object's, then nothing depends on architecture to work, does it?
@DavidDeutschOxf @RealtimeAI
Is there anything at all that depends on architecture to work?
RT @DavidDeutschOxf:
@RealtimeAI
Because of computational universality, we can't possibly have cognitive-architecture-dependent qualia.
RT @DavidDeutschOxf:
@RealtimeAI
Indeed, before modern science, prevailing ideas interpreting qualia in terms of cognitive architecture wer…
@pmathies @RealtimeAI @DavidDeutschOxf
Why does doing math require Turing completeness? And why does having emotions not require it?
@pmathies @RealtimeAI @DavidDeutschOxf
May be an indicator of software similarities though.
@pmathies @RealtimeAI @DavidDeutschOxf
I also tend to think emotions are something that humans and (some) animals share. And they seem to have survival value (eg being scared of a predator).
My laptop could simulate all of them but is as different from my brain as it gets, so no hardware similarities needed.
@pmathies @RealtimeAI @DavidDeutschOxf
I don’t know how we could find out. Maybe neuroscientists already know the answer, idk. Ability to do math is irrelevant. A Turing-complete system on its own can’t do math either. It needs the knowledge of how to do math (certain kinds of it, e.g. addition).
@RealtimeAI @DavidDeutschOxf
Got it. What are some software similarities between people and other animals then?