I thought that debate was about free will.
This isn’t obvious. In particular, note that your “obvious example” violates the basic assumption all these attempts at a decision theory are using: that the payoff depends only on your choice, not on how you arrived at it.
Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.
Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.
The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.
The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don’t even need it...
He’s also written a book called “Thank God for Evolution,” in which he sprays God all over science to make it more palatable to Christians.
I dedicate this book to the glory of God. Not any “God” we may think about, speak about, believe in, or deny, but the one true God we all know and experience.
If he really is trying to deconvert people, I suspect it won’t work. They won’t take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.
How much more information is in the ontogenic environment, then?
Off the top of my head:
The laws of physics
9 months in the womb
The rest of your organs (maybe)
Your entire childhood...
These are barriers to developing Kurzweil’s simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That’s idiotic.
One of the facts about ‘hard’ AI, of the kind required for profitable NLP, is that even the coders who developed it don’t completely understand how it works. If they did, it would just be a regular program.
TLDR: this definitely is emergent behavior—it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.
Yuck.
The first two questions aren’t about decisions.
“I live in a perfectly simulated matrix”?
This question is meaningless. It’s equivalent to “There is a God, but he’s unreachable and he never does anything.”
it might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn’t sure so I used BB(2^(n+1)) in my example instead.
You can find it by emulating the Busy Beaver.
Oh.
I feel stupid now.
EDIT: Wouldn’t it also break even by predicting the next Busy Beaver number? “All 1’s except for BB(1...2^n+1)” is also only slightly less likely. EDIT: I feel more stupid.
Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don’t they? Suicide is, I think, a good indicator that someone is having a bad life.
Suicide rates start at 0.5 per 100,000 for ages 5–14 and rise to about 15 per 100,000 for seniors.
What about the agent using Solomonoff’s distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. This includes for example 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.
I don’t understand how the bolded part follows. The best explanation by round BB(2^n) would be “All 1’s except for the Busy Beaver numbers up to 2^n”, right?
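That candidate explanation can be made concrete as a toy predictor (a sketch, not anyone’s actual proposal): predict 0 exactly at the positions that are known Busy Beaver values, and 1 everywhere else. Here the small proven values Σ(1)..Σ(4) stand in for the prefix the predictor has already learned.

```python
# Toy version of the rule "all 1's except for the Busy Beaver
# numbers up to 2^n": predict 0 at indices that are known Sigma
# values, 1 elsewhere. Sigma(1)=1, Sigma(2)=4, Sigma(3)=6,
# Sigma(4)=13 stand in for the learned prefix.
KNOWN_BB = {1, 4, 6, 13}

def predict(i):
    """Predicted bit at position i under the 'best explanation' rule."""
    return 0 if i in KNOWN_BB else 1

print([predict(i) for i in range(1, 15)])
# 0 appears exactly at the known Sigma positions 1, 4, 6, 13
```

The disagreement in the thread is about what this rule does *beyond* the learned prefix: the predictor above says 1 at every later Busy Beaver position, which is exactly where it loses.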
Right, and...
A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), …, Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).
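The quoted fact is easy to witness directly: a program that simply hardcodes the proven values Σ(1)=1, Σ(2)=4, Σ(3)=6, Σ(4)=13 computes that finite prefix, even though no single program computes Σ(n) for every n. A minimal sketch:

```python
# A finite prefix of the Busy Beaver function is trivially
# computable: hardcode the known values. No single program
# computes Sigma(n) for all n, but this lookup suffices for
# the prefix it covers.
KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}

def sigma_prefix(n):
    """Return Sigma(n) for n in the known range; this lookup is
    the trivial program witnessing computability of the prefix."""
    return KNOWN_SIGMA[n]

print([sigma_prefix(n) for n in range(1, 5)])  # [1, 4, 6, 13]
```

This is also why the follow-up question matters: the universal prior has no trouble with any particular hardcoded prefix; the difficulty is only with the infinite sequence.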
So why can’t the universal prior use it?
Umm… what about my argument that a human can represent their predictions symbolically like “P(next bit is 1)=i-th bit of BB(100)” instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can’t incorporate this?
BB(100) is computable. Am I missing something?
But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
I’ve read the post. That excuse is actually relevant.
To cite one field that I’m especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.
I don’t see how Bayesian utility maximizers lack the “philosophical abilities” to discover these ideas. Also, the last one is only half true. The “wrong” link is about decision theory paradoxes, but a Bayesian utility maximizer would overcome these with practice.
astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process whose timing is essentially random and unprovoked.
But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don’t know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.
Does anyone else feel like this is just a weird remake of cached thoughts?
They remember being themselves, so they’d say “yes.”
I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant “why do you think a vitrified brain is conscious if a book isn’t.”
Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.
There’s a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive, with many years’ worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.
That’s orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.
CODT (Cop Out Decision Theory) : In which you precommit to every beneficial precommitment.