[ASoT] Probability Infects Concepts it Touches
Epistemic status: Random ranting about things I’m confused about. Hopefully this post makes you more confused about optimization/agency/etc.
A few parables of counterfactuals
What if 2+2 equaled 5?
American: Hmm, yeah. What if giving me two dollars and then another two left me with five dollars? I can imagine that.
Me: What! This can obviously never happen! This would break literally everything! It’s literally a logical contradiction!
What if quantum computers could solve NP-complete problems?
Me: Hmm, sounds cool, I can imagine-
Scott Aaronson: What! They obviously can’t because (unintelligible). You’d have to fundamentally alter how reality works, and even then things might not be salvageable.
What if there wasn’t a storm today in Texas?
Ancient Greek: I’m imagining Zeus wasn’t as angry today; perhaps Hera calmed him down.
Weather Expert: I knew three days ago that there’d be a storm. If there hadn’t been a storm today, the weather in the adjacent states over the past few days would necessarily have been different.
Superintelligence: Uh… There had to be a storm today...
What is optimization?
Suppose you view optimization as “pushing the world into low-probability states.” Consider the following:
Does an asteroid perform optimization? We can predict its path years in advance, so how “low-probability” was the collision?
Many here would intuitively agree that Elon Musk is a powerful optimizer (he has pushed the world into low-probability states). Yet a sufficiently powerful predictor wouldn’t have been surprised by Tesla and SpaceX succeeding (see the sketch after this list).
Does the Bible perform optimization (insofar as the world looks different in the counterfactual without it)? Or does the “credit” go to its authors? (The same goes for an imaginary being everyone believes in.)
Can we really say optimization is a thing in the territory? Or is it an artifact of the map?
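One way to make these bullets concrete is to count optimization in bits: how surprised a given observer is that the world landed at least this high in the optimizer’s preference ordering. The sketch below is my own toy illustration; the function name is made up and the probabilities are purely illustrative, not taken from any library or established formalism:

```python
import math

def optimization_power_bits(p_at_least_this_good: float) -> float:
    """Bits of optimization relative to one observer: the surprisal of
    the world ending up at least this high in the preference ordering."""
    return -math.log2(p_at_least_this_good)

# Same outcome (Tesla and SpaceX succeed), two observers:
# a 2008-era onlooker assigns it a tiny probability, while a
# sufficiently powerful predictor assigns it near-certainty.
# (Both numbers are made up for the example.)
print(optimization_power_bits(1e-6))  # onlooker: ~19.9 bits -- "powerful optimizer!"
print(optimization_power_bits(0.95))  # predictor: ~0.07 bits -- nothing happened?
```

On this accounting, “how much optimization happened” isn’t a fact about the territory at all: change the observer’s prior and the number changes, which is exactly the infection described below.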
The subjectiveness of probability “infects” all the concepts that use it. So do the limitations of the theory (like being compute-bounded, which is where logical counterfactuals come in).
For example, if optimization is pushing the universe into subjectively unlikely states, then all the confusion gets pushed into the word “pushing”[1].
This is what philosophy does to you. This is why bits of the universe shouldn’t think too hard about themselves.
I suspect I’ve been nerdsniped by a wrong question somehow. This line of thought doesn’t seem productive for aligning the AI… Curious to hear takes in the comments.
[1] Pun intended.