Meta-point: your communication fits the following pattern:
Crackpot: <controversial statement>
Person: this statement is false, for such-n-such reasons
Crackpot: do you understand that this is trivially true because of <reasons that are hard to connect to the topic>?
Person: no, I don’t.
Crackpot: <responds with a link to a giant blogpost filled with esoteric language and vague theory>
Person: I’m not reading this; it looks and smells like crackpottery.
The reason smart people find themselves in this pattern is that they expect short inferential distances, i.e., they see their argumentation not as vague esoteric crackpottery but as a set of very clear statements. They fail to put themselves in the shoes of the people who are going to read it, and they especially fail to account for the fact that readers already distrust them because they started the conversation with <controversial statement>.
On the object level, as stated, you are wrong. Observing a heuristic fail should decrease your confidence in that heuristic. You can argue that the update should be small, due to, say, measurement errors or strong priors, but the direction of the update should be strictly down.
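To make the direction-of-update claim concrete, here is a minimal Bayes-rule sketch (the events and numbers are illustrative, not taken from the thread). Write H for “the heuristic is sound” and F for “the heuristic was observed to fail”:

\[
P(H \mid F) = \frac{P(F \mid H)\,P(H)}{P(F \mid H)\,P(H) + P(F \mid \lnot H)\,P(\lnot H)} < P(H)
\quad \text{whenever} \quad P(F \mid H) < P(F \mid \lnot H).
\]

With illustrative numbers P(H) = 0.9, P(F | H) = 0.2 (measurement errors can make a sound heuristic look like it failed), and P(F | ¬H) = 0.3, the posterior is 0.18 / 0.21 ≈ 0.86: a small update, but strictly downward.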
The following responses from EY are more in the genre of “I ain’t reading this”, because he is using you as an example for other readers more than talking directly to you, with the following block:
What if objectionists had a correct thermodynamics-style heuristic that implied superintelligence/RSI is impossible, but which could not answer the question of where exactly it failed? Then the failure of objectionists doesn’t mean they were wrong.
We have to be willing to investigate the new evidence as it arrives, perform root cause analysis on why A but not B happened, and use this to update our models.
And the evidence I’ve gotten since then suggests something like “it is impossible to do something without assistance from a higher power”/“greater things can cause lesser things but not vice versa”, as a sort of generalization of the laws of thermodynamics.
If appropriate thought had been applied by a knowledgeable person back in 2004, maybe they could have taken this model and realized that nanotech violates this ordering constraint while AlphaProteo does not. Either way, we have the relevant info now.
And part 2:
The particular way the objectionists failed was that they didn’t give a concrete prediction that matched the way things played out.
Part 2 is what Eliezer said was false, but it’s not really central to my point (which is why I didn’t write much about it in the original thread), so it is self-sabotaging of Eliezer to zoom in on this rather than the actually informative point.
Meta-point: your communication fits the following pattern:
The reason smart people find themselves in this pattern is that they expect short inferential distances, i.e., they see their argumentation not as vague esoteric crackpottery but as a set of very clear statements. They fail to put themselves in the shoes of the people who are going to read it, and they especially fail to account for the fact that readers already distrust them because they started the conversation with <controversial statement>.
On the object level, as stated, you are wrong. Observing a heuristic fail should decrease your confidence in that heuristic. You can argue that the update should be small, due to, say, measurement errors or strong priors, but the direction of the update should be strictly down.
Can you fill in a particular example of me engaging in that pattern so we can address it in the concrete rather than in the abstract?
To be clear, I mean “your communication in this particular thread”.
Pattern:
<controversial statement>
<this statement is false>
<controversial statement>
<this statement is false>
<mix of “this is trivially true because” and “here is my blogpost with esoteric terminology”>
The following responses from EY are more in the genre of “I ain’t reading this”, because he is using you as an example for other readers more than talking directly to you, with the following block.
This statement had two parts. Part 1:
And part 2:
Part 2 is what Eliezer said was false, but it’s not really central to my point (which is why I didn’t write much about it in the original thread), so it is self-sabotaging of Eliezer to zoom in on this rather than the actually informative point.