I have no idea because I don’t understand it. It reads vaguely like a summary of crankery. Possibly I would need to read Forrest Landry’s work, but given that it’s also difficult to read...
This is honest.
Maybe it would be good to wait for people who can spend the time to consider the argument to come back on this?
I mentioned that Anders Sandberg has spent 6 hours discussing the argument in depth. Several others are looking into it as well.
What feels concerning is when people rely on surface-level impressions, such as the ones you cited, to judge an argument where the inferential gap is high.
It’s not good for the epistemic health of our community when insiders spread quick confident judgements about work by outside researchers. It can create an epistemic echo chamber.
...and I currently give 90%+ that it’s crankery, you must understand why I don’t
I do get this, given the sheer number of projects in AI Safety that may seem worth considering.
Having said that, the argument is literally about why AGI could not be sufficiently controlled to stay safe.
Even if your quick guess is that there is a 95% probability the reasoning is scientifically unsound, what about the remaining 5%?
What is the value of information given the possibility of discovering that alignment efforts will unfortunately not work out? How much would such a discovery change our actions, and the areas of action we would explore and start to understand better?
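A rough way to put the value-of-information point (a minimal sketch with purely illustrative symbols, not anyone's actual estimates): write \(p\) for your credence that the core reasoning is sound, \(\Delta V\) for how much our plans would change if it is, and \(C\) for the cost of a careful evaluation. Then the evaluation is worth doing whenever

\[
p \cdot \Delta V > C,
\]

and even \(p = 0.05\) clears that bar if \(\Delta V\) (redirecting a large share of alignment effort) dwarfs a few hours of reading and discussion.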
Historically, changes in scientific paradigms came from unexpected places. Arguments were often written in ways that felt weird and inscrutable to insiders (take a look at Gödel’s first incompleteness theorem).
How much should a community rely on people's first intuitions about whether a new, supposedly paradigm-shifting argument is crankery or not?
Should the presentation of a formal argument (technical proof) be judged on the basis of social proof?