This is a great idea, Kaj; thanks for taking the initiative.
As noted by others, one issue with the AI Risks chapter is that it attempts to cover too much ground. I would suggest starting with just hard take-off, or local take-off, and presenting a focused case for that, without also getting into the FAI questions. This would also cut back on duplication of effort, since SIAI folk were already planning to submit a paper for that issue on “machine ethics for superintelligence” (refined from work done for a recent conference), which will discuss the FAI problem.
P.S. If people would like to volunteer to review drafts of those papers in a few weeks to give feedback before they are submitted, that would be much appreciated. (You can email me at myfirstname AT mylastname at gmail)
Is the first AT there to really confuse bots, or am I missing something technical?
I think it was supposed to be a “dahtt.”
Thanks for the heads-up. If there’s already an FAI paper in the works, then yes, it’s certainly better for this one to concentrate solely on the FOOM aspect.