Anonymous: I’d hire all AI researchers to date to work under Eliezer and start seriously studying to be able to evaluate myself whether flipping the “on” switch would result in a friendly singularity.
(emphasis mine)
I doubt this is the way to go. I want a medium-sized, talented, and rational team who seriously care, not every AI programmer in the world who smells money. I'd bring Eliezer a blank cheque and listen to his arguments, and those of the people he trusts, for its best use; though he'd have to convince me where we disagreed, he seems good at that.
Also, even after years of studying for it, I wouldn't trust myself, or anyone else for that matter, to make that switch-on decision alone.