I would love to see you say why you consider these bad ideas. Is it that such AIs could obviously be unaligned themselves, or is it more that these assistants would need a complete model of human values to be truly useful?
John’s Why Not Just… sequence offers somewhat rough takes on a few of them (though I think many are not written up very comprehensively).