Nice article! I can’t find anything I disagree with, and especially like the distinction between enhancement, merging and alignment aid.
Also a good point about the grounding in value learning. Outside of value learning, perhaps one won’t need to ground BCI signals in actual sentiments? If we decide to focus more on human imitation, the raw signals alone might be enough. Or we could learn to extract some representation of inconsistent proto-preferences from the BCI data and then apply methods to make them consistent (though that might require a much more detailed understanding of the brain).
There’s also a small typo where you credit Anders “Samberg” instead of “Sandberg”, unless there are two researchers with very similar names in this area :-)
Fixed the “Samberg” typo, thanks!