I’m excited to see more of this. As the field grows (in funding, people, etc.), there seems to be a lot more room to expand the portfolio of bets on alignment approaches, and brain-based methods seem interesting enough to allocate to.
I think I’ve been underwhelmed by past progress in this direction, but that doesn’t preclude someone coming in and finding a tractable angle to start grinding away at.
Things I’m most interested in (personally and selfishly):
How can this suggest alignment approaches that are intractable or infeasible under other framings/directions?
What neuroscience research does this suggest, such that its results would be valuable contributions to alignment?
What neuroscience tools would allow us to run new experiments whose results would be valuable contributions to alignment?
What high-level overview of {brains, brain-like-AGI, brain-like-AGI alignment} do you wish most AI alignment researchers knew?
What does it look like for new people to join this research direction? How would they know they’re a good fit?
Things I’m not that interested in (personally; things I’ve been unimpressed by in the past):
Ex post facto explanations of how some deep learning phenomenon is like some neurological phenomenon
AGI proposals that are just about building digital minds (and not about aligning them or addressing alignment risks)
Anyways, I should probably just sit tight and wait for the rest of it for now.