I do. More people doing detailed moral psychology research (such as Jonathan Haidt’s work), or moral philosophy with the aim of understanding what procedure we would actually want followed, would be amazing.
Research into how to build a powerful AI is probably best not done in public, because it makes it easier to make unsafe AI. But there’s no reason not to engage as many good researchers as possible on moral psychology and meta-ethics.
Research into how to build a powerful AI is probably best not done in public...
Is the SIAI concerned with the data security of its research? Is the latest research saved unencrypted on EY’s laptop and shared among all SIAI members? Could a visiting fellow just walk into the SIAI house, plug in a USB stick, and run off with the draft for a seed AI? These questions arise once you draw a distinction between yourselves and the “public”.
But there’s no reason not to engage as many good researchers as possible on moral psychology and meta-ethics.
Can that research be detached from decision theory? Since you’re working on solutions applicable to AGI, is it actually possible to separate the mathematical formalism of an AGI’s utility function from the fields of moral psychology and meta-ethics? In other words, can you learn a lot by engaging with researchers if you don’t share the math? That is why I asked whether the work can effectively be subdivided if you are concerned with security.
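To make the separation concrete: in a standard expected-utility formalism, the optimization machinery is indifferent to which utility function gets plugged into it. Here is a minimal sketch of that separation (Python, with purely illustrative action and outcome labels and probabilities; none of this reflects SIAI's actual work):

```python
from typing import Callable, Dict

# A toy "world model": for each action, a probability distribution over outcomes.
# The labels and probabilities here are purely illustrative.
WorldModel = Dict[str, Dict[str, float]]

def best_action(model: WorldModel, utility: Callable[[str], float]) -> str:
    """Pick the action that maximizes expected utility.

    The decision-theoretic machinery lives entirely in this function;
    the *content* of the agent's values lives entirely in `utility`.
    """
    def expected_utility(action: str) -> float:
        return sum(p * utility(outcome)
                   for outcome, p in model[action].items())
    return max(model, key=expected_utility)

# Two different value systems plugged into the same formalism:
model = {
    "act_a": {"outcome_1": 0.7, "outcome_2": 0.3},
    "act_b": {"outcome_1": 0.2, "outcome_2": 0.8},
}
prefers_1 = lambda o: 1.0 if o == "outcome_1" else 0.0
prefers_2 = lambda o: 1.0 if o == "outcome_2" else 0.0

print(best_action(model, prefers_1))  # act_a
print(best_action(model, prefers_2))  # act_b
```

On this toy picture, moral psychology and meta-ethics inform what `utility` ought to be, while decision theory supplies the surrounding machinery; whether the two can actually be researched separately is exactly the question above.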
Research into how to build a powerful AI is probably best not done in public
I find this dubious. Has this belief been explored in public on this site?
If AI research is completely open and public, then more minds and computational resources are available to analyze safety. In addition, in the event that a design actually does work, it is far less likely to confer any significant first-mover advantage.
Making SIAI’s research public and open also appears to be nearly mandatory for proving progress and joining the larger scientific community.