Rather, my point is that we need lots of smart people working on these meta-ethical questions.
I’m curious whether the SIAI shares that opinion. Is Michael Vassar trying to hire more people or is his opinion that a small team will be able to solve the problem? Can the problem be subdivided into parts? Is it subject to taskification?
I do. More people doing detailed moral psychology research (such as Jonathan Haidt’s work), or moral philosophy with the aim of understanding what procedure we would actually want followed, would be amazing.
Research into how to build a powerful AI is probably best not done in public, because it makes it easier to make unsafe AI. But there’s no reason not to engage as many good researchers as possible on moral psychology and meta-ethics.
Research into how to build a powerful AI is probably best not done in public...
Is the SIAI concerned with the data security of its research? Is the latest research saved unencrypted on EY’s laptop and shared among all SIAI members? Could a visiting fellow just walk into the SIAI house, plug in a USB stick, and run off with the draft for a seed AI? Those questions arise when you make a distinction between yourselves and the “public”.
But there’s no reason not to engage as many good researchers as possible on moral psychology and meta-ethics.
Can that research be detached from decision theory? Since you’re working on solutions applicable to AGI, is it actually possible to separate the mathematical formalism of an AGI’s utility function from the fields of moral psychology and meta-ethics? In other words, can you learn a lot by engaging with researchers if you don’t share the math? That is why I asked whether the work can effectively be subdivided if you are concerned with security.
Research into how to build a powerful AI is probably best not done in public
I find this dubious—has this belief been explored in public on this site?
If AI research is completely open and public, then more minds and computational resources will be available to analyze safety. In addition, in the event that a design actually does work, it is far less likely to have any significant first mover advantage.
Making SIAI’s research public and open also appears to be nearly mandatory for proving progress and joining the larger scientific community.
I tend to think that the broadest issues, such as those of meta-ethics, may be discussed by a wider professional community, though of course most meta-ethicists will have little to contribute (divine command theorists, for example). It may still be the case that a small team is best for solving more specific technical problems in programming the AI’s utility function and proving that it will not reprogram its own terminal values. But I don’t know what SIAI’s position is.
It may still be the case that a small team is best for solving more specific technical problems in programming the AI’s utility function and proving that it will not reprogram its own terminal values.
My intuition leads me to disagree with the suggestion that a small team might be better. The only conceivable (to me) advantage of keeping the team small would be to minimize the pedagogical effort of educating a large team on the subtleties of the problem and the technicalities of the chosen jargon. But my experience has been that investment in clarity of pedagogy yields dividends in your own understanding of the problem, even if you never get any work or useful ideas out of the yahoos you have trained. And, of course, you probably will get some useful ideas from those people. There are plenty of smart folks out there. Whole generations of them.
My intuition leads me to disagree with the suggestion that a small team might be better.
People argue that small is better for security reasons. But one could just as well argue that large is better for security reasons, because there is more supervision and competition. Would you rather trust 5 people (likely friends) or a hundred strangers working for fame and money? After all, we’re talking about a project that will result in the implementation of a superhuman AI that decides the future of the universe. A handful of people might do anything, regardless of what they are signaling, but a hundred people are much harder to control. So the security argument runs both ways. The question is what will maximize the chance of success, and here I agree that it will take many more people than are currently working on the various problems.
I agree. But, with Luke, I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI. Only the second of those two questions requires security—there is no reason not to pursue Friendliness theory openly.
I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI.
That is probably not true. There may well be some differences, though. For instance, it is hard to see how the corner cases in decision theory that are so much discussed around here have much relevance to the problem of actually constructing a machine intelligence—UNLESS you want to prove things about how its goal system behaves under iterative self-modification.
People argue that small is better for security reasons. But one could just as well argue that large is better for security reasons, because there is more supervision and competition.
The “smaller is better” idea seems linked to “security through obscurity”—a common term of ridicule in computer security circles.
The NSA manage to get away with some security through obscurity—but they are hardly a very small team.
Is Michael Vassar trying to hire more people or is his opinion that a small team will be able to solve the problem?
For the sort of problem that can be safely researched publicly, SIAI need not hire people to get them to work on it. It can also engage the larger academic community and get researchers interested in finding practical answers to these questions.
Has there ever been a time when philosophical questions were solved by a committee?
It could be argued that they are never usefully answered in any other way. Isolated great thinkers are rare in most fields. They are practically non-existent in philosophy.