I have no problem with a billion dollars spent on friendly AI research. But that doesn’t mean I agree that the SIAI needs a billion dollars right now, or that the current evidence is enough to tell people to stop researching cancer therapies or creating educational videos about basic algebra. I don’t think we know enough about risks from AI to justify such advice. I also don’t think that we should all become expected utility maximizers, because we don’t know enough about economics, game theory, and decision theory, and especially about human nature and the nature of discovery.
This is the part I’d like to focus on. Restating that position as I understand it: you are unconvinced that SIAI is important to fund; you will not pay them to convince you; you think it is perfectly fine for other people to fund them; and you will follow the area to see whether they provide convincing evidence in the future. Is that a fair characterization?
Almost. I think it is important that the SIAI continues to receive at least as much as it did last year. If the SIAI’s sustainability were at stake, I would contribute money; I just don’t know how much. I would probably devote some time to thinking about the whole issue, more thoroughly than I have until now. That hints at a general problem: I think many people lack the initial incentive necessary to take the whole topic seriously in the first place, seriously enough to invest the time and resources required to analyze the available data sufficiently.
I recently hinted at some problems that need to be addressed in order to convince me that the SIAI needs more money. I am currently waiting for the “exciting developments” mentioned in the subsequent comment thread to take place.
Another problem is the secretive approach the SIAI seems to subscribe to. I am not convinced that secrecy is the right policy, and I don’t have enough confidence to simply take their word for it when they say they are making progress. They have to figure out how to convince people that actual progress is being made, or at least attempted, without revealing too much detail. They also have to explain whether they expect Eliezer Yudkowsky to be able to solve friendly AI on his own, and if not, how they are going to guarantee the “friendliness” of future employees.
I think the thread started by timtyler is more representative of the opinion of most people (if they knew about the SIAI) than of those members of Less Wrong who are already sold. People here seem overly confident in what they are told without asking for further evidence. Not that I care about the AI box experiment; even prison guards can be persuaded by inmates to let them out of jail. But as timtyler said, the secretive approach employed by the SIAI, “don’t ask, don’t tell”, isn’t going to convince many people any time soon. I doubt actual researchers would just trust the SIAI if it claimed to have proved something without providing any evidence supporting the claim.