Devising a procedure to figure out what to do in arbitrary situations is obviously even harder than creating a human-equivalent AI, so I wouldn’t wish this problem upon myself! First I’d like to see an exhaustive list of reasons for action that actual people use in ordinary situations that feel “clear-cut”. Then we can look at this data and figure out the next step.
Yes, blowing up the universe with an intelligence explosion is much easier than preserving human values.
Sounds like an excuse to postpone figuring out the next step. What do you expect to see, and what would you do depending on what you see? “List of reasons for action that actual people use in ordinary situations” doesn’t look useful.
Thinking you can figure out the next step today is unsubstantiated arrogance. You can't write a program that will win the Netflix Prize if you don't have the training dataset. Yeah, I guess a superintelligence could write it blindly from first principles, using just a textbook on machine learning, but seriously, WTF.
With the Netflix Prize, you need training data of the same kind as the data you want to predict. But predicting what stories people will tell in novel situations when deciding how to act is not our goal.
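To make the analogy concrete, here is a minimal sketch (with made-up ratings, not actual Netflix Prize data) of a biased-baseline recommender: global mean plus per-user and per-item offsets. Every parameter it has comes from observed ratings of exactly the kind it must later predict, which is the point of the analogy: no dataset, no predictor.

```python
# Hypothetical illustration: a Netflix-Prize-style baseline predictor.
# Every number the model knows is estimated from (user, item, stars) triples,
# i.e. from data of the same kind it is asked to predict.
from collections import defaultdict

def fit_baseline(ratings):
    """ratings: list of (user, item, stars). Returns a predict(user, item) fn."""
    mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean rating
    user_dev, item_dev = defaultdict(list), defaultdict(list)
    for u, i, r in ratings:
        user_dev[u].append(r - mu)
        item_dev[i].append(r - mu)
    b_u = {u: sum(d) / len(d) for u, d in user_dev.items()}  # user bias
    b_i = {i: sum(d) / len(d) for i, d in item_dev.items()}  # item bias
    def predict(user, item):
        # For unseen users/items the model falls back to zero deviation:
        # with no data about them, it can say nothing beyond the global mean.
        return mu + b_u.get(user, 0.0) + b_i.get(item, 0.0)
    return predict

# Made-up ratings; the real contest supplied ~100M such triples.
train = [("ann", "m1", 5), ("ann", "m2", 3), ("bob", "m1", 4), ("bob", "m3", 2)]
predict = fit_baseline(train)
```

The fallback behavior in `predict` is the analogy in miniature: for a user and movie the model has never seen, the best it can emit is the global mean of the training set.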
Why not? I think you could use that knowledge to design a utopia that won’t make people go aaaargh. Then build it, using AIs or whatever tools you have.
The usual complexity-of-value considerations. The meaning of the stories (that is, specifications detailed enough to actually implement, capturing the way things should be rather than merely the way a human would elaborate them) is not given just by the stories' text, and once you can figure out the way things should be, you no longer need human-generated stories.
This is a different kind of object, and having lots of stories doesn’t obviously help. Even if the stories would serve some purpose, I don’t quite see how waiting for an explicit collection of stories is going to help in developing the tools that use them.