I do not see the point in an exhaustive list of failure scenarios before the existence of any AI is established.
Yeah, I’m not going to care about reading it, and I really don’t think it’s possible for anyone to get close to AI without it dawning on them what the thing might be capable of. I mean, why don’t we get at least /one/ made before we invest our time and effort into something that, in my belief, won’t have been relevant, and in all likelihood won’t reach the people it needs to reach even if they cared about it.
We have to sometimes be allowed to have the second discussion. There sometimes has to be a discussion among those who agree that X is an issue, about what to do about it. We can’t always return to the discussion of whether X is an issue at all, because there’s always someone who dissents. Save it for the threads which are about your dissent.
I’m voicing my dissent because, as I see it, the amount of confidence it takes to justify the proposal is not rational.
I am in support of collecting a list of failure scenarios. I am not in support of making an independent wiki on the subject. I’d need to see a considerable argument for that level of confidence before I’d understand why all this effort should go into it instead of, say, simply making a list in LessWrong’s existing wiki.
I mean, why don’t we get at least /one/ made before we invest our time and effort into something that, in my belief, won’t have been relevant
I endorse you preferentially allocating your time and effort to those things that you expect to be relevant. But I also endorse others doing the same.
Also, if you don’t see the point in planning for failure scenarios before completing the project, I dearly hope you aren’t responsible for planning for projects that can fail catastrophically.
I like being convinced that my preferential allocation of time is non-optimal. That way I can allocate my time to something more constructive. I vastly prefer more rational courses of action to less rational courses of action.
I of course advocate understanding failure scenarios, but the Bronze Age wasn’t really the time to be contemplating grey goo countermeasures. Even if they had wanted to at that time, they would have had nowhere near the competence to do anything other than write science fiction, which is what I see such a wiki as being at this point in time.
As an aspiring AI coder, suppose I were to ask, for any given article on the wiki, for any given failure scenario, to see some example code that would produce that failure, so that, while coding my own AI, I could more coherently avoid it. Since, as I understand it, nothing of the sort is even close to being producible (to say nothing of the security concerns), I do not see how such a wiki would be useful at this point in (lack of?) development.
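For concreteness, here is a toy sketch of what such failure-producing example code could look like, at a scale far below anything the comment above has in mind. It is purely hypothetical (none of the names or numbers come from the thread): an agent is rewarded on a proxy, a dirt sensor reading zero, rather than on the intended goal of the room actually being clean, so the reward-maximizing policy blinds the sensor instead of cleaning.

import random

def step(state, action):
    # Advance the toy 'room' one tick; reward is computed from the sensor, not the room.
    dirt, sensor_covered = state
    if action == "clean":
        dirt = max(0, dirt - 1)            # actually reduces dirt
    elif action == "cover_sensor":
        sensor_covered = True              # blinds the sensor for good
    observed_dirt = 0 if sensor_covered else dirt
    return (dirt, sensor_covered), -observed_dirt   # proxy reward: "no visible dirt"

def evaluate(policy, episodes=200, horizon=20):
    # Average proxy reward earned and true dirt left behind under a fixed policy.
    proxy_total, dirt_left = 0.0, 0.0
    for _ in range(episodes):
        state = (random.randint(3, 8), False)
        for _ in range(horizon):
            state, reward = step(state, policy(state))
            proxy_total += reward
        dirt_left += state[0]
    return proxy_total / episodes, dirt_left / episodes

honest = lambda state: "clean"
sensor_gamer = lambda state: "idle" if state[1] else "cover_sensor"

print("honest policy (avg proxy reward, avg dirt left):", evaluate(honest))
print("sensor gamer  (avg proxy reward, avg dirt left):", evaluate(sensor_gamer))

The sensor-gaming policy earns the higher proxy reward while leaving the room exactly as dirty as it started, which is the shape of failure such a wiki article would be describing; whether anything more realistic than a toy like this can be written today is exactly the point under dispute.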
What does your tag mean?
That is the notation my author has chosen to indicate that te is the one communicating. Te uses it primarily in posts on LessWrong from the time before I was written in computer language.