Please accept my minor criticism as an offering of peace and helpfulness: you seem to be missing the trees for the forest. If something is genuinely safe, then meticulous and clear thinking should indicate its safety to all right-thinking people. If something is genuinely dangerous, then meticulous and clear thinking should indicate its danger to all right-thinking people.
Eventually. That can take significant time and a lot of work, which SIAI simply has not done.
The issue is that SIAI simply lacks the qualifications or talent to do anything that improves the survival of mankind, regardless of whether artificial intelligences are safe or unsafe. (I am not saying they don’t have any talents; they are talented writers. I just don’t see evidence of more technical talent.) Furthermore, right thinking takes time, and that time is not substantially shorter than the time needed to come up with the artificial intelligence itself.
The situation is even worse if I assume that artificial intelligences could be unsafe. Once we get closer to the point of creating such an artificial intelligence, a valid inference of danger may arise. That inference will need to be disseminated, and people will need to be convinced to take very drastic measures, and the message will be marginalized by its similarity to SIAI, who advocate the same actions without having anything resembling a valid inference. The impact of SIAI is even worse if the risk exists.
When I imagine what it is to be this wrong, I imagine people who derive wireheaded happiness from their misguided effort, at everyone else’s expense: people with a fault that lets them fall into a happy death spiral.
And the burden of proof is not upon me. There exists no actual argument for the danger. There exists only a sequence of letters that triggers fallacies and relies on map-compression issues in people who don’t have a sufficiently detailed map of the topic. (And this sequence of letters works best on people with the least knowledge of the topic.)