Thank you, GradientDissenter. I am pleased and honored that you thought to invite me, and in the normal course of events would have jumped at the invitation. I think I could add some value to the conversation about X-risks.
Unfortunately, I am recovering from major surgery and don’t have the physical stamina to do a conference yet. If your event had been scheduled even a month later I think I would be able to give you a different answer.
If you run any future events of this kind I would be very interested in attending.
Of course the word “might” is doing a lot of work here! Because there is no guaranteed happy solution, the best we can do is steer away from futures we absolutely know we do not want to be in, like a grinding totalitarianism rationalized by “We’re saving you from the looming threat of killer AIs!”
“At least with the current system, corporations are able to test models before release.” The history of proprietary software does not inspire any confidence at all that this will be done adequately, or even at all; in a fight between time-to-market and software quality, getting there firstest almost always wins. It’s not reasonable to expect this to change simply because some people have strong opinions about AI risk.