Oh snap, I read and wrote “sarcasm” but what I was trying to do was satire.
Top-down control is less fragile than ever, thanks to our technology, so I really do fear people reacting to AI the way they generally do to terrorist attacks: with Patriot Acts and other “voluntary” surrenders of freedom.
I’ve had people I respect literally say “maybe we need to monitor all compute resources, Because AI.” They suggest we register every GPU and TPU chip so we Know What People Are Doing With Them, or somehow add watermarks to all “AI” output. Just nuts, imho, but I fear it sounds plausible to some, and perhaps many.
Those are the ideas that frighten me. Not AI, per se, but what we would be willing to give up in exchange for imaginary security from “bad AI”.
As a side note, I guess I should look for some “norms” posts here and see if it’s, like, customary to give karma upvotes to anyone who comments, how those differ from agree/disagree votes on comments, etc. Thanks for giving me the idea to look for that info; I hadn’t put much thought into it.
The main problem with satire is Poe’s Law. There are people sincerely advocating for more extreme positions in many respects, so it is difficult to write a satirical post that is distinguishable from those sincere positions even after being told that it is satire. In your case I had to get about 90% of the way through before suspecting that it was anything other than an enthusiastic but poorly written sincere post.
Bwahahahaha! Lord save us! =]
seems like a very reasonable concern to me. how do you build an anti-authority, voluntarist information-sharing pattern? it does seem to me that a key part of ai safety is going to be the ability to decide to retain strategic ambiguity. if anything, strongly safe ai should make it impossible for large monitoring networks to work, by construction!
Right? A lack of resilience is already a problem we face. It seems silly to aim for something that could plausibly cascade into the very problems people fear, in an attempt to avoid those problems in the first place.