There’s no proof that superintelligence is even possible. The idea of an AI that recursively rewrites itself into godlike intelligence isn’t supported by any evidence.
There is just so much hand-wavy magical thinking going on around the supposed superintelligent-AI takeover.
The fact is that manufacturing networks are damn fragile. Power grids too. Any rogue AI is still limited by these physical things. Oh, it’s going to start making its own drones? Cool, so it’s running thirty mines, various machine shops, an oil refinery, and the rest of the supply chain required just to make a spark plug?
One tsunami in the RAM manufacturing district and that AI is crippled. Not to mention that so much of the necessary information doesn’t exist online: many things were never patented, and many processes are opaque.
We do in fact have multiple tries to get AI “right”.
We need to stop giving future AI magical powers. It cannot suddenly crack all cryptography instantly; brute-forcing modern key lengths isn’t computationally feasible no matter how clever you are.
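A rough back-of-the-envelope sketch of why (the 10^18 guesses-per-second figure is an assumption picked for illustration, roughly the total throughput of today’s fastest supercomputers):

```python
# Back-of-the-envelope: brute-forcing a single 128-bit key.
# The guess rate below is an assumed, deliberately generous number.
keyspace = 2 ** 128              # possible 128-bit keys (~3.4e38)
guesses_per_second = 10 ** 18    # assumed attacker speed
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.2e} years")      # ~1.1e13 years, hundreds of times the age of the universe
```

Being smarter doesn’t change that arithmetic; only an actual mathematical break of the cipher itself would, and nobody has shown one exists.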
The CCP once ran a campaign inviting criticism (the Hundred Flowers Campaign) and then purged everyone who engaged.
I’d be super wary of participating in threads like this one. A year ago I participated in a similar thread and got hit with the rate-limit ban.
If you talk about the very valid criticisms of LessWrong (which you can only find off LessWrong) then expect to be rate limited.
If you talk about some of the nutty things the creator of this site has said, things that amount to “AI will use Avada Kedavra”, then expect to be rate limited.
I find it really sad, honestly. The groupthink here is restrictive, bound up in verbose arguments that open by claiming you haven’t read the site, or that certain subjects are settled and must not be discussed.
Rate limiting works to push away anyone even slightly outside the narrow view.
Quite frankly, I think the creator of this site is like a failed L. Ron Hubbard: he never succeeded with his sci-fi, so he turned to playing prophet of doom instead.
But hey, don’t talk about the weird stuff he has said. Don’t talk about the magic assumption that AI will suddenly be able to crack all encryption instantly.
I stopped participating because of the rate limit. I don’t think a read of my comments shows I was participating in bad faith or out of ignorance.
I just don’t fully agree...
Forums that do this eventually die. This place will too, because no new advances can be made so long as there’s a body of so-called settled knowledge you’re required to agree with just to start participating.
Better conversations are happening elsewhere and have been for a while now.