https://www.wattpad.com/myworks/263500574-singularity-soon
Flaglandbase
So I was banned from commenting on LessWrong . . .
My whole life I’ve been ranting about how incomprehensibly evil the world is. Maybe I’m the only one who thinks things shouldn’t be difficult in the way they are.
Evil is the stuff that doesn’t work but can’t be avoided. A type of invincible stupidity.
For example, software is almost supernaturally evil. I’ve been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, and unpredictable; above all, by the freezing and crashing.
The unusability of software is a kind of man-made implacability. It can’t be persuaded or reasoned with. Omnimalevolence as an emergent property.
Software is just a microcosm of society.
The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I’m not allowed to link either).
There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.
When I was banned from commenting on LessWrong (after two requests to be reinstated), they said such talk was “weird”. Weird does NOT automatically mean wrong!
Studying the evilness of human-designed interfaces might reveal why the world has always sucked.
Seemingly simple things (like easy interfaces) are still absolutely impossible today. Only the illusion exists, and not for me.
Does that mean that seemingly impossible things (like an intelligence explosion) will turn out to be simple reality tomorrow?
Maybe. Heck PROBABLY. But maybe not.
The fact that it’s so difficult to make even the simplest systems not suck may mean that much larger systems won’t work either.
In fact, it’s certain that many unexpected things will go wrong before then.
The only way to get transhuman AIs to work MAY be by connecting many existing smaller systems, perhaps even including groups of humans.
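As a toy illustration of that last idea (my sketch in Python; the subsystems and their answers are invented), wiring small systems together can be as simple as asking several independent components the same question and taking a majority vote, so no single flaky part decides alone:

```python
# Toy majority-vote committee of small subsystems. Everything here is
# hypothetical; it only illustrates the "connect many smaller systems"
# idea from the post above.
from collections import Counter

def committee(answerers, question):
    """Return the most common answer among independent subsystems."""
    votes = Counter(ask(question) for ask in answerers)
    return votes.most_common(1)[0][0]

subsystems = [
    lambda q: "4",        # a small system that works
    lambda q: "4",        # another one that works
    lambda q: "banana",   # the usual software
]
print(committee(subsystems, "2 + 2?"))  # -> 4
```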
Flaglandbase’s Shortform
For the past week my Windows 10 box has been almost unusable, spending its days wasting kilowatt-hours and processor cycles downloading worse-than-useless malware “updates”, with no way to turn them off!
Evil is the most fundamental truth of the world. The Singularity cannot happen soon enough . . .
I just spent four hours trying to get a new cellphone to work (that others insist I should have), and failed totally.
There is something fantastically wrong with this shitplanet, but completely different than anyone is willing to talk about.
I didn’t realize there was an automatic threshold of total retaliation the moment Russia nukes Ramstein Air Base.
I guess simple text-based browsers, and websites that just show the minimal information you want in a way the user can control, are not cool enough; so we have all those EU regulations that “solve” a problem by making it worse.
If whoever is running Russia is suicidal, sure; but if they still want to win, it might make sense to use strategic weapons tactically to force the other side to accept a stalemate, right up to the end.
The highest-risk targets are probably the NATO airbases in Poland, Slovakia, and Romania used to supply and support Ukraine. There may also be nuclear retaliation against north German naval bases. They’re more likely to attack smaller American cities first before escalating.
The only thing more difficult than getting readers for your blog is getting readers for your fiction (maybe not on here).
If the universe is really infinite, there should be an infinite number of possible rational minds. Any randomly selected mind from that list should statistically be infinite in size and capabilities, since below any finite bound there are only finitely many possible minds, while endlessly many lie above it.
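To make that a bit more precise (my formalization; there is no uniform distribution over an infinite list, so a limit has to stand in for “randomly selected”): write M(n) for the number of possible minds of size at most n, and assume M(n) is finite for each n but grows without bound. Then a uniform draw from the minds of size at most N falls below any fixed bound k with vanishing probability:

```latex
% Sketch of the "a random mind is almost surely enormous" intuition.
% Assumption: M(n), the number of possible minds of size <= n, is
% finite for every n and M(n) -> infinity as n -> infinity.
\[
  P\bigl(\text{size} \le k \;\big|\; \text{uniform over minds of size} \le N\bigr)
  \;=\; \frac{M(k)}{M(N)} \;\longrightarrow\; 0
  \qquad \text{as } N \to \infty,\ \text{for every fixed } k.
\]
```

So “statistically infinite” here means: every finite size bound is exceeded with probability approaching one.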
Obviously, governments don’t believe in autonomous AI risk, only in the risk that AI can be used to invent more powerful weapons.
In the government’s case, that doubt may come from their experience that vastly expensive complex systems are always maximally dysfunctional, and require massive teams of human experts to accomplish a well-defined but difficult task.
Also, the fact that human minds (selected out of the list of all possible minds in the multiverse) are almost infinitely small implies that intelligence may become exponentially more difficult, if not intractable, as capacity increases.
This is a bit like how Scientology has tried to spread, but the E-hance is much better than the E-meter.
No reason to think he’s better or worse than other politicians, but he’s certainly very different.
In a world of almost omnimalevolent conformity, it’s strange to see the possibility that things could be different.
The biggest yet least discussed problem in the world today is the ever-tightening web of monstrously evil, defective, and barely usable interfaces, most notably software. Efforts to make UIs seem “simpler” on a lowest-common-denominator, super-shallow level are the greatest curse of the past two decades. Every attempt to even mention this problem here leads to a virtual shadowban. My proposed initial solution, requiring every program to ship a single-page text list of ALL its options (no submenus), triggers even more hate.
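To make the proposal concrete, here is a minimal sketch (mine, not any existing program; the menu contents are invented) of what “one page, no submenus” means in practice: flatten the nested menu tree a program ships with into a single alphabetized text list where every option is visible at once.

```python
# Flatten a nested menu tree into one flat, single-page text list.
# The menu below is a hypothetical example, not a real program's UI.

MENU = {
    "File": {"Open": None, "Export": {"PDF": None, "PNG": None}},
    "View": {"Zoom In": None, "Zoom Out": None},
}

def flatten(menu, path=()):
    """Yield every leaf option as a single 'Path > To > Option' line."""
    for name, sub in menu.items():
        if sub is None:
            yield " > ".join(path + (name,))
        else:
            yield from flatten(sub, path + (name,))

for line in sorted(flatten(MENU)):
    print(line)
# File > Export > PDF
# File > Export > PNG
# File > Open
# View > Zoom In
# View > Zoom Out
```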
The problem is that we’re going about it all wrong: we’re trying to solve it at the complicated end while it’s forbidden to look at the basics. Right now we live in a world with satanically complex and defective user interfaces at every level. The fact that “simple” software is allowed to be as bad as it is today is completely incomprehensible to me. In fact, most software is already worse than useless: like a runaway AI, but with zero capabilities.
My favorite paradigm-research notion is to investigate all the ways in which today’s software fails, crashes, lags, doesn’t work, or, most often, just can’t be used. This despite CPUs being theoretically powerful enough to run much better software than what is currently available: just the opposite of the situation people fear will arrive with AI.
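A minimal sketch of what that investigation could look like, assuming nothing beyond the failure modes named above (the harness, its names, and its categories are all mine): run a target function under observation and bucket the outcome as working, crashing, hanging, or producing wrong output.

```python
# Crude failure-mode classifier for the research notion above.
# Hypothetical sketch: real software fails in far messier ways, but
# this at least makes failure rates measurable instead of anecdotal.
import multiprocessing

def classify_run(target, args=(), expected=None, timeout_s=5.0):
    """Return one of: 'works', 'crash', 'hang', 'wrong-output'."""
    pool = multiprocessing.Pool(1)
    try:
        result = pool.apply_async(target, args).get(timeout=timeout_s)
    except multiprocessing.TimeoutError:
        return "hang"          # lagged past the deadline
    except Exception:
        return "crash"         # raised or died outright
    finally:
        pool.terminate()
    if expected is not None and result != expected:
        return "wrong-output"  # ran, but the answer is bad
    return "works"

if __name__ == "__main__":
    print(classify_run(divmod, (7, 2), expected=(3, 1)))  # -> works
    print(classify_run(divmod, (7, 0)))                   # -> crash
```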
Strange that change isn’t recognized as dangerous, because change can be extremely bad: if even a single thing breaks down, life can become horrible, even if that thing can or could be fixed.
If there is a way for data structures to survive forever, it would be something we can’t imagine, like three leptons orbiting each other, storing data in their precise separation distances, where it would take a godzillion eons to generate a single pixel of an ancient cat picture.
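For scale, some back-of-envelope arithmetic under that speculative premise (the encoding rule is my assumption, not anything from the post): if a separation distance known to relative precision eps stores log2(1/eps) bits, a single 24-bit pixel already requires the distance to be resolved to one part in about seventeen million.

```python
# Back-of-envelope arithmetic for the speculative "data stored in
# separation distances" idea. Assumption (mine): a distance known to
# relative precision eps encodes log2(1/eps) bits.
import math

def bits_in_distance(eps: float) -> float:
    return math.log2(1.0 / eps)

pixel_bits = 24                      # one RGB pixel of the cat picture
eps_needed = 2.0 ** -pixel_bits      # required relative precision
print(bits_in_distance(eps_needed))  # -> 24.0
print(int(1 / eps_needed))           # -> 16777216 (one part in ~17 million)
```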
I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.