I don’t buy it. Lots of species have predators and have had them for a long time, but very few species have intelligence. It seems more likely that most of our intelligence is due to sexual selection, a Fisherian runaway that accidentally focused on intelligence instead of brightly colored tails or something.
cousin_it
An ASI project would be highly distinguishable from civilian AI applications and not integrated with a state’s economy
Why? I think there’s a smooth ramp from economically useful AI to superintelligence: AIs gradually become better at many tasks, and these tasks help more and more with improving AI in turn.
For cognitive enhancement, maybe we could have a system like “the smarter you are, the more aligned you must be to those less smart than you”? So enhancement would be available, but would make you less free in some ways.
I think the problem with WBE is that anyone who owns a computer and can decently hide it (or fly off in a spaceship with it) becomes able to own slaves, torture them and whatnot. So after that technology appears, we need some very strong oversight—it becomes almost mandatory to have a friendly AI watching over everything.
What about biological augmentation of intelligence? I think if other avenues are closed, this one can still go pretty far and make things just as weird and risky. You can imagine biological self-improving intelligences too.
So if you’re serious about closing all avenues, it amounts to creating a god that will forever watch over everything and prevent things from becoming too smart. It doesn’t seem like such a good idea anymore.
Sure. But in an economy with AIs, humans won’t be like Bob. They’ll be more like Carl the bottom-percentile employee who struggles to get any job at all. Even in today’s economy lots of such people exist, so any theoretical argument saying it can’t happen has got to be wrong.
And if the argument is quantitative—say, that the unemployment rate won’t get too high—then imagine an economy with 100x more AIs than people, where unemployment is only 1% but all people are unemployed. There’s no economic principle saying that can’t happen.
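The arithmetic is easy to check with toy numbers (all hypothetical):

```python
# Toy model: 1,000 humans and 100x as many AI workers.
humans = 1_000
ais = 100 * humans
workforce = humans + ais

# Suppose every AI is employed and every human is unemployed.
unemployed = humans
rate = unemployed / workforce
print(f"{rate:.2%}")  # just under 1%
```

So the headline unemployment rate stays below 1% even though literally every human is out of work.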
That, incidentally, implies that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.
Less skilled labor has a well-paying niche today?
Yeah, on further thought I think you’re right. This is pretty pessimistic, then: AI companies will find it easy to align AIs to money interests, and the rest of us will be in a “natives vs the East India Company” situation. More time to spend on alignment then matters only if some companies actually try to align AIs to something good instead, and I’m not sure any companies will do that.
I wonder how hard it would be to make the Sun stop shining? Maybe the fusion reaction could be made subcritical by adding some “control rod” type stuff.
Edit: I see other commenters also mentioned spinning up the Sun, which would lower the density and stop the fusion. Not sure which approach is easier.
I guess the opposite point of view is that aligning AIs to AI companies’ money interests is harmful to the rest of us, so it might actually be better if AI companies didn’t have much time to do it, and the AIs got to keep some leftover morality from human texts. And WBE would enable the powerful to do some pretty horrible things to the powerless, so without some kind of benevolent oversight a world with WBE might be scary. But I’m not sure about any of this, maybe your points are right and mine are wrong.
Huh? Environmentalism means letting things work as they naturally worked, not changing them to be “reversible” or something else.
There have been many controversies about the World Bank. A good starting point is this paragraph from Naomi Klein’s article:
The truth is that the bank’s credibility was fatally compromised when it forced school fees on students in Ghana in exchange for a loan; when it demanded that Tanzania privatise its water system; when it made telecom privatisation a condition of aid for Hurricane Mitch; when it demanded labour “flexibility” in Sri Lanka in the aftermath of the Asian tsunami; when it pushed for eliminating food subsidies in post-invasion Iraq. Ecuadoreans care little about Wolfowitz’s girlfriend; more pressing is that in 2005 the World Bank withheld a promised $100m after the country dared to spend a portion of its oil revenues on health and education. Some anti-poverty organisation.
Whether she’s right or wrong, I like how the claims are laid out nicely. Anyone can fact-check and come to their own conclusions.
Fair enough. And it does seem to me like the action will be new laws, though you’re right it’s hard to predict.
This one isn’t quite a product though, it’s a service. The company receives a request from a criminal: “gather information about such-and-such person and write a personalized phishing email that would work on them”. And the company goes ahead and does it. It seems very fishy. The fact that the company fulfilled the request using AI doesn’t even seem very relevant, imagine if the company had a staff of secretaries instead, and these secretaries were willing to make personalized phishing emails for clients. Does that seem like something that should be legal? No? Then it shouldn’t be legal with AI either.
Though probably no action will be taken until some important people fall victim to such scams. After that, action will be taken in a hurry.
Yeah, this is really dumb. I wonder if it would’ve gone better if the AI profiles had been more honest to begin with, using actual datacenter photos as their profile pics and so on.
Are AI companies legally liable for enabling such misuse? Do they take the obvious steps to prevent it, e.g. by having another AI scan all chat logs and flag suspicious ones?
For every person saying “religion gave me a hangup about sex” there will be another who says “religion led me to marry younger” or “religion led me to have more kids in marriage”. The right question is whether religion leads to more anti-reproduction attitudes on average, and I can’t see how that can be true when religious people have higher fertility.
I’ve held this view for years and am even more pessimistic than you :-/
In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out.
Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won’t be democracy.
Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant
It seems really hard to think of any examples of such tech.
But many do maintain an explicit approval hierarchy that ranks celibacy and sexual restraint above typical sexual behavior
I think we just disagree here. The Bible doesn’t say married people shouldn’t have sex, and no prominent Christians say that either. There are norms against nonmarital sex, and there are norms against priests having sex, but the connection you draw between these things, generalizing to all people, doesn’t sound right to me.
I don’t think this post addresses the main problem. Consider the exchange ratio between labor and land. You need land to live, and your food needs land to be grown. Will you be able to afford more land use for the same work hours, or less? (As a programmer, manager, CEO, whatever high-productivity job you like.) Well, if the same land can be used to run AIs that could do your job N times over, then your wages won’t be enough to outbid that use, and that closes the case.
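A toy version of that bidding argument (all numbers made up, just to show the shape of it):

```python
# Hypothetical toy model of the labor-vs-land exchange ratio.
human_wage = 1.0                     # output of one human worker per year
N = 50                               # the same land runs AIs doing the job N times over
ai_output_per_acre = N * human_wage

# The most a human can bid for an acre is bounded by their wage;
# an AI operator can bid up to N jobs' worth of output for the same acre.
human_bid = human_wage
ai_bid = ai_output_per_acre
print(ai_bid / human_bid)  # humans are outbid for land by a factor of N
```

Whatever the actual value of N turns out to be, as long as it’s well above 1, land rents get set by what AIs produce on that land, not by what human labor can pay.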
So basically, the only way the masses can survive long term is by some kind of handouts. It won’t just happen by itself due to tech progress and economic laws.