No. Committing a crime inflicts damage. But interacting with a person who committed a crime in the past doesn’t inflict any damage on you.
Because the smaller measure should (on my hypothesis) be enough to prevent crime, and inflicting more damage than necessary for that is evil.
Because otherwise everyone will gleefully discriminate against them in every way they possibly can.
I think the US has too much punishment as it is, with a very high incarceration rate and prison conditions sometimes approaching torture (prison rape, supermax isolation).
I’d rather give serial criminals some kind of surveillance collars that would detect reoffending and notify the police. I think a lot of such people can be “cured” by high certainty of being caught, not by severity of punishment. There’d need to be laws to prevent discrimination against people with collars, though.
Yeah, I stumbled on this idea a long time ago as well. I never drink sugary drinks, my laptop is permanently in grayscale mode and so on. And it doesn’t feel like missing out on fun; on the contrary, it allows me to not miss out. When I “mute” some big, addictive, one-dimensional thing, I start noticing all the smaller things that were being drowned out by it. Like, as you say, noticing the deliciousness of baked potatoes when you’re not eating sugar every day, or noticing all the colors in my home and neighborhood when my screen is on grayscale.
I suppose the superassistants could form coalitions and end up as a kind of “society” without too much aggression. But this all seems moot, because superassistants will get outcompeted anyway by AIs that focus on growth. That’s the real danger.
I don’t quite understand the plan. What if I get access to cheap friendly AI, but there’s also another, much more powerful AI that wants my resources and doesn’t care much about me? What would stop that more powerful AI from outplaying me for those resources, maybe by entirely legal means? Or is the idea that the publicly available AIs are somehow always the strongest ones around? That isn’t true even now.
I also agree with all of this.
For what an okayish possible future could look like, I have two stories in mind:
- Humans end up as housecats. Living among much more powerful creatures doing incomprehensible things, but still mostly cared for.
- Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.
A post-AI world where baseline humans are anything more than housecats seems hard to imagine, I’m afraid. And even getting to be housecats at all (rather than dodos) looks to be really difficult.
Thanks for writing this, it’s a great explanation-by-example of the entire housing crisis.
Well, Christianity sometimes spread by conquest, but other times it spread peacefully just as effectively. Same for democracy. So I don’t think the spread of moral values requires conquest.
Wait, but we know that people sometimes have happy moments. Is the idea that such moments are always outweighed by suffering elsewhere? It seems more likely that increasing the proportion of happy moments is doable, an engineering problem. So basically I’d be very happy to see a world like the one in the first half of your story, and I don’t think it would lead to the second half.
Your theory would predict that we’d be much better at modeling tigers (which hunted us) than at modeling antelopes (which we hunted), but in reality we’re about equally bad at modeling either, and much better at modeling other humans.
I don’t think this post addresses the main problem. Consider the exchange ratio between labor and land. You need land to live on, and your food needs land to be grown on. Will your work hours buy you more land use than before, or less? (As a programmer, manager, CEO, whatever super-high-productivity job you like.) Well, if the same land can instead be used to run AIs that can do your job N times over, then your labor won’t be enough to afford it, and that closes the case.
So basically, the only way the masses can survive long term is by some kind of handouts. It won’t just happen by itself due to tech progress and economic laws.
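A toy version of that arithmetic, with purely hypothetical numbers (the plot size, N, and wage are all made up for illustration):

```python
# Toy numbers, all hypothetical: suppose one plot of land can either support
# a human worker (food, housing), or host compute running N AI instances,
# each as productive at the human's job as the human is.
N = 100
human_wage = 1.0                      # the human's yearly output, arbitrary units
ai_output_per_plot = N * human_wage   # what the plot earns if rented out for AI use

# Land goes to the highest bidder, so rent gets bid up toward the AI use-value.
# The fraction of the plot the human's wage can still cover:
affordable_fraction = human_wage / ai_output_per_plot
print(affordable_fraction)            # 0.01 -> wages buy ~1/N of the land you need
```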
I don’t buy it. Lots of species have predators and have had them for a long time, but very few species have intelligence. It seems more likely that most of our intelligence is due to sexual selection, a Fisherian runaway that accidentally focused on intelligence instead of brightly colored tails or something.
> An ASI project would be highly distinguishable from civilian AI applications and not integrated with a state’s economy
Why? I think there’s a smooth ramp from economically useful AI to superintelligence: AIs gradually become better at many tasks, and these tasks help more and more with improving AI in turn.
For cognitive enhancement, maybe we could have a system like “the smarter you are, the more aligned you must be to those less smart than you”? So enhancement would be available, but would make you less free in some ways.
I think the problem with WBE is that anyone who owns a computer and can decently hide it (or fly off in a spaceship with it) becomes able to own slaves, torture them and whatnot. So after that technology appears, we need some very strong oversight—it becomes almost mandatory to have a friendly AI watching over everything.
What about biological augmentation of intelligence? I think if other avenues are closed, this one can still go pretty far and make things just as weird and risky. You can imagine biological self-improving intelligences too.
So if you’re serious about closing all avenues, it amounts to creating a god that will forever watch over everything and prevent things from becoming too smart. It doesn’t seem like such a good idea anymore.
Sure. But in an economy with AIs, humans won’t be like Bob. They’ll be more like Carl the bottom-percentile employee who struggles to get any job at all. Even in today’s economy lots of such people exist, so any theoretical argument saying it can’t happen has got to be wrong.
And if the argument is quantitative—say, that the unemployment rate won’t get too high—then imagine an economy with 100x more AIs than people, where unemployment is only 1% but all people are unemployed. There’s no economic principle saying that can’t happen.
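A quick sanity check on those numbers (purely illustrative counts, not a forecast):

```python
# Purely illustrative: an economy with 100x more AI workers than humans.
humans = 1_000
ais = 100 * humans                    # 100,000 AI workers
labor_force = humans + ais            # 101,000 workers total

unemployed = humans                   # suppose every single human is out of work
rate = unemployed / labor_force
print(f"{rate:.2%}")                  # ~0.99% -- "low unemployment", yet all humans are jobless
```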
I don’t understand Eliezer’s explanation. Imagine Alice is hard-working and Bob is lazy. Then Alice can make goods and sell them to Bob. Half the money she’ll spend on having fun, the other half she’ll save. In this situation she’s rich and has a trade surplus, but the other parts of the explanation (different productivity between different parts of Alice? inability to judge her own work fairly?) don’t seem to be present.