I don’t mean you can feasibly program an AI to do that. I just mean that it’s something you can tell a human to do and they’d know what you mean. I’m talking about deontological ethics, not programming a safe AI.
The same reasoning would suggest that bisexuals should only get into same-sex relationships. Would you say that as well?
I disagree with the idea that they can’t have kids. They can adopt. The girl can go to a sperm bank.
Safe AI sounds like it does what you say as long as it isn’t stupid. Friendly AIs are supposed to do whatever’s best.
Once AI exists, in the public, it isn’t containable.
You mean like the knowledge of how it was made is public and anyone can do it? Definitely not. But if you keep it all proprietary it might be possible to contain.
But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe.
I suppose what we should do is figure out how to make friendly AI, figure out how to create boxed AI, and then build an AI that’s probably friendly and probably boxed, so it’s less likely that everything will go horribly wrong.
You would need some assurance that the AI would not try to manipulate the output.
Manipulate it to do what? The idea behind mine is that the AI only cares about answering the questions you pose it given that it has no inputs and everything operates to spec. I suppose it might try to do things to guarantee that it operates to spec, but it’s supposed to be assuming that.
There’s a difference between creating someone with certain values and altering someone’s values. For one thing, it’s possible to prohibit messing with someone’s values, but you can’t create someone without creating them with values. It’s not like you can create an ideal philosophy student of perfect emptiness.
There are certainly ways you can usefully modify yourself. For example, giving yourself a heads-up display. However, I’m not sure how much it would end up increasing your intelligence. You could get runaway super-intelligence if every improvement increases the best mind current!you can make by at least that much, but if each improvement increases it by less than that, it won’t run away.
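A toy model of that threshold (every number here is made up): treat each round of self-modification as buying some capability gain, with the next round’s gain scaled by a feedback factor. At a factor of 1 or more you get the runaway; below 1 the gains form a convergent geometric series and fizzle out.

```python
# Toy model of recursive self-improvement. All numbers are hypothetical.
# Each round the mind gains `gain` capability; the next round's gain is
# feedback * gain. feedback >= 1 diverges (runaway); feedback < 1 converges.

def total_capability(initial_gain, feedback, rounds=1000):
    capability, gain = 1.0, initial_gain
    for _ in range(rounds):
        capability += gain
        gain *= feedback
    return capability

print(total_capability(0.1, 0.5))  # converges to ~1.2: no runaway
print(total_capability(0.1, 1.1))  # ~2.5e41: runaway super-intelligence
```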
I would.
The money that’s “at stake” is the amount you spend to play the game. Once the game begins, you get 2^(n) dollars, where n is the number of successive heads you flip.
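A quick simulation of that game (payout rule from the comment; the trial count is arbitrary): each term of the expected-value sum contributes 2^(n) · 2^(-(n+1)) = 1/2, so the series diverges, and the sample mean keeps drifting upward as you run more trials.

```python
import random

# St. Petersburg-style game as described: flip until tails; payout is
# 2**n dollars, where n is the number of successive heads flipped first.
def play_once():
    n = 0
    while random.random() < 0.5:  # heads
        n += 1
    return 2 ** n

trials = 100_000  # arbitrary
print(sum(play_once() for _ in range(trials)) / trials)
# The mean has no stable limit: rare long streaks keep pulling it up.
```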
That adds up to 100%. You need to leave room for other things, like they’re trolling us for the fun of it.
“Slave” makes it sound like we’re making it do something against its will. “Benevolent AI” would be better.
I have thought about something similar with respect to an oracle AI. You program it to try to answer the question assuming no new inputs and everything works to spec. Since spec doesn’t include things like the AI escaping and converting the world to computronium to deliver the answer to the box, it won’t bother trying that.
I kind of feel like anything short of friendly AI is living on borrowed time. Sure, the AI won’t take over the world to convert it to paperclips, but that won’t stop some idiot from asking it how to make paperclips. I suppose it could still be helpful. It could at the very least confirm that AIs are dangerous and get people to worry about them. But people might be too quick to ask for something that they’d only say is a good idea after thinking about it for a while, or something like that.
I think that the first universe is sufficiently more likely than the second that you shouldn’t assume it’s a coincidence, and you should expect Wingardium Leviosa to keep working.
Let me make a simpler form of this problem. Suppose I flip a fair coin a thousand times, and it just happens to land on heads every time. How do I find out that this is a fair coin, and that I don’t actually have a trick coin that always lands on heads? The answer is that I can’t. Any algorithm that tells me that it’s fair is going to fail in the much more likely circumstance that I have a coin that always lands on heads. The best I can do is show that I have 1000 bits of evidence in favor of a trick coin, update my priors accordingly, and use this information when betting.
The good news is that you will only get a coin that lands on heads a thousand times about 2^(-1000) of the time, which works out to roughly 9.33×10^(-300) percent, so you won’t be this wrong by chance very often. In general, you can calculate how likely you are to be wrong, and hedge your bets accordingly.
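As a sketch of that update (the one-in-a-million prior on a trick coin is a number I invented for illustration): a thousand heads is 1000 bits of likelihood-ratio evidence, which swamps any sane prior.

```python
from math import log2

# Posterior odds of "trick coin that always lands heads" vs. "fair coin"
# after 1000 observed heads. The prior is an arbitrary stand-in.
prior_odds = 1e-6                      # hypothetical: one in a million coins is trick
likelihood_ratio = 1.0 / 0.5 ** 1000   # P(data | trick) / P(data | fair) = 2**1000
posterior_odds = prior_odds * likelihood_ratio

print(log2(likelihood_ratio))  # 1000.0 -- the "1000 bits of evidence"
print(posterior_odds)          # ~1e295: bet heavily on the trick coin
```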
Obviously it would distort our view of how quickly the universe decays into a true vacuum. There’s also the mangled worlds idea to explain the Born rule.
I’m pretty sure I’ve seen this before, with the example of our universe being a false vacuum with a short half-life.
I once had a homework problem where I was supposed to use some kind of optimization algorithm to solve the knapsack problem. The teacher said that, while it’s technically NP-complete, you can generally solve it pretty easily. Although the homework used such a small instance that the algorithm pretty much came down to checking every combination.
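For an instance that small, “checking every combination” is literally viable. A sketch (items and capacity are made up); the subset count is O(2^(n)) in the number of items, which is exactly why the general problem is hard but homework instances are easy:

```python
from itertools import combinations

# Brute-force 0/1 knapsack: try every subset, keep the best that fits.
# Fine for homework-sized instances; the subset count blows up as 2**n.
def knapsack(items, capacity):  # items: list of (value, weight) pairs
    best = (0, ())
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, (sum(v for v, _ in subset), subset))
    return best

# Made-up instance: the best answer takes the last two items for value 220.
print(knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))
```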
TL;DR: Soylent contains safe levels of those heavy metals, but enough that they are required to warn people in the state of California. It’s not uncommon for food to have heavy metals at that level.
There are two major problems with how the earth is currently set up. Only the surface is habitable, and it’s a sphere, which is known for having the minimum possible surface area for its volume. A Matrioshka brain would be a much more optimal environment. Although that depends on your definition of “human being”.
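A quick sanity check on the sphere claim: give a cube the same volume and it already beats the sphere on surface area.

```python
from math import pi

# Surface area of a sphere vs. a cube of equal volume.
volume = 1.0                                  # units are arbitrary
radius = (3 * volume / (4 * pi)) ** (1 / 3)
sphere_area = 4 * pi * radius ** 2            # ~4.84
cube_area = 6 * volume ** (2 / 3)             # 6.0
print(sphere_area, cube_area)  # the sphere has the least surface per volume
```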
In other words, laziness and overconfidence bias cancel each other out, and getting rid of the second without getting rid of the first will cause problems?
If you’re a psychologist and you care about describing people, change the axioms. If you’re a rationalist and you care about getting things done, change yourself.