When asked a simple question about broad and controversial assertions, it is rude to link to outside resources tangentially related to the issue without providing (at minimum) a brief explanation of what those resources are intended to indicate.
I don’t speak Old English, unfortunately. Could someone who does please provide me with a rough translation of the provided passage?
It isn’t the sort of bad argument that gets refuted. The best someone can do is point out that there’s no guarantee that MNT is possible. In which case, the response is ‘Are you prepared to bet the human species on that? Besides, it doesn’t actually matter, because [insert more sophisticated argument about optimization power here].’ It doesn’t hurt you, and with the overwhelming majority of semi-literate audiences, it helps.
Of course there is. For starters, most of the good arguments are much more difficult to concisely explain, or invite more arguments from flawed intuitions. Remember, we’re not trying to feel smug in our rational superiority here; we’re trying to save the world.
That’s… not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT—I’d bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.
All we’re doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There’s nothing wrong with that. Would we need to do it if people were rational agents? No—but, as you may be aware, we definitely don’t live in that universe.
I don’t have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like ‘well, the machine won’t be able to do anything on its own because it’s just a computer—it’ll need humanity, therefore, it’ll never kill us all.’ Even if MNT is impossible, that’s still true—but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn’t guaranteed to happen, but it’s also not unlikely, and it’s a powerful educational tool for showing people the sorts of things that strong AI may be capable of.
There’s a deeper question here: ideally, we would like our CEV to make choices for us that aren’t our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.
One obvious way to solve the problem you raise is to treat ‘modifying your current value approximation’ as an object-level action by the AI, and one that requires it to compute your current EV—meaning that, if the logical consequences of the change (including all the future changes that the AI predicts will result from that change) don’t look palatable to you, the AI won’t make the first change. In other words, the AI will never assign you a value set that you find objectionable right now. This is safe in some sense, but not ideal. The profoundly racist will never accept a version of their values which, because of its exposure to more data and fewer cognitive biases, isn’t racist. Ditto for the devoutly religious. This model of CEV doesn’t offer the opportunity for growth.
It might be wise to compromise by locking the maximum number of edges in the graph between you and your EV to some small number, like two or three—a small enough number that value drift can’t take you somewhere horrifying, but not so tightly bound up that things can never change. If your CEV says it’s okay under this schema, then you can increase or decrease that number later.
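To make that concrete, here’s a minimal sketch of the gated-update-plus-drift-cap idea. Every name in it (the `approves` check, `predict_consequences`, the drift cap constant) is a placeholder I made up for illustration, not anything from an actual CEV proposal:

```python
# Sketch, not an implementation: a value update is accepted only if your
# *current* values endorse the predicted consequences of adopting it, and
# the chain of prior updates stays under a small drift cap.

MAX_DRIFT_DEPTH = 2  # cap on edges between you and your extrapolated values


def propose_update(current_values, candidate_values, drift_depth, predict_consequences):
    """Return the (possibly unchanged) value set and the new drift depth."""
    if drift_depth >= MAX_DRIFT_DEPTH:
        # The cap keeps value drift from wandering somewhere horrifying.
        return current_values, drift_depth

    # Predict everything that follows from the change, including the further
    # value changes the AI expects the change to cause downstream.
    predicted_outcome = predict_consequences(candidate_values)

    if current_values.approves(predicted_outcome):
        # The change looks palatable to you *right now*, so it can be made.
        return candidate_values, drift_depth + 1

    # Objectionable today, so the first change is never made.
    return current_values, drift_depth
```

Under this schema, raising or lowering `MAX_DRIFT_DEPTH` later would itself be an update that has to pass the same gate.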
I’ve read some of Dennett’s essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a ‘noisy quorum’ model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn’t actually that surprising. It’d be hard to design a human-style system that didn’t have a similar internal behavior that it could talk about.
Yeah, the glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure—and if you don’t track hormonal regulation properly, you’re going to be in for a world of hurt. Still, I think the point stands.
Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain.
Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists who are, in my opinion, allowing some very suspect intuitions to dominate their thinking. I don’t have any money right now to propose a bet, but if it turns out that the brain can’t be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat.
Consciousness is a much thornier nut to crack. I don’t know that anyone has a good handle on that yet.
Daniel Dennett’s papers on the subject seem to be making a lot of sense to me. The details are still fuzzy, but I find that, having read them, I am less confused on the subject, and I can begin to see how a deterministic system might be designed that would naturally begin to have behavior that would cause it to say the sorts of things about consciousness that I do.
When I was younger, I picked up ‘The Emperor’s New Mind’ in a used bookstore for about a dollar, because I was interested in AI, and it looked like an exciting, iconoclastic take on the idea. I was gravely disappointed when it took a sharp right turn into nonsense right out of the starting gate.
Building a whole brain emulation right now is completely impractical. In ten or twenty years, though… well, let’s just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.
I’d also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There’s no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.
Evidence?
EDIT: Sigh. Post has changed contents to something reasonable. Ignore and move on.
Reply edit: I don’t have a copy of your original comment handy, so I can’t accurately comment on what I was thinking when I read it. However, I don’t recall it striking me as a joke, or even an exceptionally dumb thing for someone on the internet to profess belief in.
Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you’d be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can’t do, at least, not without a human writing a bunch of new code to add a module that does that new thing. It’s not powerful in the way that a true GAI would be.
That said, Watson is a good deal less narrow than, say, Deep Blue. Watson has a great deal of analytic depth in a reasonably broad domain (structured knowledge extraction from unformatted English), which is a major leap forward. You might say that Watson is a rough analog to a language center connected to a memory system sitting in a box. It’s not a GAI by itself, but it could be a substantial component of one down the line.
Zero? Why?
At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn’t ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I’d be very surprised if nobody decided to play god like that. People go to Renaissance fairs, for goodness’ sake. Do you think that nobody would take the opportunity to bring back whole lost eras of humanity in bottle-worlds?
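For what it’s worth, here is the sort of back-of-envelope arithmetic behind that ‘hundreds of kilograms’ figure. Both constants are assumptions on my part (Bremermann’s limit for compute per kilogram, and a Bostrom-style guess at the total operation count for a coarse ancestor simulation); treat it as a sanity check, not a derivation:

```python
# Back-of-envelope sanity check, not a derivation. Both constants are
# assumptions: Bremermann's limit for compute-per-kilogram, and a rough
# Bostrom-style range for the total cost of a coarse ancestor simulation.

BREMERMANN_OPS_PER_SEC_PER_KG = 1.36e50   # bit-operations per second per kg
SIM_COST_OPS = (1e33, 1e36)               # assumed total operations, low/high

mass_kg = 300                             # "hundreds of kilograms"
ops_per_second = BREMERMANN_OPS_PER_SEC_PER_KG * mass_kg

for cost in SIM_COST_OPS:
    print(f"{cost:.0e} ops -> {cost / ops_per_second:.2e} seconds of runtime")

# Even the high estimate finishes in a tiny fraction of a second, so at the
# theoretical limits the binding constraint is engineering, not raw physics.
```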
As for the other point, if we decide that our simulators don’t resemble us, then calling them ‘people’ is spurious. We know nothing about them. We have no reason to believe that they’d tend to produce simulations containing observers like us (the vast majority of computable functions won’t). Any speculation, if you take that approach, that we might be living in a simulation is entirely baseless and unfounded. There is no reason to privilege that cosmological hypothesis over simpler ones.
I know some hardcore C’ers in real life who are absolutely convinced that centrally-planned Marxist/Leninist Communism is a great idea, and they’re sure we can get the kinks out if we just give it another shot.
Unless P=NP, I don’t think it’s obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we’re alarmingly close to solipsism again.
I guess one way to test this hypothesis would be to try to construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior (“Project: Piss Off God”), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn’t be practically computable in our universe. If nothing interesting happens, then we have evidence that either we aren’t in a simulation, or P=NP.
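The purely digital analogue of the asymmetry that experiment relies on is something like factoring: checking a claimed factorization is one multiplication, while finding it is believed to be infeasible classically for large inputs. A toy sketch (the numbers and names below are made-up placeholders):

```python
# Toy illustration of the verify-easy / compute-hard asymmetry. The numbers
# are small placeholders; a real instance would use a product of two very
# large primes, where the brute-force direction becomes utterly infeasible
# while verification stays trivial.

def verify_factorization(n, p, q):
    """Cheap direction: a claimed factorization is valid iff p * q == n."""
    return 1 < p < n and 1 < q < n and p * q == n

def brute_force_factor(n):
    """Expensive direction: trial division, with cost growing like sqrt(n),
    i.e. exponentially in the bit-length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

n = 1000003 * 1000033                              # placeholder composite
print(verify_factorization(n, 1000003, 1000033))   # fast: True
print(brute_force_factor(n))                       # already noticeably slower
```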
We can be a simulation without being a simulation created by our descendants.
We can, but there’s no reason to think that we are. The simulation argument isn’t just ‘whoa, we could be living in a simulation’ - it’s ‘here’s a compelling anthropic argument that we’re living in a simulation’. If we disregard the idea that we’re being simulated by close analogues of our own descendants, we lose any reason to think that we’re in a simulation, because we can no longer speculate on the motives of our simulators.
That doesn’t actually solve the problem: if you’re simulating fewer people, that weakens the anthropic argument proportionately. You’ve still only got so much processor time to go around.
It’s going to be really hard to come up with any models that don’t run deeply and profoundly afoul of the Occam prior.