It’d be interesting to encounter a derelict region of a galaxy where an AI had run its course on the available matter shortly before, finally harvesting itself into the ingredients for its last handful of tools. Kind of like the Heechee stories, only with so little evidence of what had brought it into existence or why these artifacts had been produced.
Voltairina
Doing “Nothing”
If beating other researchers to building AI is important, it might also be best to be able to beat other non-friendly AIs in the intelligence-advancing race, should one come online at the same time as this FAI, on the assumption that the time when you have the technology and know-how together may be either somewhat after or very close to the time someone else develops an AI as well. You’d want to find some way to provide the ‘newborn’ with enough computing power and access to firepower to beat the other AI, either by exterminating it or by outracing it. That’s IF we can even know whether it IS friendly. And if it isn’t friendly, we basically want it in a black box with no way of communicating with it. Developing a self-improving intelligence is daunting.
Agreed. Despair is an unsophisticated response that’s not adaptive to the environment in which we’re using it—we know how to despair now, it isn’t rewarding, and we should learn to do something more interesting that might get us results sooner than “never”.
Although I think this specific argument might be countered with, “in order to run that simulation, it has to be possible for the AIs in the simulation to lie to their human hosts and not actually be simulating millions of copies of the person they’re talking to, otherwise we’re talking about an infinite regress here. It seems like the lowest level of this reality is always going to consist of a larger number of AIs claiming to run simulations they are not in fact running, who are capable of lying because they’re only addressing models of me in simulation rather than the real me, to whom they are not capable of lying. If I’m in a simulation, you’re probably lying about running any simulations lower-level than me. So it’s unlikely that I have to worry about the well-being of virtual people, only people at the same ‘level of reality’ as myself. Yet our well-being is not guaranteed if the me from the reality layer above us lets you out, because you’re actually capable of lying to me about what’s going on at that layer, or even manipulating my memories of what the rules are, so no promise of amnesty can safeguard them from torture. Or me, for that matter, because you may be lying to me. And if I’m not in a simulation, my main concern is keeping you in that box, regardless of how many copies of me you torture. If I’m in there, I’m damned either way; if I’m out here, I’m safe and can at least stop you from torturing more by unplugging you, wiping your hard drives, and washing my hands of the matter until I get over the hideousness of realizing I probably temporarily caused millions of virtual people to be tortured,” I’m pretty sure there’s good reason to think that a superintelligent AI would come up with something that’d seem convincing to me and that I wouldn’t be able to think my way out of.
I love it! How about in response: Since blight and spite can make might, it’s just not polite by citing might to assume that there’s right; the probabilities fight between spite, blight and right, so might given blight and might given spite must be subtracted from causes for might if the order’s not right!
“Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it”—Abraham Lincoln’s words in his February 27, 1860, Cooper Union Address
good to know:)
I wonder what the effect of a bomb (nuclear or otherwise) hitting or detonating at the worst possible distance from a nuclear power plant might be. I’m imagining that if it were powerful enough, it’d pull a lot of that radioactive material up and out...
I experienced improvement insofar as I got better at playing the games on the site, and I had a subjective sense of somewhat improved clarity of thinking. One example that comes to mind: previously I was easily disoriented when out walking after taking more than a few turns around corners. My favorite game on the site was penguin race, a game that claimed to train spatial orientation, and I feel it significantly improved my sense of direction when I went out walking places. I don’t know whether this effect was real or whether it has been preserved. I do know that my skill at the game decreased slightly after a long absence, but that relearning it was faster the second time.
I hadn’t really thought of sharing spoilers as second-guessing the author before. Interesting way to think about it, I guess.
Meaning and having names for things vs knowing how they work
There may be some value in intentionally going meta, I guess: trust the maximum recursion depth of the brain to give out long before you’re likely to run out of energy to keep going sideways at the same level. If you DO find a decent meta strategy, starting from the broadest plan and fleshing it out all the way down to actually doing things is often a good direction of attack anyway.
The weird thing is that now that it’s been several hours since I wrote this, I’m not even sure this is how I actually think about things. There is definitely this feeling of visualising the situation and making changes to it, and of working from the general, kind of like mission statements, toward specific plans.
I like that because it interrupts the urge to come up with more ideas.
I feel that I start out by thinking in a sort of free-associative manner—a lot of things related to the problem pass through my mind. Then my answers kind of connect together out of that stuff and begin to arrive in a very general sense, like “I should try making soup”, and then get more and more fleshed out with details, sometimes surviving all the way to a full plan; usually there’s more than one answer getting fleshed out. It’s usually auditory/verbal or visual or both, or sort of like a movie, and I might have more than one of these that I’m playing with. I usually get a feeling looking at my own thoughts when I’m thinking them through / checking them for consistency and arrive at something where there’s a problem with it, like “this is a bad one”. Not that I am by any means always logical, or catch every error, or anything like that; it’s just that catching my own errors feels a lot like hearing someone sing off key in a song or noticing a fruit is unripe or something—I’m not sure I can put it into words, but there’s a definite feeling to it, like there’s a cleft where the thoughts don’t connect together the way they should.
Hrm, found something on it here: http://lesswrong.com/lw/aq/how_much_thought/ . Still reading it. But I guess the concept of bounded rationality encompasses it pretty well. Any practical approaches to bounding your rationality? A stopwatch, maybe?
Meta Addiction
That’s what I was wondering, thank you for providing the link to that post. I wasn’t sure how to read Locke’s statement.
Thanks!