Don’t bogart that joint, my friend.
It is true that the topic is too large for casual followers (such as myself). So rather than aiming to refine any of the points personally, I wonder in what ways Robin has convinced Eli, and vice versa. Certainly, if this were a productive debate, they would be able to describe how they are coming to consensus. And from my perspective, there are distinct signals that expectations of a successful debate decline as the posts become acknowledged more for their quality as satire.
“In a foom that took two years…”
The people of the future will be in a considerably better position than you to evaluate their immediate future. More importantly, they are in a position to modify their future based on that knowledge. This anticipatory reaction is what makes both of your opinions exceedingly tenuous. Everyone else who embarks on pinning down the future at least has the sense to sell books.
In light of this, the goal should be to use each other’s complementary talents to find the hardest, most rock-solid platform, not to sell the other a castle made of sand.
And I believe that if two very smart people manage to agree on where to go for lunch, they have accomplished a lot for one day.
Perhaps you are marginally ahead of your time, Eliezer, and the young individuals who will flesh out the theory are still traipsing about in diapers. In which case, being either a billionaire or a PhD makes it more likely you can become their mentor. I’d do the former if you have a choice.
What could an AI do, yet still be unable to self-optimize? Quite a bit, it turns out: at a minimum, everything that a modern human can do, and possibly a great deal more, since we have yet to demonstrate that we can engineer intelligence. (I admit here that it may turn out to be college-level material once discovered.)
If we define the singularity as the wall beyond which is unpredictable, I think we can have an effective singularity without FOOM. This follows from admitting that we can have computers that are superior to us in every way, without even achieving recursive modification. These machines then have all the attendant advantages of limitless hardware, replicability, perfect and expansive memory, deep serial computation, rationality by design, limitless external sensors, etc.
If it is useless to predict past the singularity, and if foom is unlikely to occur prior to the singularity, does this make the pursuit of friendliness irrelevant? Do we have to postulate foom = singularity in order to justify friendliness?
While awaiting my productivity to reemerge from chaos I stumbled upon an old interview with Ayn Rand and Tom Snyder in which she concludes with ‘Thank God for America’. So there ya go.
meh. My last point doesn’t make sense. Fixing the bias isn’t equivalent to fixing your problem.
So it can be a mind projection fallacy even when you are ultimately reasoning about your own mind? Something needs to cancel out in the divisor. A more accurate assessment of others’ mental nature may not assist you when you then tie it back into your own. You have mentioned this productivity issue a couple of times, and yet don’t want solutions suggested. Now that could be because the solution itself is OT (identifying a bias is OK, but fixing one is OT), or because you don’t think what works for others could actually work for you.
The Socrates paragraph stands out to me. It doesn’t seem sporting to downplay one approach relative to another by creating two scenarios, one being what a five-year-old might say and the other what a college grad (or someone smart enough to go to college) might say. Can that point be illustrated without giving such an unbalanced appearance?
The problem, of course (to the discussion and to the above example), is: how much do you think you know about the underlying mechanics of what you are analyzing?
We know we are in for a dramatic finale when the history of the universe is recounted as prologue. Fortunately for us, the searchable neighborhood has always held superior possibilities. And one would expect the endpoint of intelligence to be when this ceases to be the case. Perhaps there will be a sign at the end of the universe that says ‘You can’t get there from here. Sorry.’
The leaders in the Netflix Prize competition for the last couple of years have used ensembles of large numbers of models with a fairly straightforward integration procedure. You can only get so far with a given model, but if you randomly scramble its hyperparameters or training procedure and then average multiple runs together, you will improve your performance. The logical path forward is to derandomize this procedure: figure out how to predict, a priori, which model variations will yield more accurate predictions and which won’t. But of course, until you figure out how to do that, random is better than nothing.
As a process methodology, it seems useful to try random variations, find the ones that outperform, and THEN seek to explain why.
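To make the averaging trick concrete, here is a minimal sketch in Python. The model (ridge regression), the scrambled hyperparameter, and the toy data are all illustrative assumptions of mine; the Netflix Prize teams used far more elaborate models, but the structure of the trick is the same.

```python
# Minimal sketch: train several copies of the same model with randomly
# scrambled hyperparameters, then average their predictions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy regression data standing in for the ratings problem.
X = rng.normal(size=(1000, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def random_model():
    """One ensemble member with a randomly drawn hyperparameter."""
    alpha = 10 ** rng.uniform(-3, 2)  # scrambled regularization strength
    return Ridge(alpha=alpha)

# Average the predictions of many randomized runs.
ensemble_pred = np.mean(
    [random_model().fit(X_train, y_train).predict(X_test) for _ in range(25)],
    axis=0,
)
single_pred = Ridge().fit(X_train, y_train).predict(X_test)

print("ensemble MSE:", np.mean((ensemble_pred - y_test) ** 2))
print("single MSE:  ", np.mean((single_pred - y_test) ** 2))
```

The point of the sketch is only the shape of the procedure: no one of the randomized members is predictably better, but their errors are partly uncorrelated, so the average tends to beat a single run.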
“…people refusing to contemplate the real values of the opposition as the opposition sees it…”
Popular news and punditry seem saturated with this refusal, to the point that I am tempted to characterize the media’s real values in a wholly unbecoming and unfairly generalized manner. It would be a nice evolution of society if we introduced a ‘rationality’ class into the public school curriculum. Or perhaps developed a ‘Bayes Scouts’ with merit badges.
haha—damn—you beat me to it: http://lists.extropy.org/pipermail/extropy-chat/2008-March/042369.html
It’s a good thing that Eli’s out of the AI-box game. He’s too old to win anymore anyway—not as sharp. And all the things he’s been studying for the last 5+ years would only interfere with getting the job done. I would have liked to have seen him in his prime!
Speaking of gatekeeper and keymaster… Does the implied ‘AI in a box’ dialogue remind anyone else of the cloying and earnest attempts of teenagers (usually male) to cross certain taboo boundaries?
Oh well, probably just me.
In keeping with that metaphor, however, I suspect part of the trick is to make the gatekeeper unwilling to disappoint the AI.
It seems relevant to the above post that the market reaction to the bailout passing on Friday was decidedly negative.
Which puzzles me.
Consequentialist: Is it a fair universe where the wealthy live forever and the poor die in the relative blink of an eye? It seems hard for our current society to look past that when setting public policy. This doesn’t necessarily explain why there isn’t more private money put to the purpose, but I think many of the intelligent and wealthy at the present time would see eternal-life quests as a millennia-long cliché of laughable selfishness, not in tune with leaving a respectable legacy.
Strong AI doesn’t have to be the only thing that’s really frikkin’ hard.
Apparently Luke didn’t have to try for very long: http://www.cracked.com/article_16625_p2.html
We’ll likely see how long someone can spend straining to lift the starship out of the swamp, with no success, before giving up. More zebras than Jedi masters in this near, near galaxy.
“The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through ‘impossible’ problems to get to any sort of Good future whatsoever.”
But this is just repeating the same thing over and over. ‘Precise steering’ in your sense has never existed historically, yet we exist in a non-null state. This is essentially what Robin extrapolates as continuing, while you postulate a breakdown of historical precedent via abstractions he considers unvetted.
In other words, ‘loss of control’ is begging the question in this context.