That is an excellent question.
I must agree with this, although video and most writing OTHER than short essays and polemics would be mostly novel and interesting.
If every snide, unhelpful jokey reply you post is secretly a knowing reference to something only one other person in the world can recognize, I retract every bad thing I ever said about you.
Is the ability to explicitly (at a high, abstract level) reach down to the initial hypothesis generation and include, raise, or add hypotheses for consideration, then, always a pathology?
I can imagine a system where extremely low-probability hypotheses, by virtue of their complexity or the special evidence required, might need to be formulated or added by high-level processes. But you could simply view that as another failure of the generation system, and require that even extremely rare or novel structures of hypotheses go through channels, to avoid this kind of disturbance of the natural frequencies, as it were.
It may not be a completely generic bias or fallacy, but it certainly can affect more than just human decision processes. A number of primitive systems exhibit pathologies similar to what Eliezer is describing; speech recognition systems, for example, have a huge issue almost exactly isomorphic to this. Once some interpretation of an audio wave is a hypothesis, it is chosen far in excess of its real probability or confidence. This is the primary weakness of rule-based voice grammars: their pre-determined possible interpretations lead to unexpected inputs being slotted into the nearest pre-existing hypothesis, rather than leading to a novel interpretation. The use of statistical grammars to try to pound interpretations back to their ‘natural’ probabilistic initial weight is an attempt to avoid this issue.
This problem is also hidden in a great many AI decision systems, within the ‘hypothesis generation’ system or its equivalent. However elegant the ranking and updating system, if your initial list of possibilities is weak, you distort the whole decision process.
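A minimal sketch of that distortion, with invented likelihood numbers rather than anything from a real recognizer: if the correct interpretation never makes it onto the hypothesis list, normalization silently hands its probability mass to the nearest pre-existing guesses, no matter how elegant the updating is.

```python
# Toy illustration of hypothesis-list distortion; all numbers are made up.

def posterior(likelihoods):
    """Normalize raw likelihoods into a posterior over the listed hypotheses."""
    total = sum(likelihoods.values())
    return {h: round(l / total, 3) for h, l in likelihoods.items()}

# How well each candidate interpretation explains the observed audio.
full_list = {
    "recognize speech": 0.05,
    "wreck a nice beach": 0.03,
    "the novel interpretation": 0.40,  # what was actually said
}

# A rule-based grammar that never generated the novel interpretation.
restricted_list = {h: l for h, l in full_list.items() if h != "the novel interpretation"}

print(posterior(full_list))        # the novel interpretation dominates, as it should
print(posterior(restricted_list))  # its mass is silently reassigned to the listed guesses
```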
The bloodstained sweater in the original song refers to an urban legend that Mr. Rogers was a Marine Sniper in real life.
Why on earth wouldn’t I consider whether or not I would play again? Am I barred from doing so?
If I know that the card game will continue to be available, and that Omega can truly double my expected utility every draw, then either it’s a relatively insignificant increase of expected utility over the next few minutes it takes me to die, in which case it’s a foolish bet compared to my expected utility over the decades I conservatively have left, or Omega can somehow change the whole world in the radical fashion needed for my expected utility over the next few minutes it takes me to die to dwarf my expected utility right now.
This paradox seems to depend on the idea that the card game is somehow excepted from the 90% likely doubling of expected utility. As I mentioned before, my expected utility certainly includes the decisions I’m likely to make, and it’s easy to see that continuing to draw cards will result in my death. So, it depends on what you mean. If it’s just doubling expected utility over my expected life IF I don’t die in the card game, then it’s a foolish decision to draw the first or any number of cards. If it’s doubling expected utility in all cases, then I draw cards until I die, happily forcing Omega to make verifiable changes to the universe and myself.
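To make the two readings concrete (only the 90%/10% structure comes from the thought experiment; the rest is an illustrative sketch):

```python
# Illustrative only; nothing beyond the 90% / 10% structure comes from the setup above.

P_SURVIVE_ONE_DRAW = 0.9

# “Continuing to draw cards will result in my death”: the chance of surviving a
# policy of n consecutive draws falls off geometrically.
for n in (1, 5, 20, 100):
    print(f"survive {n} draws: {P_SURVIVE_ONE_DRAW ** n:.4f}")

# Under the first reading (the doubling only counts IF I don’t die in the game),
# an always-draw policy almost surely leaves nothing to collect. Under the second
# reading, Omega has to deliver the doubled expected utility in the few minutes
# before the near-certain fatal draw, hence the verifiable changes to the universe.
```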
Now, there are terms on which I would take the one-round, ‘IF you don’t die in the card game’ version of the gamble, but it would probably depend on how it’s implemented. I don’t have a way of accessing my utility function directly, and my ability to appreciate maximizing it is indirect at best. So I would be very concerned about how Omega plans to double my expected utility, and how I’m meant to experience it.
In practice, of course, any possible doubt that it really is Omega offering you this gamble far outweighs any possibility of such lofty returns, but the thought experiment has some interesting complexities.
I see; I misparsed the terms of the argument. I thought it was doubling my current utilons; you’re positing that I have a 90% chance of doubling my currently expected utility over my entire life.
The reason I bring up the terms in my utility function is that they reference concrete objects, people, time passing, and so on. So measuring expected utility, for me, involves projecting the course of the world, and my place in it.
So, assuming I follow the suggested course of action, and keep drawing cards until I die, to fulfill the terms, Omega must either give me all the utilons before I die, or somehow compress the things I value into something that can be achieved in between drawing cards as fast as I can. This either involves massive changes to reality, which I can verify instantly, or some sort of orthogonal life I get to lead while simultaneously drawing cards, so I guess that’s fine.
Otherwise, given the certainty that I will die essentially immediately, I certainly don’t recognize that I’m getting a 90% chance of doubled expected utility, as my expectations certainly include whether or not I will draw a card.
I seem to have missed some context for this. I understand that once you’ve gone down the road of drawing the cards you have no decision-theoretic reason to stop, but why would I ever draw the first card?
A mere doubling of my current utilons measured against a 10% chance of eliminating all possible future utilons is a sucker’s bet. I haven’t even hit a third of my expected lifespan given current technology, and my rate of utilon acquisition has been accelerating. That’s quite aside from the fact that I’m certain my utility function includes terms for living a long time and experiencing certain anticipated future events.
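To put rough numbers on that (current_utilons and future_utilons are invented magnitudes, not anything stated here):

```python
# Invented magnitudes, just to make the comparison explicit.

current_utilons = 100.0   # what I’ve accumulated so far
future_utilons = 1000.0   # what I still expect, given an accelerating rate of acquisition

# One draw: 90% chance of gaining my current utilons again,
# 10% chance of eliminating every future utilon.
ev_of_draw = 0.9 * current_utilons - 0.1 * future_utilons
print(ev_of_draw)  # -10.0: negative whenever future_utilons > 9 * current_utilons
```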
It’s useful evidence that EURISKO was doing something. There were some extremely dedicated and obsessive people involved in Traveller, back then. The idea that someone unused to starship combat design of that type could come and develop fleets that won decisively two years in a row seems very unlikely.
It might be that EURISKO acted merely as a generic simulator of strategy and design, and Lenat did all the evaluating, and no one else in the contest had access to simulations of similar utility, which would negate much of the interest in EURISKO, I think.
There are a number of DARPA and IARPA projects we pay attention to, but I’d largely agree that their approaches and basic organization make them much less worrying.
They tend towards large, bureaucratically hamstrung projects, like PAL, which the last time I looked included work and funding for teams at seven different universities, or they suffer from extremely narrow focus, like their intelligent communication initiatives, which went from being about adaptive routing via deep introspection of multimedia communication and intelligent networks to just being software radios and error correction.
They’re worth keeping an eye on, mostly because they have the money to fund any number of approaches, often over long periods. But the biggest danger isn’t their funded, stated goals; it’s the possibility of someone going off-target and working on generic AI in the hopes of increasing their funding or scope at the next evaluation, which could be a year or more away.
I jumped the theist fence after reading a book whose intellectual force was too great to be denied outright, and too difficult to refute point by point. I hate being wrong, and feeling stupid, and the arguments from the book stayed in my thoughts for a long time.
I didn’t formalize my thoughts until later, but if my atheism had a cause, it was THE CASE AGAINST GOD by George H. Smith. I was very emotionally satisfied with my religion and its community beforehand.
I use ManicTime, myself.
I’m not really interested in actual party divisions so much as I am interested in a survey of beliefs.
Affiliation seems like much less useful information, if we’re going to use Aumann-like agreement processes on this survey stuff.
Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.
Doesn’t that make the problem worse, though?
If the feedback is the esteem of students in the field, then you’re rewarding the mentor who picks his battles carefully, who can sell what happened in any encounter in a positive and understandable light. The honest mentors and ‘researchers’ who approach a varied population, analyze their performance without upselling, and accrete performance over time (as you’d expect with a real, generic skill) will lose out.
I found the last survey interesting because of the use of ranges and confidence measures. Are there any other examples of this that a community response would be helpful for?
What is the time-urgency, if you don’t mind my asking? Other than Vassar’s ascension, the Summer of Code projects, and LessWrong, I wasn’t aware of anything going on at SingInst with any kind of schedule.
My first attempt at volunteering for Eliezer ended badly, for outside and personal reasons, and I haven’t seriously considered it since, mostly because I didn’t really understand the short-term goals of SingInst (or I didn’t agree with what I did understand of them).
Also, to be honest, the last thing that I found useful (in terms of my Singularitarian goals) to come out of it was CEV, which was quite a while ago now. Are there new projects, or private projects coming to public view? Why now?
Yes, that’s true. I think I was fighting a rearguard action here, trying to defend my hypothesis. I’ve changed my votes accordingly. Cheers to you and Yvain.
hi