I don’t see how this is a problem. Do you think it is a problem? If so, why specifically, and do you have any ideas for a solution?
Duncan
To be fair, it’s really hard to figure out WTF is going on when humans are involved. Their reasoning is the result of multiple motivations and a vast array of potential reasoning errors. If you don’t believe me, try the following board games with your friends: Avalon, Coup, Sheriff of Nottingham, Battlestar Galactica, or any other game that involves secrets and lying.
Edited the phrasing to make it clearer.
Your phrasing also makes it look like a plausible mistake for someone in a new situation with little time to consider things.
A story for the masses is necessary, and this doesn’t appear to be a bad stab at one. Harry can always bring trusted others on board by telling them what actually happened. He might have already done that, and this could be their plan. How much time did Harry have to do things before needing to show up, anyhow (40 minutes? 50?)? Also, Prof. McGonagall is terrible at faking anything, so telling her the truth beforehand seems like a bad idea.
Lucius is both dead and warm. I think he’s dead dead, unless Eliezer has someone like Harry do something in a very narrow time window. Dumbledore is a much easier problem to solve (story-wise) and can be solved at the same time as the Atlantis story thread, if that is what the author plans.
If you want to make the scenario more realistic, then put more time pressure on Voldemort or put him under more cognitive stress some other way. The hard part is for Voldemort to solve this problem in a short time span and yet NOT come up with a solution that foils Harry. The reason experienced soldiers/gamers with little to no intelligence still win against highly intelligent combatants with no experience is that TIME matters when you’re limited to a single human’s processing power. In virtually every combat situation, one is forced to make decisions faster than one can search the solution space. Only experience compensates for this deficit to any measurable degree. In this situation there are several aspects that Voldemort has no experience with. If he must spend his cognitive resources considering those aspects and cannot draw on experience, mistakes become much more likely.
I begin to wonder exactly how the story will be wrapped up. I had thought the source of magic would be unlocked or the Deathly Hallows riddle would be tied up. However, I wonder whether there are enough chapters left to do these things justice. I also wonder whether Eliezer will do anything like what was done for Worm, where the author invited suggestions for epilogues for specific characters.
I see your point, but Voldemort hasn’t encountered the AI-box problem, has he? Further, I don’t think Voldemort has encountered a problem where he’s arguing with someone or something he knows is far smarter than himself. He still believes Harry isn’t as smart as he is yet.
You should look at Reddit to coordinate your actions with others. One idea I like is to collect all reasonable proposals in one place and minimize duplication. Organization thread here: http://www.reddit.com/r/HPMOR/comments/2xiabn/spoilers_ch_113_planning_thread/
I agree that this is a far “easier task than a standard AI box experiment”. I attacked it from a different angle, though (HarryPrime can easily and honestly convince Voldemort that he is doomed unless HarryPrime helps him):
Quirrelmort would be disgusted with us if we refused to consider ‘cheating’ and would certainly kill us for refusing to ‘cheat’ if that was likely to be extremely helpful.
“Cheating is technique, the Defense Professor had once lectured them. Or rather, cheating is what the losers call technique, and will be worth extra Quirrell points when executed successfully.”
Actually, this isn’t anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is ‘good’, 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without all the ways in which that could happen with Harry making the result ‘good’.
Why hasn’t Voldemort suspended Harry in the air? He floated himself into the air as a precaution against proximity, line-of-sight problems, and probably magics that require a solid substance to transmit through. If Harry were suspended in the air, his partial transfiguration options would be vastly reduced.
Why hasn’t Voldemort rendered Harry effectively blind, deaf, etc.? Harry is gaining far more information in real time than Voldemort’s purposes require.
Also, it seems prudent not to let pieces of Harry get scattered all over the place by shooting him, smashing him, etc., without some form of containment. I don’t know how some part of Harry could cause problems, but it seems prudent to eliminate every part of him with Fiendfyre (blood, guts, and all) if that is what Voldemort is aiming for.
Can Fawkes be summoned to extract Harry? If it helps, Harry can decide to go to Azkaban.
Harry should be aware by now that reality is basically doomed to repeat the Atlantis mistake (either via AGI or via whatever Atlantis unlocked). With the vow that Voldemort made him take, he can honestly say that he is the best bet to avoid that fate. That is, Voldemort now needs Harry (and Hermione) to save reality. This seems like the most straightforward way to get out of the current annoyance.
Some partial transfiguration options I haven’t seen mentioned:
Colorless/odorless neurotoxins (Harry should have researched these, since he is in ‘serious mode’ now that Hermione has died), delivered via the ground directly into each Death Eater and/or into the air in specific areas.
Nanobots. I can’t recall at this time whether this would work or whether Harry needs to have the design already in his head. It is possible Atlantis tech already utilizes a vast array of these.
Transfiguration may allow one to exploit quantum weirdness. Many things that can happen at very small scales could also happen at large scales if everything were lined up just so (which never happens on its own, but which transfiguration may make possible).
I like this exercise. It is useful in at least two ways.
It helps me take a critical look at my currently cherished views. Here’s one: work hard now and save for retirement. It is still cherished, but I already know of several lines of attack that might work if I think them through.
It helps me take the time to figure out how I’d hack myself.
It might also be interesting to come up with a cherished group view and try to take that apart (e.g., cryonics after death is a good idea; perhaps start with the possibility that the future is likely to be hostile to you, such as one containing an unfriendly AI).
Anecdotal evidence amongst people I’ve questioned falls into two main categories. The first is a failure to think the problem through formally; many simply focus on the fact that whatever is in the box remains in the box. The second is some variation of refusing to accept the premise of an accurate prediction of their choice. This is actually counterintuitive to most people, and for others it is very hard to even casually contemplate a reality in which they can be perfectly predicted (and therefore, in their minds, have no ‘free will / soul’). Many conversations simply devolve into ‘Omega can’t actually make such an accurate prediction about my choice’ or ‘I’d normally two-box, so I’m not getting my million anyhow’.
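For the first failure mode, the formal comparison people skip is just an expected-value calculation. Here is a minimal sketch; the 99% predictor accuracy and the function name are my own illustrative assumptions, not anything from the conversations above.

```python
# Minimal sketch of the expected-value arithmetic in Newcomb's problem.
# The 99% predictor accuracy is an assumed figure, used for illustration only.

def expected_payoff(one_box: bool, predictor_accuracy: float = 0.99) -> float:
    """Expected winnings for a strategy, given how often Omega predicts it correctly."""
    big, small = 1_000_000, 1_000
    if one_box:
        # Box B holds the $1M exactly when Omega predicted one-boxing.
        return predictor_accuracy * big
    # A two-boxer always keeps the $1k and gets the $1M only when Omega mispredicts.
    return small + (1 - predictor_accuracy) * big

print(expected_payoff(one_box=True))   # ~990,000
print(expected_payoff(one_box=False))  # ~11,000
```

With these payoffs, one-boxing wins in expectation whenever the predictor is right more than about 50.05% of the time, which is exactly the step the ‘whatever is in the box remains in the box’ framing never gets to.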
Game of Thrones and the new Battlestar Galactica appear to me to have characters that are either shallow and/or caught in evil-versus-evil conflicts. Yet they are very popular and, as far as I can tell, character-driven. I was wondering what that means. One thought I had was that many people are interested in relationship conflicts, and that the characters don’t need to be deep; they just need to reflect, across the main cast, the personalities of the audience (as messed up as the audience might be).
It is not so much that they haven’t given an argument or stated their position. It is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP’s conversation, this person has decided to stop discussing the matter and has gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible, and assuming the discussion hasn’t reached a dead end, which is often the case). It is very human and tempting to rush to ‘Are you crazy?!! You should __.’ and skip all the hard thinking.
Given the ‘Sorry if it offends you’ and the ‘Like… no’, I think your translation is in error. When a person says either of those things, they are saying (a) ‘I no longer care about keeping this discussion civil/cordial’ and (b) ‘I am firmly behind (insert their position here)’. What you have written is much more civil and makes no demands on the other party, as opposed to what they actually said: ”… you should ….”
That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn’t good either.
Yes, yes there is :). http://boardgamegeek.com/boardgame/37111/battlestar-galactica