I like Kahneman’s lecture here http://www.youtube.com/watch?v=dddFfRaBPqg as it sums up the distinction nicely (though it’s a bit long). Edit: not sure if a post on LW exists, though.
Khaled
Type 2 as an aggregation of Type 1 processes
You’d be taking $3 from the experimenters, but in return giving them data that represents your decision in the situation they are trying to simulate (which is a situation where only the two experimentees exist), though your point shows they didn’t manage to set it up very accurately.
I realize it will be difficult to ignore the fact you mentioned once you notice it; I’m just pointing out that not noticing it can be more advantageous for the experimenter and yourself (though not for the other experimentee) - maybe another case of strategic ignorance.
It might be of help to include elements of rationality within each course, in addition to a ToK course on its own. For example, in physics it might be useful to teach theories that turned out to be incorrect, and to analyze how and why each seemed correct at one point in time, and how, as more evidence was collected, it turned out to be incorrect.
Perhaps this is too difficult to include in current curricula, so it could be included in the ToK course as additional discussions? Kind of an application or case study of Bayes’ theorem (it could be prone to hindsight bias, so this has to be taken into consideration, so as not to make the errors in the theory seem too obvious).
In relation to connectionism, wouldn’t that be the expected behavior? Taking the example of Tide, wouldn’t we expect "ocean" and "moon" to give a head start to "Tide" when "favorite detergent" fires up all detergent names in the brain, whereas we wouldn’t expect "Tide", "favorite", and "why" to give a head start to "ocean" and "moon"?
Perhaps the time between eliciting "Tide" and asking for the reason for choosing it would be relevant (since asking for the reason while "ocean" and "moon" are still active in the brain gives them a better chance of being chosen as the reason)?
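To make the priming idea concrete, here is a rough spreading-activation sketch in Python; the network, node names, and weights are all invented for illustration, not taken from any actual model:

```python
# Toy spreading-activation sketch (illustrative only; nodes and weights are invented).
# "ocean" and "moon" pre-activate "Tide", so when "favorite detergent" fires up all
# detergent names, "Tide" starts with a head start; the reverse priming doesn't happen.

links = {
    "ocean": {"Tide": 0.6},
    "moon": {"Tide": 0.5},
    "favorite detergent": {"Tide": 0.3, "Gain": 0.3, "Persil": 0.3},
}

def activate(cues, links):
    """Sum the activation each cue spreads to its neighbours."""
    activation = {}
    for cue in cues:
        for node, weight in links.get(cue, {}).items():
            activation[node] = activation.get(node, 0.0) + weight
    return activation

print(activate(["ocean", "moon", "favorite detergent"], links))  # "Tide" well ahead
print(activate(["favorite detergent"], links))                   # all detergents level
```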
The idea of "virtual machines" mentioned in [Your Brain Is (Almost) Perfect](http://www.amazon.com/Your-Brain-Almost-Perfect-Decisions/dp/0452288843) is tempting me to think in the direction of "reading a manual will trigger the neurons involved in running the task, and the reinforcements will be implemented on those ‘virtual’ runs".
How reading a manual would trigger this virtual run can be answered the same way as how hearing "get me a glass of water" triggers the neurons to do so, and if I get a "thank you" it will be reinforced. In the same way, reading "to turn on the TV, click the red button on the remote" might trigger the neurons for turning on a TV and reinforce the behavior in accordance with the manual.
I know this is quite a wild guess, but perhaps someone can elaborate on it in a more accurate manner
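One (very loose) way to picture "reinforcement on virtual runs" is model-based updating in the style of Dyna: the manual supplies an imagined transition, and the same value update that real experience would get is applied to it. This is only an analogy; the states, actions, and numbers below are all invented for illustration:

```python
# Illustrative sketch only: "reading the manual" treated as a simulated (virtual)
# experience that receives the same reinforcement update a real experience would.

ACTIONS = ("press_red", "do_nothing")
q = {}  # action-value table

def update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One-step value update, applied to real or imagined experience alike."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward + gamma * best_next - q.get(key, 0.0))

# Virtual run from the manual: imagining that pressing the red button turns the TV on.
update("tv_off", "press_red", reward=1.0, next_state="tv_on")

# A later real run would use exactly the same update, strengthening the same values.
update("tv_off", "press_red", reward=1.0, next_state="tv_on")
print(q)
```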
Can we know the victory condition from just watching the game?
So if the blue-minimising robot were to stop after 3 months (the stop condition being measured by a timer), could we say that the robot’s goal is to stay "alive" for 3 months? I cannot see a necessary link between deducing goals and stopping conditions.
A "victory condition" is another thing, but from a decision tree, can you deduce who loses (for Connect Four, perhaps it is the one who connects four first that loses)?
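To make the Connect Four point concrete, here is a toy sketch (positions and payoffs invented): the tree of legal moves is the same either way, and only by scoring the terminal states, i.e. by seeing which outcomes the players steer toward, can you tell whether connecting four is the winning or the losing condition:

```python
# Toy sketch: the same tiny game tree scored two ways. Whether "connecting four"
# wins or loses is not visible in the move structure itself, only in which terminal
# states a player steers toward. All positions here are invented.

tree = {
    "root": ["A", "B"],
    "A": ["A-four"],   # this line ends with the mover connecting four
    "B": ["B-quiet"],  # this line ends without four in a row
}
made_four = {"A-four": True, "B-quiet": False}

def best_move(score):
    """Pick the root move with the best value for the mover under a given scoring."""
    def value(node):
        children = tree.get(node, [])
        if not children:
            return score(made_four[node])
        return max(value(c) for c in children)
    return max(tree["root"], key=value)

normal = lambda four: 1 if four else -1   # connecting four wins
misere = lambda four: -1 if four else 1   # connecting four loses

print(best_move(normal))  # heads for the four-in-a-row line
print(best_move(misere))  # avoids it
```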
But if whenever I eat dinner at 6 I sleep better than when eating dinner at 8, can I not say that I prefer dinner at 6 over dinner at 8? Which would be one step beyond saying I prefer sleeping well to not sleeping well.
I think we could have a better view if we consider many preferences in action. Taking your cryonics example, maybe I prefer to live (to a certain degree), prefer to conform, and prefer to procrastinate. In the burning-building situation, the living preference is acting more or less alone, while in the cryonics situation, the preferences interact somewhat like opposing forces, and then motion happens in the direction of the winning side. Maybe this is what makes preferences seem to vary?
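A rough way to picture the "opposing forces" idea, with all the weights invented purely for illustration:

```python
# Rough sketch: preferences as opposing forces; positive net force means acting
# (running out / signing up), negative means not acting. Weights are invented.

def net_force(forces):
    return sum(forces.values())

burning_building = {"prefer_to_live": +0.9}  # the living preference acts almost alone
cryonics = {
    "prefer_to_live": +0.9,
    "prefer_to_conform": -0.6,
    "prefer_to_procrastinate": -0.5,
}

print(net_force(burning_building))  # strongly positive: you run out of the building
print(net_force(cryonics))          # slightly negative: you never get around to signing up
```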
“Yvain, don’t tell tornadoes what to do”
When calculating the odds of winning/losing/the machine being defective, shouldn’t you add the odds of the Many Worlds hypothesis being true? Perhaps it wouldn’t affect the relative odds, but it might change the odds relative to "moderately rich/poor and won’t try Quantum Immortality".
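A minimal sketch of what I mean, treating "Many Worlds is true" as one more uncertain proposition to average over; every number here is a placeholder, not an estimate:

```python
# Minimal sketch: mix the odds of an outcome over whether the Many Worlds
# hypothesis holds. All probabilities below are placeholders.

def mix(p_hypothesis, p_outcome_if_true, p_outcome_if_false):
    """Overall probability of an outcome, averaged over the hypothesis."""
    return p_hypothesis * p_outcome_if_true + (1 - p_hypothesis) * p_outcome_if_false

p_mwi = 0.5  # credence in Many Worlds (placeholder)
# e.g. subjective odds of the "moderately poor and won't try Quantum Immortality"
# outcome under each hypothesis (placeholders)
print(mix(p_mwi, p_outcome_if_true=0.01, p_outcome_if_false=0.99))
```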
I think one useful thing is to try and find out why some explanations are more plausible than others (which seems standard; which explanation actually turns out to be true then won’t affect the guess that much).
When asked a question by an experimenter, I imagine myself trying to give a somewhat quick answer (rather than asking the experimenter to repeat the experiment isolating some variables so that I can answer accurately). I imagine my mind going through reasons until it hits a reason that sounds OK, i.e. one that would convince me if I heard it from someone else, and picking that.
Many of those studies don’t seem to treat "time to answer" as a variable. What if the subjects were asked to think it over for 30 minutes before answering? I am not suggesting they would get the right answer, but perhaps a different answer, since different brain parts may be involved in the decision then.
One thing that amazes me is the change over time of this desire/goal divide. Personally, with things like regular exercise, I find that in times of planning my brain seems to be in complete coherence: admitting my faults for not exercising and putting forth a plan that seems agreeable to both the conscious and the unconscious. Once the time for exercise comes, the tricks start playing.
Maybe the moment of coherence could somehow be captured to be used in the moment of tricks? Also, would those moments be useful for avoiding unconscious signalling?
Could that be of any value: trends
Maybe using social network websites could generate a better turnover in the short term
Interesting.
Where does this fit with the idea that voluntary behavior can become involuntary over time? Like driving, where you start by fully (consciously) thinking through each move, and in time it becomes unconscious (not sure if we can call it involuntary). This was discussed a bit by Schrödinger in What Is Life?
Will this now-unconscious action be susceptible to reinforcement? If you find you are having lots of accidents, maybe driving will jump back to being voluntary?
Your pleasant thoughts were about "being able to speak Swahili" rather than "learning Swahili". Your thoughts were about the joy of the reward, which I guess is not reinforced in total independence from actions (imagine trying to learn Swahili without the rewarded thoughts; you’d probably not make it through the first few classes), but is certainly not identical to them.
What would happen if you thought about the effort of actually learning? Would it get negatively reinforced in the same way as actually making the effort?
After the computer program above calculates the amplitudes (the same every time we run the program), can we incorporate additional steps into the program to simulate our magical measurement tool (the detector)?
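A minimal sketch of what those additional steps might look like, assuming the program’s output is just a table of amplitudes: square the magnitudes to get probabilities (the Born rule) and sample one outcome, which is exactly the run-to-run randomness the amplitude calculation alone doesn’t have. The amplitudes below are placeholders:

```python
import random

# Sketch of bolting a "detector" onto a program that only computes amplitudes.
amplitudes = {"detector 1": complex(0.6, 0.0), "detector 2": complex(0.0, 0.8)}

# The amplitude calculation is the same on every run...
probabilities = {k: abs(a) ** 2 for k, a in amplitudes.items()}

# ...but the simulated measurement picks a single outcome with Born-rule weights,
# so this step is the one that varies from run to run.
outcome = random.choices(list(probabilities), weights=list(probabilities.values()))[0]
print(probabilities, "->", outcome)
```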
Or to be mentioned and praised by people; therefore, it is also for himself.
Isn’t this like saying I won’t pay for groceries because all the grocer wanted was to get paid?
Anyway, my counter-argument would be "I have no choice in giving credit/blame either". Of course, the reply could well be "and I have no choice in debating the idea", etc., which, I confess, can lead to some wasted time.
I think the distinction between decisions (as an end result) and other brain processes can be useful in fields like behavioral economics in the short term, as it reaches results quite fast. But the complexity of decisions makes me revisit the examples of unifications in physics. Perhaps if all decisions (not only the final output) are treated as the same phenomenon, aspects like framing can be understood as altering sub-decisions through their constant value functions, leading to a different decision later in time (which just happens to be the output decision). The idea is that understanding the building blocks of decisions (on a level smaller than final outputs and bigger than single neuron firings) might provide a better model for decision making.
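A toy sketch of what "framing alters a sub-decision through its constant value function" could look like; the loss-averse value function and the numbers are invented for illustration, not a claim about the actual building blocks:

```python
# Toy sketch: a final decision built on a sub-decision with a constant value
# function; framing changes the sub-decision's inputs, not the functions themselves.

def perceived_value(gain, loss, loss_aversion=2.0):
    """Fixed value function of a sub-decision (loss-averse, as a placeholder)."""
    return gain - loss_aversion * loss

def decide(framing):
    """Final output decision: accept if the sub-decision reports positive value."""
    return "accept" if perceived_value(**framing) > 0 else "reject"

# The same underlying prospect, framed two ways, yields different output decisions.
print(decide({"gain": 100, "loss": 60}))  # framed as keeping 100 but giving up 60
print(decide({"gain": 40, "loss": 0}))    # framed as a pure gain of 40
```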
I can’t think of another way to reason about it: does our brain dictate our goal, or does it receive a goal from somewhere and make an effort to execute it accurately? I’d go with the first option, which to me means that whatever our brain (code) is built to do is our goal.
The complication in the case of humans might be the fact that we have more than one competing goal. It is as if this robot has a multi-tasking operating system, with one process trying to kill blue objects and another trying to build a pyramid out of plastic bottles. Normally they can co-exist somehow, with some switching between processes or with one process simply "not caring" about doing some activity at the current instant.
It gets ugly when the robot finds a few blue bottles. Then the robot becomes "irrational", with one process destroying what the other is trying to do. This is simply what happens when you are on a healthy diet and see a slice of chocolate cake: your processes are doing their jobs, but they are competing for resources. Who gets to move your arms?
Let’s then imagine that we have in our brains a controlling (operating) system that gets to decide which process to kill when they are in conflict. Will this operating system have a right and a wrong decision? Or will whatever it does be the right thing according to its code, since otherwise it wouldn’t have done it?
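A toy sketch of that controlling system, with two goal processes bidding for the one motor resource; the bids are invented for illustration:

```python
# Toy sketch of the "operating system" analogy: two goal processes bid for the
# single motor resource and the controller grants it to the strongest bid.

def arbitrate(bids):
    """Controller: give the arms to the process with the highest current bid."""
    return max(bids, key=bids.get)

bids = {
    "stick_to_diet": 0.7,       # process 1: stay on the healthy diet
    "eat_chocolate_cake": 0.8,  # process 2: the cake is right there
}
print(arbitrate(bids))  # whichever process wins gets to move your arms
```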