I have a different way to look at this question. (1) Introspection is bunk. (2) If someone asks us, or we ask ourselves, why we did something, the answer is a guess, because we have no conscious access to the actual causes of our thoughts and actions. (3) We vary in how good we are at guessing and in how honestly we judge ourselves, and so some people appear to be clearly rationalizing and others appear less so. (4) Most people are not actually aware that introspection is not direct knowledge but guesswork, and so they do not recognize their guesses as guesses, though they may notice their self-deceptions as deceptions. (5) We do not need to know the reasons for our actions unless we judge them as very bad and to be avoided, or very good and to be encouraged. (6) The appropriate thing in this case is not to ask ourselves why, but to ask ourselves how to change the likelihood of a repeat, up or down. Although we have only guesses about past actions, we can arrange to have some control over future ones. (7) The more we know about ourselves, others, our situations, science and so on, the better we can answer the how questions.
Good, upvoted—your hypothesis is interesting. I tend to think of type 1 as the cognition/pattern-recognition/thinking operation and type 2 as a way of sequentially combining type 1 sub-results. The sequential operation involves working memory and therefore passes through consciousness and is slowed down. As soon as a group of type 1 operations fine-tune themselves to the point of not requiring working memory, they no longer generate type 2 operations.
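A toy sketch of how I picture it, purely illustrative and not anyone's actual model: type-2 processing as a sequential chain of type-1 sub-results passing through a small working-memory buffer (the slow, conscious route), with the buffer skipped once every step in the chain is well practised. The class names, buffer capacity, and practice threshold are all assumptions made up for the example.

```python
from collections import deque

# Assumed, illustrative constants (not from the comment).
WORKING_MEMORY_CAPACITY = 4    # small serial buffer = the conscious bottleneck
PRACTICE_THRESHOLD = 100       # repetitions before a chain counts as automated

class TypeOneOp:
    """A fast pattern-recognition step; runs without conscious bookkeeping."""
    def __init__(self, name):
        self.name = name
        self.practice_count = 0

    def run(self, cue):
        self.practice_count += 1
        return f"{self.name}({cue})"

def type_two_chain(ops, cue):
    """Combine type-1 sub-results sequentially.

    While any step is under-practised, each intermediate result has to sit in
    a capacity-limited working-memory buffer (slow, conscious). Once every
    step is well practised, the chain runs straight through like a single
    type-1 operation and no longer generates a type-2 episode.
    """
    automated = all(op.practice_count >= PRACTICE_THRESHOLD for op in ops)
    result = cue
    memory = deque(maxlen=WORKING_MEMORY_CAPACITY)
    for op in ops:
        result = op.run(result)
        if not automated:
            memory.append(result)   # the serial, capacity-limited route
    return result, ("automatic" if automated else "deliberate")

if __name__ == "__main__":
    steps = [TypeOneOp("recognise"), TypeOneOp("compare"), TypeOneOp("decide")]
    print(type_two_chain(steps, "stimulus"))
```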
SaidAchmiz asked for an opinion and I gave an honest one. I may be wrong in the view of some other people, but it is still my honest opinion. It is not an overgeneralization, as I believe that in all cases, in all situations, and at all times the descriptive approach is preferable to the prescriptive one.
In all cases 1-6, the descriptive is scientific, productive, and interesting, while the prescriptive is without evidence, harmful, and boring.
OK, I overreacted. Several others have said that it is acceptable in Main—so be it. I guess it does not bother others as much as it bothers me, and I won’t comment on corrections in future.
Doesn’t anyone think that it is very rude to comment on someone else’s language unless it is not understandable—just plain RUDE? If someone wants help with language they can ask. Language is a tool, not a weapon.
Voting up and waiting for your next installment. (dtz weird text still there)
Why not adopt the convention used in many types of writing? The first time the term is used in a text, it is written in full and its abbreviation or acronym is put after it in brackets. After that the short form is used.
Thank you for the link—very illuminating.
I would like to see some enlargement on the concept of definition. It is usually treated as a simple concept: A means B or C or D, which one depending on Z. But when we try to pin down C, for instance, we find that it has a lot of baggage—emotional, framing, stylistic and so on. So do B and D. And in no case is the baggage of any of them the same as the baggage of A. None of these (defining terms, tabooing words, or coining new words) really works all that well in the real world, although they of course help. Do you see a way around this fuzziness?
Another ‘morally good’ definition for your list is ‘that which will not make the doer feel guilty or ashamed in the future’. It is no better than the others but quite different.
I hope there are soon some comments on this question. What do AI people think of the analysis—Marr’s and nhamann’s? Is the history accurate? Is there a reason for ignoring it?
I have been pointed at those pieces before. I read them originally and I have re-read them not long ago. Nothing in them changes my conviction (1) that it is dangerous to communication to use the term ‘free will’ in any sense other than freedom from causality, and (2) that I do not accept a non-material brain/mind or a non-causal thought process. I also believe (3) that using the phrase ‘determinism’ in any sense other than the ability to predict is dangerous to communication, and (4) that we cannot predict in any effective way the processes of our own brain/minds. Therefore free will vs determinism is not a productive argument. Both concepts are flawed. In the end, we make decisions and we are (usually) responsible for them in a moral-ethical-legal sense. And those decisions are the result of neither free will nor determinism. You can believe in magical free will or redefine the phrase to avoid the magic—but I decline to do either.
Right on. Free will is nonsense but morality is important. I see moral questions as questions that do not have a clear-cut answer that can be found by consulting some rules (religious or not). We have to figure out what is the right thing to do. And we will be judged by how well we do it.
Tordmor has commented on my attitude—sorry, I did not mean to sound so put out. The reason for the ‘near future’ was that the discussion was about ‘upload’, and so I assumed we were talking about our lifetimes, which in the context seemed the near future (about the next 50 years). Making an approximate emulation of some simple invertebrate brain is certainly on the cards. But an accurate emulation of a particular person’s brain is in a different ballpark entirely.
I never know exactly what people mean when they say emulation or simulation or model. How closely is the idea to mimic the way the brain does something? To ‘upload’ someone, the receiving computer would need some sort of mapping to the physical brain of that person. This is a very tall order.
Thanks for the link to the Roadmap, which I will be reading.
Do you honestly believe that an artificial brain can be built purely in software in the near future? And if it could, how would it be accurate enough to be some particular person’s brain rather than a generic one? And if it was someone’s brain, could the world afford to do this for more than one or two people at a time? I am not at all convinced of ‘uploads’.
I am a bit surprised if this is surprising—is it not obvious that electric fields will affect neuron activity? Whether a neuron fires depends on the voltage across its membrane (at a point in a particular region at the base of the axon and, it seems, down the axon). The electric field around the neuron will affect this voltage difference, as in good old-fashioned electrical theory. This is important for synchrony in firing (as in brain waves), and synchrony is in turn important for marking synapses between neurons that have fired simultaneously for chemical changes, and so on. Fields are not to be thought of as a little side effect. What is more interesting is what the fields do to glial cells and their communication, which is (I believe) carried out with calcium ions but is very much affected by electric fields. The synapses live in an environment created by the surrounding glia. The brain cannot be reduced to a bunch of on-off switches.
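For intuition only, here is a minimal sketch of the first point: a leaky integrate-and-fire neuron in which an external field adds a small offset to the effective membrane voltage, so the same input drive crosses threshold earlier or later. All parameter values are illustrative assumptions, not measurements, and this is not anyone's published model.

```python
# Toy leaky integrate-and-fire neuron with a field-induced voltage offset.
# All constants below are assumed for illustration only.

def simulate_lif(field_mV, t_max_ms=200.0, dt_ms=0.1):
    """Return spike times (ms) for a constant input drive plus a constant
    external-field offset added to the membrane voltage at threshold."""
    tau_m = 20.0      # membrane time constant (ms), assumed
    v_rest = -70.0    # resting potential (mV), assumed
    v_thresh = -54.0  # firing threshold (mV), assumed
    v_reset = -70.0   # post-spike reset (mV), assumed
    drive = 18.0      # constant input drive (mV equivalent), assumed

    v = v_rest
    spikes = []
    for step in range(int(t_max_ms / dt_ms)):
        t = step * dt_ms
        # Leaky integration toward (v_rest + drive).
        v += (-(v - v_rest) + drive) / tau_m * dt_ms
        # The surrounding field shifts the effective voltage seen at the
        # spike-initiation zone; compare the shifted voltage to threshold.
        if v + field_mV >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

if __name__ == "__main__":
    for field in (-1.0, 0.0, +1.0):  # millivolt-scale offsets, illustrative
        print(f"field offset {field:+.1f} mV -> "
              f"{len(simulate_lif(field))} spikes in 200 ms")
```

Even a millivolt-scale offset changes the firing rate in this toy model, which is the sense in which fields can matter for spike timing and synchrony.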
I figure that when we have built an artificial kidney that works as well as a kidney, an artificial heart that works as well as a heart, and an artificial pancreas that works as well as a pancreas—then we will be in a position to judge whether an artificial brain is a reasonable goal.
If we have figured out how to compute the weather accurately some weeks into the future—then we might know whether we can compute a much more complex system. If we had the foggiest idea of how the brain actually works—then we might know what level of approximation is good enough.
Don’t hold your breath for a personal upload.
I seem to agree with your original list. I would phrase the free-will one differently—both free will and determinism are useless concepts, because we have no mechanism for contra-causality other than spirit-magic, and we cannot predict our decisions even if they are causally produced.
This is not a surprise. Who wants to be a philosopher and who wants to be a scientist? Who likes to discuss the questions and who likes to discuss the answers? Who values consensus?
Interesting that this has no comments yet. I do not know why this subject is treated as ‘political’ or ‘controversial’. This group should not be anti-science or ‘head in the sand’, but it seems to be.