Is the question now posed in the main post a respectable start?
The post as it now stands needs some serious proofreading.
What sorts of questions could someone ask to learn the most about failure?
LessWrong is likely to focus on cognitive biases, and this is a good place to start. I assume that you have already read some on the subject, but if not, we have a lot of material on the site, and there are some good books (for example, The Invisible Gorilla and Mistakes Were Made). Everyone will have a different list of recommended reading, but I don't know if that is the sort of info dump you are looking for.
I think that your question may be too general. Being more specific will almost surely give you more useful responses.
I suspect that your best bet would be to notice specific sub-optimal outcomes in your life, and then ask knowledgeable people (which may include us) for thoughts and information. If you have access to a trustworthy person who will give honest and detailed feedback, you might ask them to observe you while you complete some process (or better, several processes) and take notes on anything they notice about your actions: things you do differently, things you do wrong, things you do more slowly than most people, and so on. They will probably notice some things that you do not. They may not know how to help you change, but that doesn't make their information any less valuable.
Thank you for the feedback. This was a surprisingly useful line of interaction.
The first thing it did was remind me that inferential gaps take caution, at the very least, to cross. Another way I failed was in not running my empathic models of other people far enough: I knew people would realize that what I was after was large and vague, but I then trailed off into assuming they would actually want to rattle off answers in some randomly chosen direction. Take that one step further and I can feel how annoying such a prompt is.
And then I recalled something about AI safety, something along the lines of not being able to specify all the ways we don't want an AI (or genie) to act; the space of unwanted behaviors is too large for value or goal specification to work efficiently from that direction. Reflection on whether I can be coherent about this will have to happen later.
As of this moment (2 am) it is unappealing to check whether I am on to something or not. Thank you once more for the feedback; it feels like I have gained valuable responses.