I like this prompt, and it so happens I have a proper response that fits.
I’ve seen people talk of noticing failure, but since it was thankfully a gentle one, they managed to make something of it. Sometimes people speak or write as if there may be some underlying method that can be mined away from the luck.
While planning actions, is it a good heuristic to attempt them such that a fall would not break your legs, so to speak?
Well, look at that: you’ve helped me dissolve a question into a form that has an obvious answer. This is both nice (less clutter) and partly the reason I was asking for a dump. I’m trying to stumble across gaps in my understanding, not necessarily tangles (although, again, thank you).
I suppose I expect to de-tangle my knowledge of this subject as I review anything possibly relevant. I just thought to ask here in conjunction with said review.
I’m trying to be as comprehensive as possible, which means I should ask the obvious first. Is the question now posed in the main post a respectable start?
The post as it now stands needs some serious proofreading.
What sorts of questions could someone ask to learn the most about failure?
LessWrong is likely to focus on cognitive biases, and this is a good place to start. I assume that you have already read some on the subject, but if not, we have a lot on site, and there are some good books—for example, The Invisible Gorilla and Mistakes Were Made. Everyone will have a different list of recommended reading, but I don’t know if that is the sort of info dump you are looking for.
I think that your question may be too general. Being more specific will almost surely give you more useful responses.
I suspect that your best bet would be to notice specific sub-optimal outcomes in your life, and then ask knowledgeable people (which may include us) for thoughts and information. If you have access to a trustworthy person who will give honest and detailed feedback, you might ask them to observe you in completing some process (or better, multiple processes) and take notes on any thoughts they have regarding your actions—things you do differently, things you do wrong, things that you do slower than most people, etc. They will probably notice some things that you do not. They may not know how to help you change, but that doesn’t make their information any less valuable.
Thank you for the feedback. This was a surprisingly useful line of interaction.
The first thing it did was remind me that inferential gaps take caution, at the very least, to cross. Another way I failed was in not carrying my empathetic models of people far enough; I knew people would realize what I was after was large and vague, but I then trailed off into assuming people would actually want to rattle off in some randomly chosen direction available to them. Taken one step further, I can feel how annoying such a prompt is.
And then I recalled something about AI safety, something along the lines of not being able to specify all the ways we don’t want an AI (genie?) to act; the nature of value or goal specification is too exclusive to approach from that direction efficiently. Reflection to see whether I can be coherent about this will have to happen later.
As of this moment (2 a.m.) it is unattractive to work out whether I am on to something or not. Thank you once more for the feedback. It feels like I’ve gained valuable responses.