What, then, is a good method of establishing what you think the future will look like?
I think I’ve seen the following method described by other people on Less Wrong, but I can’t place a specific source or name. If anyone can help me with attribution or a link to more details, I would appreciate it.
Generate a list of each possibility whose consequences seem worth tracking (i.e., it would affect whether or not you felt future changes would be superficial), and then also track the probability you assign to each of those steps.
Example: Let’s say you think there is a 20% chance that technological growth will continue exponentially into a singularity and an 80% chance it will proceed at a slower pace. You can then say, “Well, if technology proceeds at a slower pace, then I think there is an 80% chance that enough jobs will be automated to substantially reshape society, and a 20% chance that the portion of jobs automated will be superficial. And if technology does proceed into a singularity, I think there is a 20% chance of it being a Friendly Singularity and an 80% chance of it being an Unfriendly Singularity.”
That means the future you would most expect to see is one where technology proceeds at a slower pace but enough jobs are automated to substantially reshape society (64%); an Unfriendly Singularity and superficial automation are about equally likely but much less expected (16% each); and a Friendly Singularity is the least likely but not impossible (4%).
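As a sanity check on the arithmetic, each outcome’s probability is just the product of the probabilities along its branch. Here is a minimal Python sketch using the arbitrary numbers from the example (the variable names are mine, purely for illustration):

```python
# Top-level branch: the arbitrary numbers from the example above.
p_slower, p_singularity = 0.8, 0.2

# Second-level branches, conditional on the first.
p_reshape_given_slower = 0.8       # automation substantially reshapes society
p_superficial_given_slower = 0.2   # automation stays superficial
p_friendly_given_sing = 0.2        # Friendly Singularity
p_unfriendly_given_sing = 0.8      # Unfriendly Singularity

# Each outcome's probability is the product along its branch.
print(f"Reshaping automation:   {p_slower * p_reshape_given_slower:.0%}")        # 64%
print(f"Superficial automation: {p_slower * p_superficial_given_slower:.0%}")    # 16%
print(f"Unfriendly Singularity: {p_singularity * p_unfriendly_given_sing:.0%}")  # 16%
print(f"Friendly Singularity:   {p_singularity * p_friendly_given_sing:.0%}")    # 4%
```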
So you establish a rough first pass of a net like that, with a number of different nodes (as opposed to just three nodes and only two levels deep).
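One way to write down such a net is as a nested dictionary, where each node stores its conditional probability and its children, and a leaf’s overall probability is the product along its path. This is only a sketch of the idea under that representation, with hypothetical names, not an established tool:

```python
# Hypothetical representation: each entry maps a node name to
# (conditional probability, children). Sibling probabilities sum to 1.
tree = {
    "slower pace": (0.8, {
        "reshaping automation": (0.8, {}),
        "superficial automation": (0.2, {}),
    }),
    "singularity": (0.2, {
        "friendly": (0.2, {}),
        "unfriendly": (0.8, {}),
    }),
}

def leaf_probabilities(nodes, prefix="", p=1.0):
    """Yield (path, probability) for every leaf, multiplying down each branch."""
    for name, (prob, children) in nodes.items():
        path = f"{prefix} -> {name}" if prefix else name
        if children:
            yield from leaf_probabilities(children, path, p * prob)
        else:
            yield path, p * prob

for path, prob in sorted(leaf_probabilities(tree), key=lambda pair: -pair[1]):
    print(f"{prob:.0%}  {path}")
```

Run on the example, this prints the same four futures in order: 64%, 16%, 16%, and 4%.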
There would then be three ways you might be wrong, and you want to try to update accordingly (see the sketch after this list):
You’re convinced an existing node won’t matter, so you remove it. (Example: you read a paper that persuades you that whether or not people will have middle-class jobs won’t matter in the long run.)
You’re convinced a new node will matter that you hadn’t accounted for, so you add it. (Example: you read a paper that persuades you that whether or not robots and algorithms will be allowed to autonomously make kill decisions will matter in the long run.)
You find out about something that changes the probability you assign to an existing node, so you adjust it. (Example: MIRI publishes a paper that causes you to update the likelihood of a Friendly Singularity to a different percentage.)
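Continuing the nested-dictionary sketch from above, these three kinds of update map onto three small operations on a node’s children. The helper functions below are hypothetical illustrations (one simple renormalization scheme among many, with no handling of edge cases), not a real library:

```python
def remove_node(children, name):
    """Update 1: drop a node you no longer think matters,
    renormalizing its siblings so they still sum to 1."""
    children.pop(name)
    total = sum(p for p, _ in children.values())
    for key, (p, sub) in children.items():
        children[key] = (p / total, sub)

def add_node(children, name, prob, subtree=None):
    """Update 2: add a newly relevant node with probability `prob`,
    scaling the existing siblings down to make room for it."""
    for key, (p, sub) in children.items():
        children[key] = (p * (1 - prob), sub)
    children[name] = (prob, subtree if subtree is not None else {})

def update_probability(children, name, new_prob):
    """Update 3: move one node's probability to `new_prob`,
    renormalizing its siblings proportionally."""
    old_prob, sub = children[name]
    scale = (1 - new_prob) / (1 - old_prob)
    for key, (p, s) in children.items():
        if key != name:
            children[key] = (p * scale, s)
    children[name] = (new_prob, sub)

# Example: a paper raises your Friendly Singularity estimate from 20% to 35%
# (using the `tree` from the earlier sketch; tree["singularity"][1] is that
# node's children).
update_probability(tree["singularity"][1], "friendly", 0.35)
```

After that call, “friendly” holds 35%, “unfriendly” renormalizes to 65%, and the leaf probabilities shift accordingly.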
This helps you separate out impact and likelihood, and it also keeps in mind that certain things have prerequisites, all of which is important for forecasting.
Thank you to whoever explained this initially, and I apologize that I can’t remember the concept’s name. Hopefully I did not express anything incorrectly.
(Note: the specific numbers above are mostly arbitrary and I mainly selected them for easy multiplication.)