Late arriving comment here! :-)
I started working with this as a rubric for analyzing tech companies… then (trying to number and rename things in a useful way, so that the diagram’s contents could be quickly cited in writing) I noticed that the node positions at the bottom did not seem to have been optimized to avoid crossed lines or to make for easy reading.
Also, “Creeping Failure” and “Inconspicuous Failure” overlap strongly but are placed far from each other, and “ML Scales to AGI” (at the top right) has no arrow to “Many Powerful AIs” (at the lower left), which it seems like it obviously should have?
Another quirk: if NOT-“Agentive AGI” (in the middle near the top), then maybe “Comprehensive AI Services” (lower right) happens instead, but then the only arrow from there is a positive one to its next-door neighbor “Context For AGI More Secure”. However, if you think about it, humans having more really good tools seems like it would be an obviously useful input to “Use Feedback Loops To Correct Course As We Go” in the lower left, making that work better? But again I find no such arrow.
A hypothesis that explains most of this is that your tools didn’t allow fast iteration or easy validity checking, and/or that you didn’t do a first draft in a spreadsheet and then convert it to this format for display purposes.
I started using an actual belief network tool to regenerate things, preparatory to assigning numbers and then letting “calculemus” determine my beliefs… and then noticed a practice-level “smell” on my part, related to refactoring someone’s old work without talking to them first.
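For concreteness, here is a minimal sketch of the kind of regeneration I mean, in Python with the pgmpy library. The node names come from the diagram, but everything else is my assumption: I treat all nodes as binary, I include the two arrows I argued above are missing, and every probability is a placeholder purely for illustration, not a claim about the real model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# A tiny fragment of the diagram, including the two arrows argued
# for above: MLScalesToAGI -> ManyPowerfulAIs, CAIS -> FeedbackLoops.
model = BayesianNetwork([
    ("MLScalesToAGI", "ManyPowerfulAIs"),
    ("AgentiveAGI", "CAIS"),        # NOT-agentive should make CAIS more likely
    ("CAIS", "ContextMoreSecure"),
    ("CAIS", "FeedbackLoops"),      # good tools help course correction
])

# Placeholder numbers; the point of "calculemus" is to argue about
# these cells instead of about prose. Row 0 = False, row 1 = True;
# columns range over the parent's states (False, True).
cpds = [
    TabularCPD("MLScalesToAGI", 2, [[0.5], [0.5]]),
    TabularCPD("AgentiveAGI", 2, [[0.5], [0.5]]),
    TabularCPD("ManyPowerfulAIs", 2,
               [[0.9, 0.3],
                [0.1, 0.7]],
               evidence=["MLScalesToAGI"], evidence_card=[2]),
    TabularCPD("CAIS", 2,
               [[0.2, 0.8],         # CAIS likely iff AGI is NOT agentive
                [0.8, 0.2]],
               evidence=["AgentiveAGI"], evidence_card=[2]),
    TabularCPD("ContextMoreSecure", 2,
               [[0.7, 0.4], [0.3, 0.6]],
               evidence=["CAIS"], evidence_card=[2]),
    TabularCPD("FeedbackLoops", 2,
               [[0.6, 0.3], [0.4, 0.7]],
               evidence=["CAIS"], evidence_card=[2]),
]
model.add_cpds(*cpds)
assert model.check_model()  # catches the validity errors hand-drawing misses

# Calculemus: how does ruling out agentive AGI move the probability
# that feedback-loop course correction works?
posterior = VariableElimination(model).query(
    ["FeedbackLoops"], evidence={"AgentiveAGI": 0})
print(posterior)
```

The `check_model()` call is the part I most wish the original workflow had: it mechanically flags inconsistent or missing structure of exactly the kind I noticed by eye above.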
Is this graph from August 2019 still relevant to anyone else’s live models or active plans in October of 2021?
Also, if this document still connects to a living practice, is there a most-recently-updated version that would be a better jumping-off point for refinement?