This post reminds me of thinking from the 1950s, when people taking inspiration from Wiener’s work on cybernetics tried to operationalize “purposeful behavior” in terms of robust convergence to a goal state:
https://heinonline.org/HOL/Page?collection=journals&handle=hein.journals/josf29&id=48&men_tab=srchresults
> When an optimizing system deviates beyond its own rim, we say that it dies. An existential catastrophe is when the optimizing system of life on Earth moves beyond its own outer rim.
I appreciate the direct attention to this process as an important instance of optimization. The first talk I ever gave in the EECS department at UC Berkeley (to the full EECS faculty) included a diagram of Earth drifting out of the region of phase space where humans would exist. Needless to say, I’d like to see more explicit consideration of this type of scenario.