Human Intelligence Enhancement via Learning:
Intelligence enhancement could entail cognitive enhancements that increase the rate or throughput of cognition, expand memory, or use BCIs or AI harnesses to offload work and agency or to complement existing skills and awareness.
Among strategies that could eventually lead to ASI alignment by leveraging human enhancement, there is an alternative to biological or other direct enhancements that try to modify cognitive hardware: instead, externalize one's world model, along with some of the agency needed to improve it. This could look like interacting with a system designed to elicit that world model and formalize it as a Bayesian network or a hidden Markov model (HMM), with built-in operations for exploring it further, such as resolving inconsistencies and filling gaps, and for communicating relevant findings back to the user in a feedback loop (a minimal sketch of such a loop follows the list below).
This strategy has a number of potential benefits. For example, it could:
- mitigate risks associated with direct biological enhancement, such as instability following large leaps in capability, or health risks from changing the physical demands on the brain or otherwise moving away from a stable equilibrium
- reduce the gap in understanding AI systems that operate at a higher level of intelligence or use more complete world models
- sidestep some of the burden of having people with radically different degrees of agency and responsibility, which could result from more direct forms of enhancement
- be actionable in the near term using AI models similar to those available today
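To make the externalization loop described above more concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than an existing tool: beliefs are stored as nodes of a small Bayesian network, missing conditional probabilities are treated as gaps, malformed entries as inconsistencies, and one turn of the feedback loop surfaces both and elicits answers from the user.

```python
from dataclasses import dataclass, field
from itertools import product

# Illustrative sketch only: the class names, CPT layout, and elicitation
# loop below are assumptions about what such a system might look like,
# not an existing tool or API.

@dataclass
class Node:
    """One belief in the externalized world model."""
    name: str
    parents: list[str] = field(default_factory=list)
    # CPT: maps a tuple of parent truth-values to P(node is True).
    # Missing entries are gaps the system should elicit from the user.
    cpt: dict[tuple[bool, ...], float] = field(default_factory=dict)


class WorldModel:
    def __init__(self) -> None:
        self.nodes: dict[str, Node] = {}

    def add_belief(self, name: str, parents=(), cpt=None) -> None:
        self.nodes[name] = Node(name, list(parents), dict(cpt or {}))

    def gaps(self) -> list[str]:
        """Names of nodes with unelicited parent combinations."""
        return [n.name for n in self.nodes.values()
                if len(n.cpt) < 2 ** len(n.parents)]

    def inconsistencies(self) -> list[str]:
        """Structural problems: unknown parents, probabilities outside [0, 1]."""
        issues = []
        for node in self.nodes.values():
            for parent in node.parents:
                if parent not in self.nodes:
                    issues.append(f"{node.name}: unknown parent '{parent}'")
            for combo, p in node.cpt.items():
                if not 0.0 <= p <= 1.0:
                    issues.append(f"{node.name}{combo}: invalid probability {p}")
        return issues


def refine(model: WorldModel) -> None:
    """One turn of the feedback loop: surface issues, elicit missing numbers.
    A real system would have an AI model phrase these questions and propose
    structure; here we just prompt on stdin."""
    for issue in model.inconsistencies():
        print(f"Inconsistency to resolve: {issue}")
    for name in model.gaps():
        node = model.nodes[name]
        for combo in product([False, True], repeat=len(node.parents)):
            if combo not in node.cpt:
                given = dict(zip(node.parents, combo)) or "nothing"
                node.cpt[combo] = float(input(f"P({name}=True | {given})? "))


if __name__ == "__main__":
    m = WorldModel()
    m.add_belief("rain")
    # P(wet | rain) elicited already; P(wet | no rain) is a gap to fill.
    m.add_belief("wet_grass", parents=["rain"], cpt={(True,): 0.9})
    refine(m)
```

A real system would need far richer consistency checks (cycle detection, contradictory evidence, calibration against outcomes), but the shape of the loop is the point: elicit, formalize, check, and feed the results back to the user.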