My views on the mistakes in the “mainstream” A(G)I safety mindset:
- We define non-aligned agents as those conflicting with our/human goals, yet we have ~none (only cravings and intuitive attractions). We should rather strive to conserve long-term positive/optimistic ideas and principles.
- Expecting human bodies to be a neat fit for space colonisation/inhabitation/transformation is mistaken: we have (indeed, are) a hammer, so we treat the vastly empty space as a nail.
- We struggle to imagine unbounded/maximized creativity; such agents could smoothly optimize the trade-off between experimentation and risk.
- There is no focus on risk-awareness in AIs, i.e. on diverting/bending ML development goals toward risk-aware/risk-centered applications.
+ There is no(?) good library catalog of existing models and their availability, including models still in development, with incentives for (anonymous) proofs of the latter; a rough sketch of what such a catalog entry might look like follows below.
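A minimal sketch, assuming nothing beyond the idea in the last point: a catalog record tracking a model's availability status plus optional (possibly anonymous) evidence that an in-development model exists. All names, fields, and values here are hypothetical illustrations, not an existing registry's schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Availability(Enum):
    """Release status of a model in the hypothetical catalog."""
    PUBLIC_WEIGHTS = "public_weights"    # weights downloadable
    API_ONLY = "api_only"                # accessible only via a hosted API
    IN_DEVELOPMENT = "in_development"    # announced or claimed, not yet released
    UNRELEASED = "unreleased"            # known to exist, not accessible


@dataclass
class CatalogEntry:
    """One record in a hypothetical catalog of existing and in-development models."""
    name: str
    developer: str
    availability: Availability
    parameters_estimate: Optional[int] = None   # rough scale, if publicly known
    # For in-development models: references to (possibly anonymous) proofs that
    # the model exists, e.g. a signed commitment or an attestation hash.
    development_proofs: list[str] = field(default_factory=list)


# Example usage (all values made up for illustration):
entry = CatalogEntry(
    name="example-model-7b",
    developer="ExampleLab",
    availability=Availability.IN_DEVELOPMENT,
    development_proofs=["hash-of-anonymous-attestation-123"],
)
print(entry)
```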