LW and related blogs are basically spoiling fantasy fiction for me. DAE have an experience like this? How do you overcome it?
That which can be destroyed by the truth...
I'm not the first one to notice that the all-improving Philosopher's Stone could not exist in principle, because improvement is a mental category rather than something real, right?
To some extent the “value aligned agents” problem, formerly known as “friendly AI,” boils down to “how would we actually check our ‘improvement-map’ for validity and create agents that will actually enforce that improvement-map on reality, rather than something else?”