Lifelong recursive self-improver, on his way to exploding really intelligently :D
More seriously: my posts are mostly about AI alignment, with an eye towards moral progress and creating a better future. If there were a public machine ethics forum, I would write there as well.
An idea:
- We have a notion of what good is and how to do good.
- We could be wrong about it.
- It would be nice if we could use technology not only to do good, but also to improve our understanding of what good is.
This idea, together with my wish to avoid producing technology that can be used for bad purposes, motivates my research. Feel free to reach out if you relate!
At the moment I am doing research at CEEALAR on agents whose behaviour is driven by a reflective process analogous to human moral reasoning, rather than by a metric specified by the designer. See Free agents.
Here are other suggested readings from what I’ve written so far:
- Naturalism and AI alignment
- From language to ethics by automated reasoning
- Criticism of the main framework in AI alignment
It doesn’t seem impossible to create a mental map just from language: in this case, language itself would play the role of the external environment. But overall I agree with you that it’s uncertain whether we can reach a good level of world understanding from natural language inputs alone.
Regarding your second paragraph:
I’ll quote the last paragraph under the heading “Error”: