Mauhn Releases AI Safety Documentation
Mauhn is a company doing research in AGI with a capped-profit structure and an ethics board (composed of people from outside the company). While there is a significant amount of AI/AGI safety research, there is still a gap in how to put it into practice at organizations doing AGI research. We want to help close this gap with the following blog post (written for an audience not familiar with AI safety) and links to the relevant documents:
Mauhn AI Safety Vision
This summarizes the most important points Mauhn commits to in building safe (proto-)AGI systems.
Ethics section of Mauhn’s statutes
The statutes of Mauhn define the legal structure of the ethics board.
Declaration of Ethical Commitment
Every founder, investor, and employee signs the declaration of ethical commitment before starting a collaboration with Mauhn.
We hope that other organizations will adopt similar principles or derivatives thereof. We had limited bandwidth for this first version, but we want to incorporate more feedback from the AI safety community into future versions of these documents. Please drop me an e-mail (berg@mauhn.com) if you'd like to contribute to the next version of this work. We will probably update the documentation once per year.