Here’s an interesting article that argues for using (GPL-protected) open source strategies to develop strong AI, and lays out reasons why AI design and opsec should be pursued at the modular implementation level (where mistakes can be corrected based on empirical feedback) rather than at the algorithmic level. I would be curious to see MIRI’s response.