Could you elaborate a bit more about the strategic assumptions of the agenda? For example,
1. Do you think your system is competitive with end-to-end Deep Learning approaches?
1.1. Assuming the answer is yes, do you expect users to prefer CoEm?
1.2. Assuming the answer is no, how do you expect it to gain traction? Is the path through lawmakers understanding the alignment problem and banning everything that is end-to-end and lacks the benefits of CoEm?
2. Do you think this is clearly the best possible path for everyone to take right now or more like “someone should do this, we are the best-placed organization to do this”?
PS: Kudos for publishing the agenda and opening yourself up to external feedback.