> If you think that MIRI ought to be involved in those decisions
As far as I understand, it's MIRI's position that they ought to be involved when dangerous things might happen.
> maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.
But what goes for someone who does accept the site dogmas in principle but still does some work in AI?
I’m sorry, I didn’t get much sleep last night, but I can’t parse this sentence at all. Could you rephrase it for me?