What heuristic should someone building a new AI use to decide whether it’s essential to talk with MIRI about it?
Why would they talk to MIRI about it at all?
They’re the ones with the actual AI expertise, having built the damn thing in the first place, and they have the most to lose from any collaboration (the source code of a commercial- or military-grade AI is a very valuable secret). Furthermore, it’s far from clear that there is any consensus in the AI community about the likelihood of a technological singularity (especially the subset to which FOOMs belong) and its associated risks. From their perspective, there’s no reason to pay MIRI any attention at all, much less bring them in as consultants.
If you think that MIRI ought to be involved in those decisions, maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn’t already accept any of the site dogmas or hold EY in any particular regard.
If you think that MIRI ought to be involved in those decisions
As far as I understand it, that is MIRI’s position: they ought to be involved when dangerous things might happen.
maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn’t already accept any of the site dogmas or hold EY in any particular regard.
But what applies to someone who does accept the site dogmas in principle but still does some work in AI?
I’m sorry, I didn’t get much sleep last night, but I can’t parse this sentence at all. Could you rephrase it for me?