I learned of ‘cloud chemistry labs’ while watching this video:
Wolfram is trying to devise a ‘molecular computing’ experiment to generate prime numbers using the ‘dynamics’ of a ‘molecular system’ (i.e. via something like the ‘set of all possible chemical reaction pathways’). (That’s pretty interesting on its own!)
But, apparently, he’s using (or planning on using) a ‘cloud chemistry lab’ to perform the actual chemistry experiments. He mentioned this particular ‘cloud lab’:
That is a really cool idea – but also seems, to me anyways, to be a (possibly) really dangerous ‘capability’ to make available in this way!
What are your thoughts on the security of either this particular lab or these kinds of labs in general?
Added 2022-6-18 19:07 UTC:
[Copied from this comment by me:]
I was hoping more for ‘brainstorm’ type responses I guess. (A somewhat comprehensive answer would be wonderful of course.)
I’m not actually worried about this. I don’t expect any particular ‘immediate’ tragedy to result. I’m thinking of this as more of ‘security puzzle’ than anything else, e.g. a ‘call to action’.
This isn’t a political campaign, or activism, or me attempting to pull any kind of ‘alarm’ about this being an impending catastrophe!
I’m really just curious.
What are the ways this could fail/be-compromised? (Don’t share useable ‘exploits’ tho! Disclose any useable vulnerabilities responsibly!)
What are the ways we should expect this to fail/be-compromised?
What are the ways it could be defended? What are the practical and principled limits of the various defenses?
How should we expect it to be defended?
[But also – SHIT – is the expected value of these kinds of discussions even positive at all?]
First time poster, so forgive me if I don’t sound LessWrongy…
But I wonder, what exactly is worrying you about Cloud Chem Labs? Is it a security worry, that someone may be able to create a chemical that’s “not good”? In that case, just like normal Cloud Infrastructure, they probably have extensive logs for orders and also some mechanisms for detecting what compounds are being created and to prevent or block creation of bad stuff.
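To make that concrete, the crudest version of that screening I can imagine would be something like the sketch below – this is pure speculation on my part, and every name, identifier, and list entry in it is a made-up placeholder, not anything from a real lab's system:

```python
# Purely illustrative sketch of how a cloud lab *might* log and screen
# incoming synthesis orders. All identifiers and list entries are placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order-screening")

# Hypothetical denylist of flagged compound identifiers (placeholders, not real CAS numbers).
FLAGGED_COMPOUNDS = {"CAS-0000-00-0", "CAS-1111-11-1"}

def screen_order(customer_id: str, compound_ids: list[str]) -> bool:
    """Log the order, then reject it if any requested compound is flagged."""
    log.info("order from %s: %s", customer_id, compound_ids)
    flagged = [c for c in compound_ids if c in FLAGGED_COMPOUNDS]
    if flagged:
        log.warning("blocked order from %s, flagged: %s", customer_id, flagged)
        return False
    return True

if __name__ == "__main__":
    print(screen_order("customer-42", ["CAS-2222-22-2"]))  # True: nothing flagged
    print(screen_order("customer-42", ["CAS-0000-00-0"]))  # False: blocked
```

Obviously a real system would have to do much more than match identifiers against a list, but logging plus some form of automated check seems like the baseline you'd expect.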
Also, since it’s not a general-availability thing (just as OpenAI is not exactly GA), applications are probably vetted and usage monitored for misuse, since misuse could cause negative press for this new kind of system.
But in the hands of more callous builders, this sort of system could definitely be misused, just as the big clouds are frequently used for DDoS attacks and the like nowadays.
You “sound” perfectly LessWrongy to me! (We don’t have a ‘style guide’ or anything :))
I was hoping more for ‘brainstorm’ type responses I guess. (A somewhat comprehensive answer would be wonderful of course.)
I’m not actually worried about this. I don’t expect any particular ‘immediate’ tragedy to result. I’m thinking of this as more of ‘security puzzle’ than anything else, e.g. a ‘call to action’.
This isn’t a political campaign, or activism, or me attempting to pull any kind of ‘alarm’ about this being an impending catastrophe!
I’m really just curious.
Your details about likely defenses are interesting – pretty obvious, but good to point out anyways.
I was curious in particular about “detecting what compounds are being created and to prevent or block creation of bad stuff” – both the concrete, practical specifics of how this would or could work, and the hard limits, even in principle, on what could be done.
Maybe the lab will only perform ‘verified’ reactions? Or maybe they have some protocol for handling ‘novel’ reactions too? I would expect the latter – or rather, I’d hope they have a (good) protocol for it – because part of the point of these labs seems to be enabling chemistry experiments, i.e. potentially novel chemical reactions.
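Just to make my own question concrete, here's the kind of routing I'm imagining – an allowlist of vetted reactions plus a human-review queue for anything novel. This is entirely hypothetical; all the names here are made up:

```python
# Speculative sketch: gate already-'verified' reactions automatically,
# route anything novel to human review. All names are invented for illustration.
from dataclasses import dataclass, field

# Hypothetical registry of reactions the lab has already vetted and automated.
VERIFIED_REACTIONS = {"esterification-v1", "suzuki-coupling-v3"}

@dataclass
class ReviewQueue:
    """Stand-in for a human-review workflow for novel reaction requests."""
    pending: list[str] = field(default_factory=list)

    def submit(self, reaction_id: str) -> str:
        self.pending.append(reaction_id)
        return "queued-for-human-review"

def route_request(reaction_id: str, queue: ReviewQueue) -> str:
    """Run already-verified reactions automatically; send anything novel to review."""
    if reaction_id in VERIFIED_REACTIONS:
        return "run-automatically"
    return queue.submit(reaction_id)

if __name__ == "__main__":
    q = ReviewQueue()
    print(route_request("suzuki-coupling-v3", q))  # run-automatically
    print(route_request("novel-reaction-xyz", q))  # queued-for-human-review
    print(q.pending)                               # ['novel-reaction-xyz']
```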
How well do those kinds of protections work generally? (I don’t know myself.)
I was thinking that the more likely vulnerability is a ‘vetted’ customer’s account being hacked. I’d expect that to be, sadly, ridiculously easy (for a determined and resourceful attacker).
The site/backend-system itself being hacked also seems like an inevitability. What are the security risks when that happens?
What’s the risk analysis for labs like this in the case of an attack ‘on the order of’ something like Stuxnet?