I’m sure the engineers knew exactly what would happen. It doesn’t tell us much about the control problem that we didn’t already know.
OTOH, if this wasn’t an intentional PR stunt, that means management didn’t think this would happen even though the engineers presumably knew. That definitely has unsettling implications.
All publicity is good… even a Nazi AI? I mean, its obvious that they didn’t intentionally make it a Nazi. Maybe one of the engineers wanted to draw attention to AI risk?
It seems like something that could be easily anticipated, and even tested for.
Yet a lot of people just don’t take a game theoretic look at problems, and have a hard time conceiving of people with different motivations than they have.
It seems like something that could be easily anticipated, and even tested for.
Do anticipate what happened to the bot it would be necessary to predict how people interact with him. How the 4chan crowd interacted with it. That seems hard to test beforehand.
They could have done an internal beta and said “fuck with us”. They could have allocated time to a dedicated internal team to do so. Don’t they have internal hacking teams to similarly test their security?
A Youtube guy, Sargon of Akkad, had an analysis of previous interactive internet promo screwups. A long list. I hadn’t heard of them. Microsoft should be in the business of knowing such things.
History should have been enough of an indicator if they couldn’t be bothered to do any actual Enemy Team modeling on different populations on the internet that might like to fuck with them.
They knew something like this would happen. Their attempts to stop it failed and they heavily underestimated the creativity of the people they were up against.
I’m sure the engineers knew exactly what would happen. It doesn’t tell us much about the control problem that we didn’t already know.
OTOH, if this wasn’t an intentional PR stunt, that means management didn’t think this would happen even though the engineers presumably knew. That definitely has unsettling implications.
I assign very low probability to MSoft wanting a to release a Nazi AI as a PR stunt, or for any other purpose.
All publicity is good… even a Nazi AI? I mean, its obvious that they didn’t intentionally make it a Nazi. Maybe one of the engineers wanted to draw attention to AI risk?
Why?
I’m pretty sure they didn’t anticipate this happening. Someone at Microsoft Research is getting chewed over for this.
I wonder.
It seems like something that could be easily anticipated, and even tested for.
Yet a lot of people just don’t take a game theoretic look at problems, and have a hard time conceiving of people with different motivations than they have.
Do anticipate what happened to the bot it would be necessary to predict how people interact with him. How the 4chan crowd interacted with it. That seems hard to test beforehand.
They could have done an internal beta and said “fuck with us”. They could have allocated time to a dedicated internal team to do so. Don’t they have internal hacking teams to similarly test their security?
First, no, not hard to test. Second, the 4chan response is entirely predictable.
A Youtube guy, Sargon of Akkad, had an analysis of previous interactive internet promo screwups. A long list. I hadn’t heard of them. Microsoft should be in the business of knowing such things.
https://youtu.be/Tv74KIs8I7A?t=14m24s
History should have been enough of an indicator if they couldn’t be bothered to do any actual Enemy Team modeling on different populations on the internet that might like to fuck with them.
They knew something like this would happen. Their attempts to stop it failed and they heavily underestimated the creativity of the people they were up against.