> AI partners won’t be “who” yet. That’s a very important qualification.
I’d consider a law banning people from using search engines like Google, Bing, or Wolfram Alpha, or video games like GTA or The Sims, to be a very bad imposition on people’s basic freedoms. Maybe “free association” isn’t the right term, but there’s definitely an important right for which you’d be creating an exception. I’d also be curious: how do you plan to determine when an AI has reached the point where it counts as a person?
> But without that, in your passage, you can replace “AI partner” or “image” with “heroin” and nothing qualitatively changes.
I don’t subscribe to the idea that one can swap out arbitrary words in a sentence while leaving its truth-value unchanged. Heroin directly alters your neurochemistry. Pure information is not necessarily harmless, but it is something you can choose to ignore or disbelieve at any point; it essentially provides data, rather than directly hacking your motivations.
> How do you imagine I or any other lone concerned voice could muster sufficient resources to do these experiments?
How much do you expect these experiments would cost? $500,000? Let’s say $2 million, just to be safe. Presumably you’re going to try to convince the government to implement your proposed policy. If you happen to be wrong, implementing such a policy will do far more than $2 million in damage. If it’s worth putting fairly authoritarian restrictions on the actions of millions of people, it’s worth paying a sizable chunk of money to run the experiment first. You already have a list of asks in your policy recommendations section. Why not ask for experiment funding in the same list?
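The argument above is a value-of-information calculation, and it can be made explicit. A minimal sketch: the only number taken from the text is the $2 million experiment budget; the probability that the policy is a mistake and the damage a mistaken policy would cause are hypothetical placeholders, inserted purely to illustrate the comparison.

```python
# Back-of-envelope value-of-information check: is the experiment worth
# running before legislating? Only the $2M budget comes from the text;
# the other two numbers are hypothetical placeholders.

experiment_cost = 2_000_000       # upper-bound budget from the comment
p_policy_wrong = 0.3              # hypothetical: chance the ban is a mistake
damage_if_wrong = 100_000_000     # hypothetical: harm done by a mistaken ban

# Expected harm avoided if the experiment catches a mistaken policy
# before it is implemented.
expected_harm_avoided = p_policy_wrong * damage_if_wrong

print(f"Experiment cost:       ${experiment_cost:,}")
print(f"Expected harm avoided: ${expected_harm_avoided:,.0f}")
print("Run the experiment" if expected_harm_avoided > experiment_cost
      else "Skip the experiment")
```

Under these illustrative numbers the expected harm avoided dwarfs the experiment cost, which is the point of the paragraph: even a generous experiment budget is cheap relative to the downside of a mistaken policy.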
> Moreover, all AI partners (created by different startups) will be different, and may lead to different psychological effects.
One experimental group is banned from all AI partners; the other is free to use any of them. Generally, you want the groups in such experiments to correspond to the policy options you’re considering. (And you always want a control group, corresponding to “no change to existing policy”.)
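The design described above can be sketched concretely. A minimal example, where the arm names are illustrative (not from the text): one arm per policy option under consideration, plus a control arm for the status quo, with participants randomized evenly across arms.

```python
import random

# Each arm corresponds to a policy option under consideration, plus a
# control arm for "no change to existing policy". Arm names are
# illustrative placeholders, not from the text.
ARMS = [
    "control_no_policy_change",   # status quo, untouched
    "ban_all_ai_partners",        # the proposed restriction
    "free_access_any_partner",    # unrestricted use of any product
]

def assign_arms(participant_ids, seed=0):
    """Randomly assign participants to arms in equal proportions."""
    rng = random.Random(seed)     # fixed seed for a reproducible assignment
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Round-robin over the shuffled list gives balanced group sizes.
    return {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(ids)}

groups = assign_arms(range(300))
```

With 300 participants this yields exactly 100 per arm; the point is simply that the arms mirror the actual policy choices on the table, so the experiment’s result maps directly onto the decision.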