In my opinion it doesn't matter to the average person. To them, anything to do with a PC is already a black box. So now you tell them that AI is… an even blacker box? They won't really get the implications.
It's the wrong thing to focus on, in my opinion. I think the notion that we're creating digital brains is more useful long term. We can tell people, “Well, doesn't XY happen to you as well sometimes? The same happens to the AI.” Take hallucination: “Don't you also sometimes remember things wrongly? Don't you also sometimes strongly believe something to be true, only to be proven wrong?” is a far better explanation for the average person than “it's a black box, we don't know.” They'll grasp the potential, they'll grasp that we're creating something novel, something that can grow smarter. We're modeling the brain. The models just lack some parts, like the parts that move things, or the parts that plan well.
Yes, this may make people think models are more conscious or more “alive” than they really are. But that debate is coming anyway. Since people tend to stick with whatever framing they heard first, let's instill that one now, even if we're still some distance from it.
I think we can assume AGI is coming, or ASI, whatever you want to call it. And personally, I doubt we'll be able to create it without at least a shadow of a doubt in the AI community about whether it has feelings or some sort of emotions (or even consciousness, but that's another rabbit hole). Just as there are e/acc and e/alt movements (sorry, I don't like EA as a term), there will be “robot rights” and “robots are just tools” movements. I know which side I'll probably stand on, but that's not this debate; I just want to argue that this debate will happen, and my framing prepares people for it. “Black box” does not. Once that debate arrives, “black box” basically amounts to saying “well, I don't know.” “Digital brain” at least frames the real question: are emotions part of our wild mix of brain parts or not?