Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? It doesn’t sound like it, so I’d say it’s not initiative. Initiative is doing something that’s not required, but something you feel needs to be done or something you want to do.
Ok. Now, if said grad student did come to the thesis advisor, but their motivation was that they’ve been taught from a very young age that they should do math, is there initiative?
Ok, when you build a car but the car doesn’t start, I don’t think you’re going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bend to our whims. You’re probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn’t seem to be able to do so, there’s a bug in the system.
It seems that a large part of the disagreement here comes from implicit premises. You seem to be focused on very narrow AI, when the entire issue is what happens when one doesn’t have narrow AI but instead has AI with most of the capabilities that humans have. Let’s set aside whether or not we should build such AIs and whether or not they are possible. Assuming that such entities are possible, do you or do you not think there’s a risk of the AI getting out of control?
So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don’t want to ever have kids, is that a bug in your view?
That’s answered in the second sentence of the quote you chose...
Either there’s a miscommunication here or there’s a misunderstanding about how evolution works. An organism that puts its own survival over reproducing is an evolutionary dead end. Historically, lots of humans didn’t want any children, but they didn’t have effective birth control methods, so in the ancestral environment there was minimal evolutionary incentive to remove that preference. Widespread and effective birth control is only a recent development. So what you’ve described is one evolved desire overriding another, which would still seem to be a bug.
Now, if said grad student did come to the thesis advisor, but their motivation was that they’ve been taught from a very young age that they should do math, is there initiative?
Not sure. You could argue it either way in this situation.
Assuming that such entities are possible, do you or do you not think there’s a risk of the AI getting out of control?
Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.
So what you’ve described is one evolved desire overriding another, which would still seem to be a bug.
I suppose it would.
Ah. In that case, there’s actually very minimal disagreement.