You could construct an argument that you need to explicitly reinforce system-2 ethics in common situations, so that you come to associate those ethics implicitly with normal situations and not just with contrived edge cases. But even that seems a bit too charitable. And it would be easily fixed if so.
taygetea
From my experience trying similar things over IRC, I have found that the lack of anything holding you to your promises is definitely a detriment for most people. I have found a few for whom that's not the case, but they are very much the exception. That's definitely a failure mode to look out for; doing this online (especially in text) won't work for many people. In addition, this discrepancy can create friction between people.
The general structure of the failure tends to be one person feeling vaguely bad about not talking as much, or about missing a session. When they don't have many channels through which to viscerally receive signals of disapproval, the kind that would make them uncomfortable enough to follow through even when they don't want to, it becomes easier to skip the next time as well. Schelling fences are easier to break without face-to-face interaction.
There should be ways to bypass that problem. One of the memes around LW is actively reinforcing positive behavior instead of relying on implied approval. If you can create a culture that actively rewards success and treats apathy as something to be stamped out at every point, then it can work. You can also establish norms where people go out of their way to help someone who falls behind figure out what the real problem is. If you can manage that, instead of silence or simple berating, you can make it work. Ideas around Tell Culture can help here. Unfortunately, this also requires diverting a lot of focus into preserving those conditions. Creating community norms is hard, but that seems like the way to avoid the problem.
I don't mean to imply that you want to start a community around this along the lines of the LW study hall; this is just what I have found from my own attempts. Maybe someone will find it helpful.
Relating to your first point, I've read several stories that treat that scenario in reverse: AIs (whether they count as Friendly or Unfriendly is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem solvable with a wider definition of the sort of beings the AI is supposed to be Friendly to, and I'd hope aliens would think of that, but it's certainly possible.
According to Quirrell, yes, they are: "Anything with a brain." And I notice that you've only looked at what we've directly seen. The presence of spells like all the ones you mentioned leads me to think that you can do more directed things with spells Harry hasn't come across yet.
the second, cybernetic, industrial revolution "is [bound] to devalue the human brain, at least in its simpler and more routine decisions"
It certainly seems like he considered it, at least on a basic level, enough to be extrapolated.
Well, I did say it far outweighed it. Even that’s less of an inconvenience in my mind, but that’s getting to be very much a personal preference thing.
Creating arbitrary animals that are barely alive, that don't need food, water, air, or movement, and that are made of easily workable material which also serves well as armor seems like a good place to start, and it's within the bounds of magic. This isn't as absurd as it sounds: essentially, living armor plates. You'd want them to be thin so you could wear multiple layers, to fall off when they die, and various similar things. Or maybe on a different scale, like scale or lamellar armor.
The messiness and the potential for really unpleasant sounds, in my mind, far outweigh the need for a specific type of dry-erase marker. Though that might be related to how easily sounds become unpleasant to me in particular.
This would rely on a large fraction of pageviews coming from Wikipedia editors, which seems unlikely. Do you have any data on that?