if I could give them back just ten minutes of their lives, most of them wouldn’t be here.
He’s wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.
The remainder of the post actually argues that persistent, stable “reflexes” are the cause of bad decisions and those certainly are not going to be fixed by a one-time gift of 10 minutes.
if I could give them back just ten minutes of their lives, most of them wouldn’t be here.
He’s wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.
I disagree. Let’s take drivers who got into a serious accident: if you “gave them back just ten minutes” so that they avoided that accident, most of them wouldn’t have had another accident later on. It’s not as if the world neatly divides into safe drivers, who never have accidents, and unsafe drivers, who have several.
Sure, the kids who got in trouble are more likely to have problematic personalities, habits, etc., which make them more likely to get in trouble again, but that doesn’t mean more likely than not. Most drivers don’t have (serious) accidents, most kids don’t get in (serious) trouble, and if you restrict yourself to the subset of those who already had it once, I agree a second problem is more likely, but not certain.
Yeah, but we are not talking about average kids. We’re talking about kids who found themselves in juvenile detention and that’s a huge selection bias right there. You can treat them as a sample (which got caught) from the larger underlying population which does the same things but didn’t get caught (yet). It’s not an entirely unbiased sample, but I think it’s good enough for our handwaving.
but not certain.
Well, of course. I don’t think anyone suggested any certainties here.
To use the paper’s results, it looks like they’re getting roughly 10 in 100 in the experimental condition and 18 in 100 in the control. Those kids were selected because they were considered high risk. If among the 82 of 100 kids who didn’t get arrested there are more than 18 who are just as likely to be arrested as the 18 who were, then emile’s conclusion is correct for the year: the majority won’t be arrested next year. Across an entire lifetime, however… They’d probably become more normal as time passed, but how quickly would this occur? I’d think Lumifer is right that they’d probably end up back in jail. I wouldn’t describe this as a very regular problem, though.
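As a toy check of the lifetime question, here is a minimal Python sketch assuming the control group’s 18-in-100 annual rearrest rate stays constant and years are independent (an assumption the comment above explicitly doubts, since the kids would likely become more normal over time):

```python
# Toy model: probability of at least one rearrest over n years,
# given a constant, independent annual rearrest rate.
# The 0.18 rate is the control-group figure discussed above;
# constancy and independence are simplifying assumptions.
def p_at_least_one_rearrest(annual_rate, years):
    # Complement of avoiding rearrest in every single year.
    return 1 - (1 - annual_rate) ** years

for years in (1, 5, 10):
    print(years, round(p_at_least_one_rearrest(0.18, years), 3))
```

Under these (unrealistically pessimistic) assumptions, the chance of at least one rearrest is about 0.63 after five years and 0.86 after ten, which fits “probably end up back in jail” across a lifetime while still leaving the majority un-arrested in any given year.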
Do you think that in the future, when such technologies have probably become widespread, driver training should include at least one grisly crash, simulated and shown in 3-D? Or at least a mild one?
The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence. Effectively avoiding all of them preemptively requires training the stable reflexes, but it could be that “editing out” only a few 10-minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively). So I think the “very regular basis” claim isn’t substantiated.
That said, we can’t actually edit retroactively anyway.
The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence.
I don’t think that’s the model (or if it is, I think it’s wrong). I see the model as persistent reflexes interacting with the environment and giving rise to common, repeatable, predictable events with serious legal consequences.
most of them wouldn’t have had another accident later on.
How do you know?