What is the mechanism, specifically, by which going slower will yield more “care”? What is the mechanism by which “care” will yield a better outcome? I see this model asserted pretty often, but no one ever spells out the details.
I’ve studied the history of technological development in some depth, and I haven’t seen anything to convince me that there’s a tradeoff between development speed on the one hand, and good outcomes on the other.
More information usually means better choices, and when has it ever been the case that the first design of something was also the best one? And wherever convention locked us onto a path determined by early constraints, suboptimal results abound (e.g. the QWERTY keyboard). The worry about AI is that it might run away from us so fast that it produces that sort of lock-in on steroids.
Disclaimer: I don’t necessarily support this view; I only thought about it for about five minutes, but it seemed to make sense.
If we were to handle this the same way as other cases of slowing technology down through regulation, then that might make sense, but I’m uncertain that you can take the outside view here.
Yes, if we do the same as for other technologies and leave it to standard government procedures to make legislation, then I might agree with you that slowing down won’t lead to better outcomes. Yet we don’t have to do this. We can use other processes that might lead to much better decisions. What about proper value-sampling techniques like digital liquid democracy? I think we can do a lot better than we have in the past by thinking about what mechanism we want to use.
Also, as a potential example: cloning technology (which I likewise only thought about in the last five minutes). If we had just gone full speed with that tech, things would probably have turned out badly.
If you go slower, you have more time to find desirable mechanisms. That’s pretty much it, I guess.