It sounds like you’re describing Moloch here. I agree entirely, but I’d go much further than you and claim “Humans aren’t aligned with each other or even themselves” (self-discipline is a kind of tool against internal misalignment, no?). I also think that basically all suffering and issues in the world can be said to stem from a lack of balance, which is simply optimization gone wrong (since said optimization is always for something insatiable, unlike hunger, where the desire goes away once the need is met).
Companies don’t optimize for providing value, but for their income. If they earn a trillion, they’ll just invest that trillion into their own growth so they can earn the next trillion. And all the optimal strategies exploit human weaknesses, clickbait being an easy example. In fact, it’s technology which has made this exploitation possible. So companies end up becoming tool-assisted cancers. But it’s not just companies that are the problem here; it’s everything that lives by Darwinian/memetic principles. The only exception is “humanity”, which is when optimality is exchanged for positive valence. This requires direct human involvement. Even an interface (online comments and such) is slightly dehumanized compared to direct communication. So any amount of indirectness will reduce this humanity.
Yeah. A way I like to put this is that we need to durably solve the inter-being alignment problem for the first time ever. There are flaky attempts at it around to learn from, but none of them are leak-proof, and we’re expecting to go to metaphorical sea (the abundance of opportunity for systems to exploit vulnerabilities in each other) in this metaphorical boat of a civilization, as opposed to previously just boating on lakes. Or something. But yeah, the core point I’m making is that the minimum bar for getting out of the AI mess requires a fundamental change in incentives.