I can see where you’re coming from with the dislike of social pressure. Life isn’t life without agency (stated as opinion).
I want to “save the world” to the extent that I can transform it into something that I like more than what currently exists. It’s like cleaning my house: I merely dislike the look of garbage.
My egoistic desires just happen to align somewhat with those of other people, since I’m a human as well. (Only somewhat. I’m not so naive as to think suffering is inherently bad.)
There are two things that I feel are important here, though:
A: Other people do what they think is right, but 99% of people are idiots, so they are actually making things worse.
B: If fighting against the natural flow of things is a waste of time, as the Daoists seem to say, then so be it. But I’m starting to see some serious disasters on the horizon, and I’m not even talking about AI risks or environmental change. I’m not sure if humanity deserves to survive, but I think it would be a shame to end things so early.
The context seems to be saving the world from runaway AI, which can’t be nontrivially described that way.
Correct, I should have had that in mind when I wrote my comment.
The alternative to runaway AI is an AI whose values are acceptable to us. I believe that this is a design problem. The perfect system should be something like what’s described in The Fun Theory Sequence. Posts like When “yang” goes wrong describe two extremes, anarchy and tyranny, and I think it’s fair to assume that the perfect world must be a balance between them.
But this topic is not trivial at all, I will give you that. I also don’t think we agree very much about what the ideal future or AI looks like. I think some naive “minimize suffering” optimization target is the most popular, but if you ask me, that’s merely due to an old misunderstanding of Buddhism.
You could also say that any AI will become “runaway,” and that we merely get to influence the direction. I’m pushing in my direction of choice, even if the force is tiny.
But I will stop here, as I have a lot of opinions on the subject, which probably don’t fit the consensus very well.