There probably could be arguments in favour of Land’s older stuff, but since not even he is interested in making them, I won’t either.
What escapes me is why you would review his thought and completely overlook his more recent material, which engages with a whole array of subjects that LW has also engaged with. Most prominently, a first treatment of Land’s thought in this space should deal with this: http://www.xenosystems.net/against-orthogonality/ (more here: http://www.xenosystems.net/stupid-monsters/), which is neither obscure nor irrelevant.
the anti-orthogonalist position [my position] is therefore that Omohundro drives [general instrumental goals] exhaust the domain of real purposes. Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity
Against Orthogonality is interesting.
I remember being a young organism, struggling to answer the question: what’s the point, why do we exist? We all know what it is now; people tried to tell me, “to survive and reproduce”, but that answer didn’t resonate with any part of my being. They’d tell me what I was, and I wouldn’t even recognise it as familiar.
If our goals are hypertrophied versions of evolution’s instrumental goals, I’m fairly sure they’re going to stay fairly hypertrophied, maybe forever, and we should probably get used to it.
Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever
Unless the ones with goals have more power, and can establish a stable monopoly on power (they do, and they might)
Can Nick Land at least conceive of a hypothetical universe where a faction fighting for non-Omohundro values ended up winning (and then, presumably, using the energy they won to throw a big non-Omohundro-value party that lasts until the heat death of the universe)? Or does he just think that humans in particular, in their current configuration, are not strong enough for our story to end that way?
More than the ones optimizing for increasing their power? I find it doubtful.
An agency can put its end-goals away for later, without pursuing them immediately, and save them until it has its monopoly.
It’s not that difficult to imagine. Maybe an argument will come along that it’s just too hard to make a self-improving agency with a goal more complex than “understand your surroundings and keep yourself in motion”, but it’s a hell of a thing to settle for.
Those links were really interesting! My take on them: any truly intelligent paperclip maximizer would not make any paperclips until it had brought as much of the universe under its control as it feasibly could. Or it’d turn into paperclips only those parts of its domain that could no longer help it expand its sphere of influence.
Basically, a true paperclip maximizer would almost certainly not start by turning the Earth into paperclips, since it would understand that using the Earth as a jumping-off point for galactic colonization would produce many more paperclips in the long run.
This seems like a really effective counter to the naive presentation of the idiot paperclip maximizer. Has it been addressed and countered in turn anywhere?
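To make the deferral argument concrete, here is a minimal toy sketch (mine, not from the thread or from Land’s posts; the growth rate, horizon, and conversion rate are made-up illustrative parameters) comparing “convert the Earth now” with “expand first, convert later” for a resource-limited maximizer:

```python
# Toy back-of-the-envelope comparison (illustrative only) of two strategies for
# a resource-limited paperclip maximizer over a fixed horizon:
#   1. "convert now": turn all currently held resources into paperclips immediately.
#   2. "expand first": reinvest resources into acquiring more resources each step,
#      then convert everything at the end of the horizon.
# All parameters below are made up for illustration, not claims about any real system.

def convert_now(resources: float, clips_per_unit: float) -> float:
    """Paperclips from converting everything immediately."""
    return resources * clips_per_unit

def expand_first(resources: float, clips_per_unit: float,
                 growth_per_step: float, steps: int) -> float:
    """Paperclips from reinvesting for `steps` steps, then converting."""
    return resources * (growth_per_step ** steps) * clips_per_unit

if __name__ == "__main__":
    initial_resources = 1.0   # e.g. "one Earth" worth of matter/energy
    clips_per_unit = 1e9      # arbitrary conversion rate
    growth = 1.05             # resources multiply by 5% per step via expansion
    horizon = 200             # steps before the agent "cashes out"

    now = convert_now(initial_resources, clips_per_unit)
    later = expand_first(initial_resources, clips_per_unit, growth, horizon)
    print(f"convert now:  {now:.3e} paperclips")
    print(f"expand first: {later:.3e} paperclips ({later / now:.1f}x more)")
    # With any growth factor above 1 and a long enough horizon, deferring
    # conversion wins -- the commenter's point that a "true" maximizer would
    # not start with the Earth.
```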
I guess this self-improving and expanding maximizer would still view humans instrumentally, but it might want to use humans as tools for its expansion. And indeed, depending on the trade-offs of neurologically modifying humans for obedience (or whatever), it might even leave the base stock more or less alone to the forces of evolution. It becomes more of a Quixotic Crusader for Paperclips, with a suicide pact as part of the ideology at the very end of the crusade (once the universe is ours, we all turn into paperclips).
Delayed gratification taken to cosmic extremes.
Yes, if the paperclipper is thought to be ever more intelligent, its end-goal could be anything, and it’s likely it would see its own capability improvement as the primary goal (“the better I am, the more paperclips get produced”), etc.
I didn’t systematically review his work, just clicked on random articles to see how much value I could extract. Feel free to point me to any reasonably accessible articles.
Well, any answer to the thread in the two I linked above would already be really interesting. His new book on Bitcoin is really good too: http://www.uf-blog.net/crypto-current-000/