Hi, I’m the author of Nintil.com. As of today I think the endorsement I gave to Yarvin’s argument was too strong, and I have just amended the post to make that clear. I added the following:
[Edit 2022-06-14]: I think some overall points in Yarvin’s essay are valid (the world is indeed uncertain, and there are diminishing returns to intelligence), but AGIs would still have the advantage of speed and parallelism (imagine the entirety of Google, but with no need for meetings, and with workweeks run at 100x speed). Even in the absence of superior intelligence, that alone leads to capacities beyond what any human, or group thereof, can accomplish. I don’t know exactly what I was endorsing, but definitely as of today _I do not think Curtis Yarvin’s post shows there is no reason to worry about AI risk_. I might write about AI risk at some point. After all, I recently compiled [a reading list](https://nintil.com/links-57) on the topic!
And to answer the question of why I haven’t written about it: other topics come to mind where I have something I think is worth saying, and I think AGI is still somewhat in the future. I am somewhat specialized these days. Usually when I write, I like to read everything that has been said about a topic, or at least enough to see whether something new deserves to be said, and then I say it. I don’t like being repetitive. I like writing summaries and critical summaries, but even for that there seem to be decent sources on the internet already. If I spent more time reading about it, I still think I could write the best primer on the subject :-). There’s still an argument for why someone like me should write one post on this, which is to add my endorsement to the “this is a serious problem” view, which could marginally increase the odds of someone doing something about it.
Perhaps an additional reason why we don’t see more: less caring? Take something like an asteroid hitting the Earth in a year, with 80% probability. How bad would I feel, or how much would I do to prevent it? Not much. Of course, if success relied solely on me, then I would do a lot :). You can observe something similar with Covid: there is no covidposting at Nintil, while there was lots of it in canonically rationalist spheres.
Many thanks for the update… and if it’s true that you could write the very best primer, that sounds like a high-value activity.
I don’t understand the asteroid analogy, though. Does this assume the impact is inevitable? If so, I agree with taking no action. But in any other case, doing everything you can to prevent it seems like the single most important way to spend your days.
The asteroid case: it wouldn’t be inevitable; it’s just the knowledge that there are people out there substantially more motivated than me (and better positioned) to deal with it. For activities where I’m really good (like… writing blogposts) and where I expect my actions to have more of an impact relative to what others would be doing, I could end up writing a blogpost about “what you guys should do” and emailing it to the relevant people.
Also, you can edit your post accordingly to reflect my update!
Updated! Excuse the delay