Ah right, good point, I forgot about cherry-picking. I guess we could make it be something like “And the blog post wasn’t cherry-picked; the same system could be asked to make 2 additional posts on rationality and you’d like both of them also.” I’m not sure what credence I’d give to this but it would probably be a lot higher than 10%.
Website prediction: Nice, I think that’s like 50% likely by 2030.
Major research area: What counts as a major research area? Suppose I go calculate that AlphaFold 2 has already sped up the field of protein structure prediction by 100x (no need to do actual experiments anymore!), would that count? If you hadn’t heard of AlphaFold yet, would you say it counted? Perhaps you could give examples of the smallest and easiest-to-automate research areas that you think have only a 10% chance of being automated by 2030.
20,000 LW karma: Holy shit that’s a lot of karma for one year. I feel like it’s possible that would happen before it’s too late (narrow AI good at writing but not good at talking to people and/or not agenty), but unlikely. Insofar as I think it’ll happen before 2030, it doesn’t serve as a good forecast because it’ll be too late by that point IMO.
Productivity tool UIs obsolete thanks to assistants: This is a good one too. I think that’s 50% likely by 2030.
I’m not super certain about any of these things, of course; these are just my wild guesses for now.
20,000 LW karma: Holy shit that’s a lot of karma for one year.
I was thinking 365 posts * ~50 karma per post gets you most of the way there (18,250 karma), and you pick up some additional karma from comments along the way. 50-karma posts are good but don’t have to be hugely insightful; you can also get a lot of juice by playing to the topics that tend to get lots of upvotes. Unlike humans, the bot wouldn’t be limited by writing speed (hence my restriction of one post per day). AI systems should be really, really good at writing, given how easy it is to train on text. And a post is a small, self-contained thing that doesn’t take very long to create (i.e. it has short horizons), and there are lots of examples to learn from. So overall this seems like a thing that should happen well before TAI / AGI.
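For what it’s worth, the back-of-envelope arithmetic above can be checked in a couple of lines (the 50-karma-per-post and comment-karma figures are just the assumed averages from the estimate, not data):

```python
# Back-of-envelope check of the 20,000-karma-in-a-year estimate.
posts_per_year = 365        # restriction: one post per day
karma_per_post = 50         # assumed average karma per post
target = 20_000

post_karma = posts_per_year * karma_per_post
print(post_karma)                    # 18250 -- "most of the way there"
print(target - post_karma)           # 1750 left to pick up from comments
```

So the bot only needs roughly 1,750 karma from comments over the year (about 5/day) on top of the posts, which is the sense in which posts alone get "most of the way there."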
I think I want to give up on the research area example; it seems pretty hard to operationalize. (But fwiw, according to the picture in my head, I don’t think I’d count AlphaFold.)
OK, fair enough. But what if it writes, like, 20 posts in the first 20 days which are that good, but then afterwards it hits diminishing returns because the rationality-related points it makes are no longer particularly novel and exciting? I think this would happen to many humans if they could work at super-speed.
That said, I don’t think this is that likely, I guess… probably AI will either be unable to do even three such posts, or it’ll be able to generate arbitrary numbers of them. The human range is small. Maybe. Idk.
But what if it writes, like, 20 posts in the first 20 days which are that good, but then afterwards it hits diminishing returns because the rationality-related points it makes are no longer particularly novel and exciting?
I’d be pretty surprised if that happened. GPT-3 already knows way more facts than I do, and can mimic far more writing styles than I can. It seems like by the time it can write any good posts (without cherrypicking), it should quickly be able to write good posts on a variety of topics in a variety of different styles, which should let it scale well past 20 posts.
(In contrast, a specific person tends to write on 1-2 topics, in a single style, and not optimizing that hard for karma, and many still write tens of high-scoring posts.)