Note that a lot of people are responding to a nontrivial enhancement of LLMs that they can see over the horizon, but won't talk about publicly for obvious reasons, so it won't be clear what they're reacting to, and they also might not say when you ask.
Personally, although my timelines have shortened, my P(Doom) has decreased in response to LLMs: it now seems more likely that we'll be able to get machines to develop an ontology and figure out what we mean by "good" before they develop enough general agency to seriously deceive us or escape the lab. Still, shortening timelines have given me an intensified sense of focus and urgency. Many of the things I used to be interested in doing no longer make sense. I'm considering retraining.
Hey Mako, I haven’t been able to identify anyone who seems to be referring to an enhancement in LLMs that might be coming soon.
Do you have evidence that this is something people are implicitly referring to? Do you personally know someone who has told you about this possible development, or do you work at a company where it would be reasonable for you to know this information?
If you have arrived at this information through a unique method, I would be very open to hearing that.
Basically everyone working on AGI professionally sees potential enhancements on prior work that they're not talking about. The big three have NDAs even just for interviews, and if you look closely at what they're hiring for, it's pretty obvious they're trying a lot of things they're not talking about.
It seems like you're touching on a bigger question: do the engines of invention see where they're going before they arrive? Personally, I think they do, but it's not a very legible skill, so people underestimate it or half-ass it.