There’s an entire class of problems within ML that I’d describe as framing problems, and framing is the one thing I think LLMs don’t help much with.
Could you say more about this? What do you mean by framing in this context?
There’s this quote I’ve been seeing from Situational Awareness that all you have to do is “believe in a straight line on a curve,” and when I hear that and see the general trend extrapolations, my spider senses start tingling.
Yeah, that’s not really compelling to me either. SitA didn’t move my timelines. Curious whether you’ve engaged with the benchmarks+gaps argument for AI R&D automation (the timelines forecast), and then the AI-driven algorithmic progress that would follow from it (the takeoff forecast). Those are the things that actually moved my view.
Thanks for the link, that’s compelling.