I didn’t mean that each one individually was very overconfident; rather, all the predictions are simply stated, with no “might” or “plausibly” or even “more likely than not” (which I would read as >50%). I would read a “will be able to X” as >90% confident, and there are many of these.
But your explanation that each statement should be read with the implicit qualification “assuming the contextual model as given” clarifies this. I’m not sure the majority will read it like that, though.
I think the most overconfident claim is this:
AGI will almost certainly require recursion
I’m no longer sure you mean the statements to apply within a ten-year horizon, but among them I think the one about the human judges is the furthest out, because it mostly depends on the others being achieved first (GPU clusters running agents, etc.).
Yeah, in retrospect I probably should reword that, as it may not convey my model very well. I am fairly confident that AGI will require something like recursion (recurrence, actually). More specifically, what it needs is information flow across time, over various timescales, and across the space of intermediate computations; but you can also get that from memory mechanisms.
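To make the distinction concrete, here is a minimal sketch (purely my own illustration, with made-up dimensions and weights, not anything from the post) of the two routes to cross-time information flow I have in mind: a recurrent hidden state carried forward step by step, versus an explicit memory of intermediate computations that later steps can read from.

```python
# Illustrative sketch only: two ways to get information flow across time.
import numpy as np

rng = np.random.default_rng(0)
d = 8                               # feature dimension (arbitrary for the sketch)
W_in = rng.normal(size=(d, d)) * 0.1
W_rec = rng.normal(size=(d, d)) * 0.1

def recurrent_pass(inputs):
    """Recurrence: each step depends on all earlier steps via the carried hidden state."""
    h = np.zeros(d)
    outputs = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)   # information flows forward through h
        outputs.append(h)
    return outputs

def memory_pass(inputs):
    """Memory mechanism: no carried state; each step reads from stored past computations."""
    memory = []                             # holds intermediate results from earlier steps
    outputs = []
    for x in inputs:
        z = np.tanh(W_in @ x)
        if memory:
            keys = np.stack(memory)         # (t, d) matrix of past computations
            weights = np.exp(keys @ z)      # unnormalized attention scores
            weights /= weights.sum()
            z = z + weights @ keys          # read from the past, at any timescale
        memory.append(z)
        outputs.append(z)
    return outputs

inputs = [rng.normal(size=d) for _ in range(5)]
print(recurrent_pass(inputs)[-1])
print(memory_pass(inputs)[-1])
```

Both toy passes let step t use information from steps before it; the first does it through recurrence, the second through a memory read, which is the sense in which I said “recursion (or recurrence)” isn’t the only way to get the property I care about.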
Just for the record: I am working on a brain-like AGI project, and I think approaches that simulate agents in a human-like environment are important and will plausibly give a lot of insights into value acquisition in AI and humans alike. I’m just less confident about many of your specific claims.
Thank you for the clarification.