Sure, I mean, logistic regression has had economic value and it doesn’t seem meaningful to me to say whether it is “aligned” or “inner aligned”. I’m talking about transformative AI systems, where downside risk is almost certainly not limited.
We might get TAI due to efforts by, say, an algo-trading company that develops trading AI systems. The company can limit the mundane downside risks that it faces from non-robust behaviors of its AI systems (e.g. by limiting the fraction of its fund that the AI systems control). Of course, the actual downside risk to the company includes outcomes like existential catastrophes, but it’s not clear to me why we should expect that prior to such extreme outcomes their AI systems would behave in ways that are detrimental to economic value.
I predict that this will not lead to transformative AI; I don’t see how an algorithmic trading system leads to an impact on the world comparable to the industrial revolution.
You can tell a story in which you get an Eliezer-style near-omniscient superintelligent algorithmic trading system that then reshapes the world because it is a superintelligence, while the researchers thought it was not a superintelligence and so assumed the downside risk was bounded; but both clauses (Eliezer-style superintelligence and researchers being horribly miscalibrated) seem unlikely to me.
My point here is that in a world where an algo-trading company has the lead in AI capabilities, there need not be a point in time (prior to an existential catastrophe or existential security) at which investing more resources into the company’s safety-indifferent AI R&D stops seeming profitable in expectation. This claim can be true regardless of researchers’ observations, beliefs, and actions in given situations.