Is this a future AI catastrophe? Or is it a description of current events: a general gradual collapse already underway?
This seems like what is happening now, and has been for a while. Existing ML systems are clearly making Type-I problems, which were already quite bad before ML was a thing at all, much worse, to the extent that I don’t see much remaining ability in our civilization to pursue anything that can’t be measured in a short-term feedback loop. Even in spaces like this, appeals to non-measurable or non-explicit concerns are a near-impossible sell.
Part II problems are not yet coming from ML systems, exactly, but we certainly have algorithms that are effectively optimized and selected for the ability to gain influence: the algorithm gains influence, which causes people to care about it and feed into it, which causes it to gain still more. If we make the metaphor less direct, we get the same thing with memetics, culture, life strategies, corporations, media properties and so on: the emphasis on choosing winners, being ‘on the right side of history’, supporting those who are good at getting support. OP notes explicitly that this happens in non-ML situations, and there’s no clear dividing line in any case.
So if there is a competing theory that says this has already happened, what would one do next?
You could always get a job at a company which controls an important algorithm.