Your two assumptions and intuitions are plausible, but they may not hold true in every case. It is important to consider the specific context and motivations of individual rulers when making predictions about their behavior.
Regarding your first intuition, it is possible that some rulers may support the development of powerful AGI if they see it as a means to achieve their goals more efficiently. However, they may also take precautions to ensure that the AGI is under their control and cannot threaten their power.
Regarding your second intuition, rulers may not necessarily get rid of humans as soon as AI can achieve goals more efficiently. Human labor can still be cheaper or more reliable in certain contexts, and rulers may also want to maintain the appearance of caring about human welfare for political reasons.
To make your intuitions more testable, you could formulate specific hypotheses and predictions that can be observed or measured. For example, you could predict that rulers of authoritarian regimes will be more likely to suppress the development of powerful AGI than rulers of democracies, or that companies with highly centralized decision-making structures will be more resistant to AI-driven automation than companies with more decentralized structures. You could then collect data to test these hypotheses, such as by analyzing government policies or conducting surveys of corporate leaders.
(I am a bit worried by this, given that China seems to restrict AI more than the US does...)
Me: Thanks, the idea of checking what authoritarian regimes are doing today was very helpful. Can you suggest more operationalisations of my theses?
GPT: Sure, here are some possible operationalizations of your intuitions:
Operationalizing your first intuition:
Hypothesis: Rulers will be more likely to suppress the development of powerful AGI if they perceive it as a threat to their power.
Prediction: Countries with more authoritarian governments will have fewer AI research projects and investments than countries with more democratic governments.
Operationalization: Use publicly available data on AI research funding and publications by country, as well as measures of political freedom and democracy, to test the relationship between government type and AI development.
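(If I actually wanted to run this, a minimal sketch might look like the following — assuming two hypothetical CSV files, one with AI publication counts per country and one with a democracy score; all file and column names are placeholders, not any real dataset:)

```python
# Minimal sketch: rank-correlate a democracy score with AI publication counts.
# Both CSV files and their column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

pubs = pd.read_csv("ai_publications_by_country.csv")  # columns: country, ai_publications
freedom = pd.read_csv("democracy_index.csv")          # columns: country, democracy_score

df = pubs.merge(freedom, on="country")

# Spearman (rank) correlation, since publication counts are heavily skewed across countries.
rho, p_value = spearmanr(df["democracy_score"], df["ai_publications"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```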
Operationalizing your second intuition:
Hypothesis: Rulers will be more likely to replace human workers with AI when it is cost-effective and does not threaten their power.
Prediction: Companies with highly centralized decision-making structures will be slower to adopt AI-driven automation than companies with more decentralized structures.
Operationalization: Survey companies to collect data on their decision-making structures and level of AI adoption, and test the relationship between these variables using statistical analysis.
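(Same idea for the survey version — a minimal sketch assuming a hypothetical table with one row per company, a centralization score, an AI-adoption measure, and firm size as a control; the names are illustrative only:)

```python
# Minimal sketch: regress AI adoption on decision-making centralization,
# controlling for firm size. The survey file and its columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("company_survey.csv")  # columns: ai_adoption, centralization, employees

# OLS with a size control, since larger firms may adopt AI regardless of structure.
model = smf.ols("ai_adoption ~ centralization + employees", data=survey).fit()
print(model.summary())
```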
Operationalizing your conclusion:
Hypothesis: Rulers will prioritize their own interests over the well-being of their constituents.
Prediction: Countries with high levels of corruption will be less likely to invest in social welfare programs and more likely to invest in projects that benefit the ruling elite.
Operationalization: Use measures of corruption and social welfare spending, as well as data on government contracts and investments, to test the relationship between corruption and government priorities.
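(And for the third one — a sketch assuming a hypothetical per-country table with a corruption index and social welfare spending as a share of GDP; again, placeholder names:)

```python
# Minimal sketch: split countries at the median corruption score and test whether
# the more corrupt half spends less on social welfare. The data file is hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("corruption_and_spending.csv")  # columns: country, corruption_index, welfare_pct_gdp

median_corruption = df["corruption_index"].median()
high = df.loc[df["corruption_index"] > median_corruption, "welfare_pct_gdp"]
low = df.loc[df["corruption_index"] <= median_corruption, "welfare_pct_gdp"]

# One-sided test: welfare spending in high-corruption countries is stochastically lower.
stat, p_value = mannwhitneyu(high, low, alternative="less")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```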
I like how ChatGPT can help in operationalizing fuzzy intuitions. But I feel an eerie risk that it makes me think even less, and less carefully, and defer to the AI's wisdom more and more… it's very tempting… as if I had found an adult I could cede control to.