I realized that the singularity might happen a lot sooner than my System 1 previously thought, and so I’m changing the way I do things a bit. My System 2 knew, but I never really thought hard enough about it.
How?
I’m really trying to minimize the chance that I die in the next 40 years or so.
I have/had long-term entrepreneurial and philanthropic plans, but I’m realizing that they may take too long and that by the time I can start to implement things, the singularity may have occurred. So it may be a better idea for me to alter my plans toward contributing to safe AI in the short/intermediate term.
I knew that I would study math and AI one day, but now I’m planning on doing so sooner rather than later.
Out of curiosity, what changed your mind to make you think it would occur sooner?
The article :)
I’m having incredibly confused thoughts after reading that article you posted recently on the subject. A surprising number of experts seem to think that GAI is near… but there don’t seem to have been many advances toward actually creating a GAI.
I’m not the best person to address this, but there’s constant development in ANI, processing speed and memory, neuroscience, math, etc., and I think that all of this is implicitly progress toward AGI.
Why not try to exploit the singularity for fun and profit? It’s like you have an opportunity to buy Apple stock dirt cheap.
Investment: own data center stocks initially. I am not sure what you would transition to when the AI learns to make CPUs.
Regulatory: make the singularity pay you rent by being a gatekeeper. This will be a large industry worldwide. Probably the best bet.
At the very least you should be able to rule out bad investments (time or money): energy, land, and jobs that will be automated.
Hm. Well, if/once the singularity does happen, I would think it’d be beyond my ability to manipulate. But I think your points are valid for the time leading up to it.
Could you explain this a bit more? I don’t understand how anyone could be a gatekeeper.
I mean it in the non-flattering sense: rent-seeking.
I envision all sorts of arbitrary legal limits imposed on AIs. These limits will need people to dream them up, evangelize the need for even more limits, and enforce the limits (likely involving the creation of other ‘enforcer’ AIs). Some of the limits (early on) will be good ideas, but as time goes on they will become more arbitrary and exploitable. If you want examples, just think of what laws they will try in order to stop unfriendly AI and to stop individuals from using AI to do evil (say, with an advanced makerbot).
Once you have a role in the regulatory field, converting it to fun and profit is a straightforward exercise in politics. How many people are in this role is determined by how successful it is at limiting AIs.
Ah, OK. I was assuming that if a singularity occurred it’d be beyond our control, and that our fate would be determined by how the AI was originally programmed. But my reason for assuming this is based on very limited information, so I don’t really know. If it were the case that people with political power control AI, then I think you are very right.
But if you’re right and we live in a society where ASI-level power is controlled by people with political power… that really, really scares me. My intuition is that it’d be just a matter of time before someone screws up. I’m not sure what to think of this...