If you simply drop the word "drugs" and instead ask which chemicals are dangerous to consume, the question might become even easier. Also worth bringing up: the human brain is typically near-optimal in most respects, and it is usually quite hard to identify specific interventions that reliably improve it. This is similar to taking a trained neural network and trying to change it substantially after training, through a sampling setting or some other inference-time modification: maybe you can re-architect it without any further training updates and still get something useful out of inference, but more likely the network breaks. And when you are the neural network in question and don't have the opportunity to roll back changes, experimentation is very risky. If your research is wrong about how quickly damage accumulates from a change, it can be very bad!
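To make the analogy concrete, here is a minimal sketch (my own illustration, not anything from the original comment) in which a toy logistic model stands in for "a trained neural network": once trained, random perturbations of its weights almost always hurt, and hurt more the larger they are.

```python
# Illustrative sketch: train a tiny model, then apply random perturbations of
# increasing size to its weights and watch accuracy fall. A system sitting near
# a local optimum rarely improves under blind modification.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class problem: points above/below a noisy line.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)

# Tiny logistic model trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def accuracy(w, b):
    return (((X @ w + b) > 0).astype(int) == y).mean()

print("trained accuracy:", accuracy(w, b))
for scale in (0.1, 0.5, 1.0, 2.0):
    # "Experiment" on the trained weights with no ability to roll back training.
    w_mod = w + rng.normal(scale=scale, size=w.shape)
    print(f"perturbation scale {scale}: accuracy {accuracy(w_mod, b):.3f}")
```

Small perturbations sometimes leave accuracy roughly intact, but large ones reliably degrade it, which is the sense in which blind self-modification without a rollback path is risky.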
Also, an important factor in learning is having a higher risk tolerance for some period of time, in order to gain the experience needed to calibrate your risk estimates. Without that you don't explore; you spend all your time using whatever few things you already know about! Humans of course have this built in; we wouldn't be able to grow up otherwise. Unfortunately it can have some pretty serious consequences if people don't realize their risk tolerance is unusually high when they're young. For example, I was very hesitant to drive cars until I was 19 because of the way the risk tolerance curve changes: the per-capita accident rate of a newly licensed 16-year-old driver is far higher than that of a 19-year-old, because of that same curve. Any risk of permanent damage meshes badly with this; it's the first-person version of one of the key components of AI safety, avoiding irreversible state transitions by curious reinforcement learners (see the sketch below).
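A rough illustration of the explore/exploit point, in my framing rather than the author's: an epsilon-greedy bandit whose exploration rate decays with "age". Early high risk tolerance is what calibrates the value estimates; an option with a small chance of an irreversible outcome breaks this logic, because exploring it offers no way back.

```python
# Epsilon-greedy bandit with exploration that shrinks as experience accumulates.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])   # hypothetical payoffs of three options
estimates = np.zeros(3)
counts = np.zeros(3)

total = 0.0
for t in range(1, 2001):
    eps = max(0.01, 1.0 / np.sqrt(t))    # risk tolerance / exploration decays with age
    if rng.random() < eps:
        arm = rng.integers(3)            # explore: try something unfamiliar
    else:
        arm = int(np.argmax(estimates))  # exploit: use what you already know about
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
    total += reward

print("estimated means:", np.round(estimates, 2))
print("average reward:", total / 2000)
# With eps fixed at 0 from the start, the agent locks onto whichever option it
# happened to try first and never learns about the others -- the "you spend all
# your time using whatever few things you know about" failure mode.
```

The catch is that this whole calculation assumes every outcome is recoverable; if one arm can permanently damage the learner, no amount of later caution undoes an early unlucky draw.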
A popular attitude to AI risk. I'm worried that the possibility of permanent changes to the properties of cognition from psychedelic use is not given the level of alarm it deserves. Some confusions should be grappled with for as many centuries as it takes, rather than resolved by resorting to personal experience.