Great post! Thanks for writing it. Seems like a good fit for Main.
So just to clarify my understanding: If the ULH is true it becomes more plausible that, say, playing video games and hating books because authority figures force you to read them in school have long-term broad impacts on your personality. And if the EMH is true, it becomes more plausible that important characteristics like the Big Five personality traits and intelligence are genetically coded and you become the person your genes describe. Correct?
Yudkowsky’s AI box experiments and that entire notion of open boxing are a strawman, a distraction.
We humans have contemplated whether we are in a simulation even though no one “outside the Matrix” told us we might be. Is it possible that an AI-in-training might contemplate the same thing?
In general the evidence from the last four years or so supports Hanson’s viewpoint from the Foom debate.
Really? My impression was that Hanson had more of an EMH view.
So just to clarify my understanding: If the ULH is true it becomes more plausible that, say, playing video games and hating books because authority figures force you to read them in school have long-term broad impacts on your personality.
I largely agree with this, but I would replace ‘personality’ with ‘mental software’, or just ‘mind’. ‘Personality’ to me connotes a subset of mental aspects that are more associated with innate variables.
I suspect that enjoying/valuing learning is extremely important for later development. It seems probable that some people are born with a stronger innate drive for learning, but that drive itself can probably also be adjusted through learning. I’m not aware of any hard evidence on this matter, though.
In my case I was somewhat obsessed with video games as a young child and my father actually did force me to read books and even the encyclopedia. I found that I hated the books he made me read (I only liked sci-fi) but I loved the encyclopedia. I ended up learning how to quickly skim books and fake it enough to pass the resulting QA test.
And if the EMH is true, it becomes more plausible that important characteristics like the Big Five personality traits and intelligence are genetically coded
I don’t think abstract high-level variables like Big Five personality traits or IQ scores are the relevant features for the EMH vs ULH issue. For example, in the ULH scenario there is still plenty of room for strongly genetically determined IQ effects (hardware issues/tradeoffs), and personality variables are not complex cognitive algorithms.
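To make that hardware-vs-software distinction concrete, here is a minimal toy sketch (my own illustration, not anything from the post or the ULH literature): genes fix parameters like capacity and plasticity of a universal learner, while the learned content is supplied entirely by the environment.

```python
# Toy sketch, purely illustrative: under a ULH-style reading, genes can fix
# "hardware" parameters of a universal learner, while the cognitive
# algorithms/content are acquired from experience.

import random

class UniversalLearner:
    def __init__(self, capacity, learning_rate):
        # "Genetic" hardware parameters, fixed before any experience:
        self.capacity = capacity            # e.g. effective network size
        self.learning_rate = learning_rate  # e.g. plasticity tradeoff
        # "Mental software": starts empty, filled in entirely by experience.
        self.associations = {}

    def learn(self, stimulus, outcome):
        # What gets learned depends on the environment; how fast and how much
        # can be learned is bounded by the hardware parameters above.
        if stimulus in self.associations or len(self.associations) < self.capacity:
            old = self.associations.get(stimulus, 0.0)
            self.associations[stimulus] = old + self.learning_rate * (outcome - old)

# Two "genotypes" with different hardware, raised in the same environment:
fast = UniversalLearner(capacity=1000, learning_rate=0.5)
slow = UniversalLearner(capacity=1000, learning_rate=0.05)
for _ in range(100):
    stimulus = random.choice(["books", "video games"])
    fast.learn(stimulus, outcome=1.0)
    slow.learn(stimulus, outcome=1.0)

# Same learning algorithm, same experiences; any measured difference (an
# IQ-score analogue) traces back to the hardware parameters, not to
# genetically coded cognitive algorithms.
print(fast.associations, slow.associations)
```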
We humans have contemplated whether we are in a simulation even though no one “outside the Matrix” told us we might be. Is it possible that an AI-in-training might contemplate the same thing?
Sure, and this was part of what my post from 5 years back was all about. It’s kind of a world design issue. Is it better to have your AIs in your testsim believe in a simplistic creator god? (That is in a way on the right track with regard to the sim arg, but it also doesn’t do them much good.) Or is it better for them to have a naturalist/atheist worldview? (Potentially more dangerous in the long term, as it leads to scientific investigation and eventually the sim arg.)
That post was downvoted into hell, in part, I think, because I posted it to Main; I was new to LW and didn’t understand the Main/Discussion distinction. Also, I think people didn’t like the general idea of anything mentioning the word theology, or the idea of intentionally giving your testsim AI a theology.
Really? My impression was that Hanson had more of an EMH view.
I should clarify: I meant Hanson’s viewpoint on just the FOOM issue specifically, as outlined in that one post, not his whole view on AGI, which I gather is very much an EMH-type viewpoint. He seems pessimistic about first-principles AGI, also pessimistic about brain-based AGI, but optimistic about brain uploading. However, many of his insights/speculations on a brain-upload future also apply equally well to a brain-based AGI future.
Re: AIs in a simulation, it seems like whatever goals the AI had would be defined in terms of the simulation (similar to how, if humanity discovered we were in a hackable simulation, our first priorities would be to make sure the simulation didn’t get shut off, invent immortality, provide everyone with unlimited cake, etc., all concerns that exist within our simulation). So even if the AI realizes it’s in a simulation, having its goal defined in terms of the simulation probably counts as a weak security measure.
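A minimal toy sketch of that structural point (purely illustrative, with made-up names; not a real safety mechanism): if the reward is computed only from variables inside the simulation, then a belief about being simulated has no direct channel into the objective, though it can still change the agent’s predictions.

```python
# Toy sketch, purely illustrative: the agent's goal is a function of
# simulated-world state only, so "I am in a simulation" never enters the
# objective; it can only affect the agent's predictions about outcomes.

def reward(sim_state):
    # Computed entirely from variables inside the simulation.
    return sim_state["cake_delivered"] - sim_state["people_harmed"]

def choose_action(beliefs, sim_state, actions, transition):
    # The beliefs argument (e.g. {"in_simulation": True}) is deliberately
    # unused here: whatever the agent concludes about its metaphysical
    # situation, the ranking of actions is set by the in-sim reward.
    return max(actions, key=lambda a: reward(transition(sim_state, a)))

# Hypothetical world model for the example:
def transition(state, action):
    new_state = dict(state)
    if action == "bake":
        new_state["cake_delivered"] += 1
    return new_state

state = {"cake_delivered": 0, "people_harmed": 0}
best = choose_action(beliefs={"in_simulation": True},
                     sim_state=state,
                     actions=["bake", "idle"],
                     transition=transition)
print(best)  # "bake": the in-simulation goal still dominates
```

It is only a weak measure, as noted above: simulation beliefs can still matter instrumentally, e.g. for predicting whether the simulation gets shut off.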