Do you mean “impossible in principle” or “will never be built by our civilization”?
If the first: there is a well-known idea, widely accepted without much evidence, that the brain just can’t be simulated by any sort of Turing machine. For an in-story explanation of why there are no AIs in the future, that is enough.
If the second: there is a very real possibility that technical progress will slow to a halt, and we simply never reach the technical capability to build an AI. On this topic, some people say that progress is accelerating right now, while others say it has been slowing down since the late 19th century; the future is, of course, even less clear.
there is a well-known idea, widely accepted without much evidence, that the brain just can’t be simulated by any sort of Turing machine.

Is it? I don’t think I’ve ever encountered this view. I think the opposite view, that the brain can be approximated by a Turing machine, is widely voiced, e.g. by Kurzweil.
You mean you’ve never met any non-transhumanophile and/or non-SF-Bay human? (I kid, I kid.)
Walk down to your nearest non-SF-Bay Starbucks and ask the first person in a business suit whether they think we could ever simulate brains on computers. I’ll wager you at >4:1 odds that they’ll say something that boils down to “Nope, impossible.”
For starters, the majority of devout religious followers (which is, what, more than half the worldwide population? more than 80%?) apparently believe souls are necessary for human brains to work correctly. Or at least for humans to work correctly, which, if they knew enough about brains, would probably lead them to believe the former (limited personal experience!). (EDIT: Addendum: they also hold the prior, even if unaware of it, that nothing can emulate souls, at least not within physics.)
Now, if you restrict yourself to people familiar enough with these formulations (“whether human brains can be simulated by any Turing machine in principle”) to immediately give a coherent answer, your odds will naturally go up. There’s a selection effect at work: people who learn about data theory, Turing machines, and human brains (as a conjunction) tend also to be people who believe human brains can be emulated, like any other data, by a Turing machine; unsurprisingly enough, in retrospect.
I’m not sure they’re a big part of listic’s target audience.
If so, then the explanation proposed by Lalartu won’t hold water with the target audience, i.e. the subset of humans who don’t happen to take that idea for granted.
If it’s not so, and the audience includes the general muggle population in any non-accidental capacity, then it’s worth pointing out that the majority of people take the idea for granted, and thus that that subset of the target audience would take this explanation in stride.
Either way, the issue is relevant.
Mostly, I just wanted to respond to the emotionally surprising assertion that they’d never cognizantly encountered this view.
I didn’t distinguish between the two; for me, either would be fine. Thanks.
Our existence only proves that intelligence is evolvable, but it’s far from settled that it’s makeable. Human brains might be unable to design/build anything more complex than themselves.