I worry that a lot of discussions about AI are conducted via metaphor or by appeal to past events. It's easy to construct a metaphor that matches any given future scenario, and we shouldn't assume that building an artificial brain is (or isn't!) like anything that has happened before.
I agree that using metaphors to predict the future is problematic, but predicting the future is really hard. If we don't have a good inside view of what's likely to happen, the best we can do is extrapolate from what has happened in the past.