Sure, it seems plausible that an AI developed by humans will on average end up in an at-least-marginally different region of mindspace than an AI developed by nonhumans.
And an AI designed to develop new pharmaceuticals will on average end up in an at-least-marginally different region of mindspace than one designed to predict stock market behavior. Sure.
None of that implies safety, as far as I can tell.