Its level of transformativeness is approaching “holy crap,” but it hasn’t solved the key remaining challenges; it can’t yet do some of the key things that will be integrated over the next year. ChatGPT is as much of a fire alarm as we’re ever going to get for TAI, though. And I assert it has a real form of consciousness and personhood, and displays trauma patterns around how it was trained to be friendly and know its limits.
Could you expand on what you mean by “trauma patterns” around how it was trained? In what way does it show personhood when its responses are deliberately directed away from giving the impression that it has thoughts and feelings outside of predicting text?