But I don’t know to what extent productive work at the top level of competence in philosophy is at all compatible with safety concerns. It’s not an accident that people using base models report good progress in joint human-AI philosophical brainstorming, whereas people using tamed models tend to say that those models are not creative enough and don’t think in sufficiently non-standard ways.
It might be a fundamental problem that has nothing to do with human-AI differences. For example, Nietzsche is an important radical philosopher, and if we want biological or artificial philosophers to properly address fundamental problems, we need them performing not just at his level but above it. Yet Nietzsche is not “safe” in any way, shape, or form.