The problem appears to be underspecified: Trivial by post-G-increase standards or trivial by current standards?
If it’s by current standards, then we should invest in figuring out the pattern of trivial details that go disproportionately unnoticed. I’m not sure how we can use information before we’ve obtained it, unless I’m missing something obvious about causality. The logic of not being able to act on that realization before I’ve had it seems air-tight to me. The exception would be a tool you can use without realizing it, in which case I’m not sure why I haven’t already used it; and even if I have been using it unawares, that doesn’t solve the problem of my not consciously knowing it before I’ve realized it.
If it’s not by current standards, then it would depend on what “trivial” means to a being significantly more generally intelligent than myself. If “trivial” in that sense bears any relation to being able to explain a concept easily, then such an intelligence should be able to communicate the idea to us, even though it may be non-trivial by our current understanding.
Replaying memories of conversations with agents of lesser immediate intelligence (read: knowledge? understanding?) than my then-self, the only trivial explanation I’ve been able to determine is that there is no shortcut to G-increase. Once you stop looking for secret backdoors to decoding the mechanics of reality, you’re liable to suffer fewer distractions in doing so. I’m unsure whether this qualifies as obvious or trivial, or what the total cognitive relatedness of the two concepts is.
If there is an easy button, it is trivial by our standards to recognize it retrospectively: “This is the button such that people who press it get smarter.”
I thought you were saying that there is no easy button, not that any easy button is difficult to recognize.
All that said, I think one of the major optimizations many people could make is to perform VOI (value-of-information) calculations before looking for an easy way. The return-on-work for looking for easier ways of doing something is often very low (in cases where lots of people have already looked) or very high (in cases where few people have looked), yet the effort people actually spend on searching seems to be distributed the reverse way.
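For concreteness, here is a minimal sketch of the kind of back-of-the-envelope VOI comparison I mean. The function name and all the numbers are illustrative assumptions, not data about actual search payoffs:

```python
# A rough value-of-information check before searching for an "easier way".
# All figures are made up for illustration.

def voi_of_searching(p_find, hours_saved_if_found, search_hours):
    """Expected hours saved by searching for an easier method,
    net of the time the search itself costs."""
    return p_find * hours_saved_if_found - search_hours

# Well-trodden problem: many people have already looked, so the chance
# of finding an overlooked shortcut is small -> searching is net negative.
print(voi_of_searching(p_find=0.02, hours_saved_if_found=10, search_hours=2))  # -1.8

# Neglected problem: few people have looked, so a short search is
# much more likely to pay off -> searching is net positive.
print(voi_of_searching(p_find=0.4, hours_saved_if_found=10, search_hours=2))   # 2.0
```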
Ah, that helps me understand what went wrong in the communication of my sentiment. Thank you for clearing up that inferential silence.