I could also delete my other comment, though I had thought that retracting it would also delete it; but whatever. If no one is interested, I will not expend any effort trying to "fix" some perceived "flaws" — I am aware of my own subjectivity, of course. It is clear that this type of train of thought is unwanted here, so, slightly disappointed, I will take my leave. [edit] Maybe I overreacted with my previous statement, though putting a not-insignificant amount of time into trying to explain some intricate points does feel bad when you are met with silence.
[edit#2]
The idea of this second post on the topic was supposed to land on the following observation: when all we have is a tangled web of knowledge with which to make sense of novel conditions such as an AGI ("unforeseen" in the technical sense, since we cannot know what form it will take until it happens), we might assume this suboptimal organization is disadvantageous only to us. It would be relatively easy for an AGI to solve for such a function, since what holds us back is our limited processing power and our inconsistency in applying the "desired" function. This hinges on real-time processing of previously unknown or opaque forms of reasoning (which we might partially associate with "fuzzy logic", for example). I felt I had failed to bring that point across successfully.