Mmh, interesting, I hadn’t thought of it from the need-for-robustness angle.
Do you think it matters because LLMs are inherently less robust than humans, and therefore you can’t just replace humans with general-ish things? Some companies do work as a combination of micro-entities which are extremely predictable and robust, and the more predictable/robust the better. Do you think that every entity that produces value has to follow this structure?
I disagree with what you said about statelessness, because the AI with translucent thoughts I describe is mostly stateless. The difference between CAIS & AI with translucent thoughts is not the possibility of a state, it’s the possibility of joint training & long “free” text generations, which make hidden coordination & non-micro thinking possible.
That free text generation is state buildup, and state buildup has been the cause of most software failures since the beginning.
Current LLMs are fairly safe because their only state is a single buffer which is human-readable. The danger comes if we start hiding memory cells in network layers, where unexaminable values are stored and, from frame to frame, used in a hidden way (to maximize reward).
When you say translucent thoughts, you are proposing that we specify exactly WHAT the machine can store from frame to frame, and that we can validate this, e.g., by having a different model “pick up” from a frame and complete the task.
If task performance drops because the stored data wasn’t formatted correctly (the AI hijacked the bits to store something else), we can automatically measure this and take action.
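The handoff check described above could be sketched roughly as follows. This is a hypothetical illustration, not an implementation from the discussion: `handoff_score_drop` and the callables it takes (`frame_of`, `complete_a`, `complete_b`, `score`) are made-up names standing in for calls to two independent models and a task grader.

```python
def handoff_score_drop(tasks, frame_of, complete_a, complete_b, score):
    """Average score of model A finishing its own frames, minus the
    average score of model B picking those frames up.

    If the frames really contain everything needed to continue the task
    (no hidden state), the gap should be small. A large gap suggests the
    frames carry information only model A can decode (hijacked bits).

    frame_of(task)    -> model A's intermediate text frame for `task`
    complete_a(frame) -> model A resuming from a frame
    complete_b(frame) -> a *different* model resuming from the same frame
    score(task, out)  -> task-performance score in [0, 1]
    """
    own = cross = 0.0
    for task in tasks:
        frame = frame_of(task)                   # the only state crossing the boundary
        own += score(task, complete_a(frame))    # A resumes its own frame
        cross += score(task, complete_b(frame))  # B resumes A's frame
    n = len(tasks)
    return own / n - cross / n

# Hypothetical usage: flag frames for review when the gap is too large.
THRESHOLD = 0.05  # acceptable cross-model gap (an assumed value)
# if handoff_score_drop(tasks, frame_of, complete_a, complete_b, score) > THRESHOLD:
#     flag_for_human_review()
```

The design choice here is that the check is behavioral: we never inspect the frame's bits directly, we only measure whether an independent model can make use of them, which is exactly the "translucency" property being discussed.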