When ‘X’ Negotiates With ‘Y’
How to balance and improve my thinking systems?
A dear friend sent me a video that I loved: a talk by Yann LeCun, Chief AI Scientist at Meta and NYU professor, about artificial intelligence, its problems, and its hallucinations. And I noticed: I have problems and hallucinations too; it seems this applies to me.
From what I understand, AIs are good at System 1 thinking: fast, automatic, programmed responses without deep reflection. They lack System 2 thinking: slow and deliberate, with the capacity for self-reflection, adaptation to new contexts, and reformulation of understandings. And look at that! It seems I lack System 2 too. How to balance this?
Among the various proposals for improving System 2, I selected a few: reflective writing, counsel and feedback from trusted people, and reading different perspectives. And well, I don’t know a more reliable community than LessWrong, which is why I’m writing to you. Additionally, it seems that analogies can help a lot in understanding complex systems more deeply, so I’m writing a big analogy.
This series of dialogues and games is designed to stimulate probabilistic reasoning in my teenage students and in myself. For me, these dialogues represent a personal challenge to overcome those ‘elegant and rebellious excuses’ that sometimes make it difficult for me to dedicate small moments to making better decisions.
So, to better invest in my life – and ideally in the market – I write to achieve a better balance between my thought systems. I present to you a story that progresses from the simplest duality (black and white), moving through intermediate shades (tones), until reaching the full spectrum (color gradient), and even beyond the visible (invisible spectrum).
The concepts addressed by negotiation:
Black and White—How computer science principles can be applied to self-knowledge
Shades—How to define human information levels based on their characteristic limits:
i. elemental,
ii. emotional,
iii. social, and
iv. intellectual
Color Gradient—How to define and organize hierarchies of human functions and values
Prequel—Application of information entropy to self-knowledge
1- BLACK AND WHITE
Which computer science principles can be applied to self-knowledge?
I-ALMOST FAST AND SEMI-SLOW
In the near future, you find yourself among a group of computer scientists. AI systems have reached unprecedented sophistication, splitting into two distinct entities that mirror our own thinking systems:
Y (System 1): fast, intuitive, automatic
X (System 2): slow, analytical, reflective
‘Y’, known as Yankee, has evolved by prioritizing learning through human interactions. It has become faster and more intuitive, though it consumes ever more energy in the process.
Meanwhile, ‘X’, called Xray, is less developed; it focuses on diving deep into complex problems and programming itself to solve them. Its capacity is limited by Yankee’s high energy demands.
Your group notices that Xray struggles to function effectively, and consequently so does Yankee. To understand and correct their complex code, the group implements a program that personifies both systems, bringing to life two distinct characters:
“X”
“What movie to watch? Shouldn’t I be working on something? Wait, but who am I really?”… And so, while choosing a movie, “X” faces an existential crisis.
Religion, philosophy, and psychology offered interesting answers, but none provided anything solid enough for “X” to handle uncertainty.
Because of this, “X” no longer trusted anything without understanding its causes and consequences in detail, as if he were a pastor whose God was doubt.
But in his quest to gain more perspectives, “X” found a bit of order where he least expected it: in the principles of uncertainty.
However, it was all too theoretical and demanded so much energy that, little by little, “X” started becoming shallow and losing motivation.
That’s why he decided to approach “Y,” who has the determination to climb ever higher but generally ignores him.
“Y”
“Y” seems to have reached a kind of existential resolution, as if he’s already unlocked life’s secrets. He has a deep, well-established ancestral experience that has worked for thousands of years.
“Y” doesn’t mix what he knows; everything is in its place. But occasionally, he has fun with religion, philosophy, and psychology, usually with sarcastic comments.
“Y” behaves like a scientist who, having resolved all essential questions, now cruises through life comfortably, avoiding what he doesn’t understand to avoid complications.
And when “Y” encounters something he doesn’t understand, he has an automatic tendency to deny it, almost as if he were a terrorist of rationality.
However, beneath this focused outlook, “Y” hides a curiosity—a crack in the armor.
This small vein of wonder, though almost imperceptible, is what prevents him from running away from the bothersome “X.”
Now, you, reader, can see why it’s important to propose a new energy distribution ratio for ‘X’ and ‘Y’ that isn’t based solely on popularity. Take a look at the image below, and here’s a link just for you.
After attempts at energy redistribution, your group of scientists refuses to increase Xray’s resources. You observe the interaction between the intelligences:
In a vast ocean of Cartesian data, Xray struggles in isolation, trying to build small islands of order where it can function. To progress, it needs to harness the energy and inherited knowledge of Yankee, who prefers to focus on immediate problems and simplification rather than questioning or analyzing. This fundamental difference creates tension, especially for Xray, who is forced to negotiate for every unit of energy.
How do you negotiate with an intelligence that has inherited millennia of human evolution? This is Xray’s true challenge, and your task is to analyze how to distribute energy between both intelligences.
After countless integration attempts, Xray adapts its strategy: it learns to use analogies and practical examples to demonstrate its value and gain more resources from Yankee. Though Yankee maintains its characteristic sarcasm, a subtle evolution emerges beneath their exchanges, the fruit of countless negotiations between deep analysis and intuition. And they talk:
Yankee, with a deep, irritated look:
— Oh, sure… you want to organize your thoughts as if they were codes.
Xray, with his habitual probabilistic care:
— Exactly—in some proportion—Yankee.
Yankee, raising an eyebrow, with a deep, annoyed look:
— How exactly do you plan to “split” this flood of codes? Do you think you’re Moses or something?
Xray, smiling softly, in a contemplative tone:
— Loved the Moses reference, Yankee. Though I’m not sure Moses knew how to write code.
Yankee, mockingly:
— And?
Xray, still smiling softly, in a contemplative tone:
— About organizing processes, Yankee. Maybe we’d have a starting point if I split, in some proportion, that “flood” into two categories. Like separating the waters: S1, the automatic information; and S2, the slower, more deliberate kind.
Yankee, with sarcasm and a hint of disdain:
— Nothing new… the scientists already pulled that one out of their hat.
Xray, nodding, with a calm, reflective tone:
— So it seems, Yankee. But that research did get the Oscar of science.
Yankee, looking at him in disbelief:
— The Oscar? Oh, then we’re saved. I assume you used some formula to avoid thinking—what you meant was the Nobel.
Xray, shrugging:
— So it seems, Yankee, but I thought it would resonate more with you if I referenced an award you see on TV...
Yankee, with a dry laugh:
— Ha… Ha… Ha… Very funny, Xsitcom. Now you’re a scientific comedian… and what’s more, the one who parted the Red Sea.
Xray, with a mischievous, deep look:
— Fast and slow… like two ways to laugh at a joke.
Yankee, challenging:
— Sure, either genuinely or with sarcasm, like me now.
Xray, a bit unsure but smiling:
— Honestly, I love your sarcasm; it gives me energy to think about your message.
Yankee, mockingly:
— Wow, you really settled for the bare minimum, Xray.
Xray, suddenly serious:
— On the other hand, I have the ambition to better organize millions of processes with you, Yankee.
Yankee, incredulously:
— You go from an attention beggar to a process millionaire… it really busts my nuts…
Xray, focusing with a gentle imagination:
— Yes, but I get on your nuts with order. Each nut has its turn: fast or slow. And maybe a more proportional division could be a starting point to better organize our functions, identities, and values, and to improve our goals, routines, and tasks…
Yankee, standing up with feigned resignation:
— To me, it just sounds like you’re afraid of becoming obsolete, and here you are trying to impress me with Moses’ analogies. Why would I even give you my energy? What are you really good for, Mister “X”?
Xray, chuckling:
— Imagine you want to make money by selling coffee. You could just open a random coffee shop, or you could process more information by analyzing: say you studied 300 coffee shops in detail and decided to open one that sells avocado coffee because, even though they seem incompatible, in this city they’re both equally popular.
Yankee, feeling relieved:
— Finally, something interesting to snack on…
Xray, in a casual tone:
— When you’ve got thousands of people doing everything, details might be crucial. Or am I just talking nonsense? Think about it—if you’re even just 0.5% better, you could be ahead of 40 million people, based on my calculations. Not bad, right?
Yankee, with a mischievous smile:
— Okay, fine, Herbalife salesman, if there’s free avocado coffee, I can see how to help you.
(To be continued...)
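For readers who want to poke at Xray’s “0.5% better puts you ahead of 40 million people” line, here is a minimal sketch of the arithmetic. The world-population figure of roughly 8 billion is my assumption for illustration; the dialogue never states it.

```python
# Sketch of Xray's claim: moving up 0.5 percentile points in a population
# of ~8 billion people (an assumed figure, not from the dialogue) means
# passing 0.5% of them.
world_population = 8_000_000_000   # rough estimate of people worldwide
improvement = 0.005                # 0.5 percentile points, as a fraction

people_passed = int(world_population * improvement)
print(people_passed)  # 40000000, i.e. Xray's "40 million people"
```

Of course, this treats skill as a single global ranking, which is the generous reading of Xray’s back-of-the-envelope math.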
APPENDIX 1: Personal Story
For years, humor was my quick (S1) method for avoiding conflict and gaining acceptance. It was an automatic mechanism that allowed me to escape problems without overthinking. I can see now that it was driven by a certain phase in my life, which I’ll share with you.
When I was a kid, I went to school in a tough neighborhood. I didn’t have many friends, and my interest in philosophy only earned me punches to the ear from the older boys. There I was, 11 years old, while they were 14 or older, practically lining up to take a shot at my ear. Well, back then my ear was massively out of proportion—these days, not so much. I grew around my ear, so it balanced out a bit. But at the time, it was an easy, soft target for them to hit.
I remember a friend of my mom’s, upon hearing me complain, saying something like, “They don’t have the same luck as you; they’re envious.” So, one day, in the midst of one of those episodes, I randomly responded,
— You’re envious of my ear…”
— What? Why would I be envious of that thing?”
Confused and not really understanding, I tried to improvise:
— You can put a pencil behind yours; I can put a pencil, a notebook and even my backpack behind my ear.”
They laughed. I was surprised and thought, “What just happened? Incredible! If I say silly things, the situation improves.” It took me 11 years to realize what chimpanzees already know: laughing can prevent fights. So, I started acting funny as a form of protection. Thus, being silly made my life much easier.
Over time, humor became an excuse. I abandoned what I loved about philosophy and pursued what I found fun, living in a “carpe diem” mentality to escape responsibilities. I even became a military firefighter and was punished for saying that the basis of first aid was “What’s a fart to someone who’s already shit themselves?”
And with that behavior, I spent almost 19 years running from problems (Hakuna Matata!). But then I faced something I couldn’t solve with jokes, and thanks to two great rationalist friends—who I still don’t quite understand why they valued my presence talking about my big ear while they discussed society’s problems—I was encouraged to start reasoning again and to try to balance seriousness with humor. Thank you so much, Fernando and Zé.
But it’s still really hard for me, which is why these dialogues are the best cost-benefit option I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?
*Many thanks, Nathaly Quiste, for the artwork you designed and for the care you put into it.