links 12/18/2024: https://roamresearch.com/#/app/srcpublic/page/12-18-2024
https://hearth.ai/thesis keeping track of people you know. as an inveterate birthday-forgetter and someone too prone to falling out of touch with friends, I bet there are ways for AI tools to do helpful things here.
https://www.statista.com/chart/33684/number-of-confirmed-human-h5n1-cases-by-exposure-source confirmed human H5N1 cases in the US, by exposure source. mostly California, mostly livestock handlers. 61 cases so far.
https://www.theintrinsicperspective.com/p/consciousness-is-a-great-mystery Eric Hoel says that “consciousness researchers” straightforwardly agree on what consciousness is.
Consciousness is:
the subjective experience of perceiving; Thomas Nagel’s “what it is like to be a bat”; qualia
awake states (as opposed to dreamless sleep, anaesthesia, coma, etc)
things we are mentally aware of (perceptions, thoughts, emotions, etc) as opposed to things we are not aware of (most autonomic processes, blindsight, “subconscious” motives)
the fact that we do not have a scientific account of what consciousness is made of doesn’t mean consciousness doesn’t exist, or that it’s inherently mystical or incoherent. Isaac Newton had never heard of “H2O”, but he knew what water was. The point of science is to give explanations for the things we know about experientially but don’t fully understand.
A “theory of consciousness” would allow us to, given some monitoring data of brain activity in an organism, determine whether the organism is conscious or not, and what it is conscious of.
is the anaesthesia patient conscious?
is the locked-in patient conscious?
which animals have consciousness?
I’ve long had a vague sense of suspicion around consciousness research and the idea of qualia, but I’ve never really been able to put my finger on why.
When defined crisply like this, it does seem clear that consciousness is a real, mundane thing (if a nurse says “the patient is unconscious” there’s no confusion about what that means).
But why is consciousness mysterious? why is it a “hard problem”?
David Chalmers’ “hard problem of consciousness” refers to the difficulty of explaining how physical processes give rise to subjective experiences. Even if you explained a lot of brain mechanisms that have to go on for us to consciously experience something, would that really cross the explanatory gap?
I think this is what has turned me off “consciousness”, because I don’t get why there’s supposed to be a gap.
If we had some full explanation based on patterns of brain activity, like “you consciously perceive a bright light precisely when the foo blergs the bar”, then...I think there wouldn’t be any mystery left!
I agree that e.g. “you see a bright light when the visual cortex is stimulated” is not enough, because you don’t see it if you’re unconscious, and we don’t have a necessary-and-sufficient physical correlate of consciousness. but, like, Eric Hoel and apparently a lot of mainstream neuroscientists are saying that we could find such a thing.
I guess you could keep asking “ok, the foo blerging the bar produces the phenomenon we experience as consciousness, but why does it?” and it would be hard to come up with any experimental way to even approach that question...
but that’s an “explanatory gap” that comes up everywhere and we’re usually happy to live with.
it also depends what kind of “why” you want.
if you’re asking “why does it produce consciousness” as in “what’s the efficient cause?” or “how does it work to produce consciousness?” then I think all how-does-it-work questions are going to have to be about physical (or algorithmic) processes. and if you say “well but my subjective experience is not even really commensurate with these kinds of objectively observable processes, it’s a different sort of thing, how can it ever emerge from them” then...you are SOL? “how” questions will never satisfy you?
if you’re asking “why does it produce consciousness” in a final-cause sense, like what is the use of consciousness, then I think we can have fruitful ideas. “why don’t organisms operate on pure blindsight” is an interesting question! (pace Peter Watts, i think it must have some evolutionary function or we wouldn’t have it.)
I think p-zombies are stupid: obviously, just because you can verbally say you’re “imagining” something exactly the same down to every physical detail but magically different in its properties doesn’t mean it’s possible!
ok, so: my beef with “consciousness studies” is primarily with the non-physicalists who say that even if we had a perfect neural correlate of consciousness, we still wouldn’t understand consciousness as a subjective experience. but what I didn’t realize is that there are neuroscientists interested in consciousness who just want to find that neural correlate, and don’t necessarily have any weird philosophical assumptions.
https://www.science.org/doi/10.1126/science.abj3259
The global neuronal workspace theory of consciousness says that consciousness is produced by an “interconnected network of prefrontal-parietal areas and many high-level sensory cortical areas.”
early sensory processing is unconscious.
stimuli are sometimes attended to (made conscious), a process which involves sending (pre-processed) signals about the stimuli through the prefrontal and parietal areas which control executive function, and distributing them to a bunch of other areas of the brain as part of the current working context.
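As a caricature, the attend-and-broadcast mechanism described above can be sketched in a few lines; the module names and salience scores here are made up purely for illustration, and this is in no way a model of the actual theory:

```python
# Toy caricature of global workspace "ignition": specialized processors
# compute in parallel (unconsciously); the most salient signal wins
# access to the workspace and is broadcast to every module.

def workspace_broadcast(signals):
    """signals: dict mapping module name -> (content, salience)."""
    # unconscious stage: each module has only its own local content
    winner = max(signals, key=lambda m: signals[m][1])
    content, _ = signals[winner]
    # conscious stage: the winning content becomes globally available
    return {module: content for module in signals}

signals = {
    "visual":   ("bright light", 0.9),  # hypothetical salience scores
    "auditory": ("faint hum",    0.2),
    "somatic":  ("itchy elbow",  0.4),
}
print(workspace_broadcast(signals))  # every module now "sees" the bright light
```

The point of the sketch is just the asymmetry: lots of local processing happens, but only one stream gets distributed as shared working context.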
IIT is an information-theoretic theory of consciousness; it says that consciousness is measured by the power of a neuronal network to influence itself. “The more cause-effect power a system has, the more conscious it is.”
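A toy way to see what “cause-effect power” could mean: for a deterministic system with a uniformly random past state, the mutual information between past and present reduces to the entropy of the induced present-state distribution. This is a crude proxy I’m using for illustration, nothing like a real IIT Φ computation, but it shows why a permutation (each state fully constrains its successor) scores higher than a constant map (the past constrains nothing):

```python
import math
from collections import Counter

def effective_information(tpm):
    """tpm[i] = successor of state i in a deterministic toy system.
    With a uniform distribution over past states, MI(past; present)
    equals the entropy of the present-state distribution."""
    n = len(tpm)
    counts = Counter(tpm)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

permutation = [1, 2, 3, 0]  # every state maps to a distinct successor
constant    = [0, 0, 0, 0]  # all states collapse to state 0

print(effective_information(permutation))  # 2.0 bits: maximal self-influence
print(effective_information(constant))     # 0.0 bits: no self-influence
```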
Facebook already reminds me when my friends have birthdays, but recently I noticed that it also offers to write a congratulatory comment for me; I just need a single click to send it. Now, Facebook has an obvious incentive to keep me returning to their page every day, so they are not going to fully automate this.
The next necessary functionality would be writing automated replies. I think that could be achieved by LLMs; I just need some service to do it automatically. That way I could have a rich social life, without the need to interact with humans.
I don’t want automatic messages; that seems too inhuman. I do want things like reminders to follow up with people I haven’t talked to for a while, with context awareness for social appropriateness. like, i wouldn’t know how to reach out to my roommate/best friend from college; we haven’t talked in 16 years! but maybe the right app could keep that from happening in the first place, or create a new normalized type of social behavior that’s “reaching out after a long time apart” or whatever.
The description on the page you linked—“augments the brain’s ability to reason on a) who am I, b) who are you, and c) who are you to me, now and over time”—leaves a lot to the imagination. Sounds like a chatbot that will talk to you about your contacts?
Maybe try finding out their birthday (on social networks, by online research, or maybe ask a mutual friend), and then set up a reminder. “Happy birthday, we haven’t seen each other for a while, how are you?” Sounds to me like a socially appropriate thing (but I am not an expert).
Also, spend 5 minutes by the clock writing a list of people you would like to stay in contact with.
Now, I guess the question is how to set up a system that will let you store the data and provide the reminders. The easiest version would be a spreadsheet where you enter the names and birthdays, and some system that reads it and prepares notifications for you. A more complicated version would allow you to write more data about the person (how do we know each other, what kinds of activities did we do together, when was the last time we talked), and group the people by categories. You could make an AI go through your e-mail archive and compile an initial report on each person.
I would probably feel very uncomfortable doing this online, because it would feel like I am making reports on people, and the owner of the software will most likely sell the data to any third party. I would want this as a desktop application, maybe connected to a small phone app, to set up the reminders. But many people seem to prefer online solutions as more convenient, privacy be damned.
(The phone reminders could be like: “Today, XY has a birthday; you have their phone number, e-mail, and Less Wrong account. Your relationship status is: you have met a few times at a LW meetup. Topics you usually discuss: AI, kitten videos.”)
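The spreadsheet-plus-notifications setup described in this thread is simple enough to sketch. Here the “spreadsheet” is an inline CSV string and the column names (`name`, `birthday`, `context`) are made up for illustration; a real version would read a local file and hook into whatever notification system you use:

```python
import csv
import io
from datetime import date

def reminders(csv_text, today):
    """Return a reminder line for each contact whose birthday is today."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        month, day = (int(x) for x in row["birthday"].split("-"))
        if (today.month, today.day) == (month, day):
            out.append(f"Today, {row['name']} has a birthday. "
                       f"How you know each other: {row['context']}")
    return out

# hypothetical contacts, stored locally (no third party ever sees them)
contacts = """name,birthday,context
Alice,12-18,college roommate
Bob,03-05,LW meetup
"""

print(reminders(contacts, date(2024, 12, 18)))
```

Keeping it as a plain CSV plus a cron job or phone alarm also addresses the privacy worry: the data never has to leave your own machine.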