Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system.
On the fundamental level, there are some particles that interact with other particles in a regular fashion. On a higher level, patterns interact with other patterns. This is analogous to how water waves can interact. (It’s the result of the regularity, and other things as well.) The pattern is definitely real—it’s a pattern in a real thing—and it can “affect” lower levels in that the particular arrangement of particles corresponding to the pattern of “physicist and particle accelerator” describes a system which interacts with other particles which then collide at high speeds. None of this requires physicists to be ontologically basic in order to interact with particles.
It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
Patterns aren’t nothing. They’re the only thing we ever interact with, in practice. The only thing that makes your chair a chair is the pattern of atoms. If the atoms were kept the same but the pattern changed, it could be anything from a pile of wood chips to a slurry of CHON.
But if the sentience doesn’t exist, there is no suffering and no role for morality.
Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)
Maybe that will turn out to be the case—we might find out some day, if we can ever trace how the brain generates its data about sentience and see the full chain of causation.
Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.
It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.
I’ll hunt for that some time, but it can’t be any good: if such a model existed, it would be better known.
Better known to you? Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?
Do you imagine that patterns can suffer; that they can be tortured?
“Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)”
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
“Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.”
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
“It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.”
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines feelings into a model where they have no causal role in any of the action is not building a model that explains anything.
“Better known to you?”
Better known to science. If there were a model for this, it would be up there in golden lights, because it would answer the biggest mystery of them all.
“Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?”
If there were a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
Do you imagine that patterns can suffer; that they can be tortured?
Yes, I do. I don’t imagine that every pattern can.
Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.
Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)
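The contrast in this thought experiment can be made concrete with a toy sketch (hypothetical code, illustrating the distinction rather than anyone’s actual claim): one program maps any input to a canned string, while the other updates an internal state and reports on it.

```python
# Hypothetical illustration of the box/computer thought experiment.

# A program whose "Ouch!" is a canned response: no internal state exists,
# and the input changes nothing.
def hardcoded_ouch(keypress: str) -> str:
    return "Ouch!"

# A program whose "Ouch!" is a report about a state that the input
# actually alters. (Still not a claim of consciousness; the point is only
# that the two cases are mechanically distinguishable.)
class TinySimulation:
    def __init__(self) -> None:
        self.damage = 0  # internal state acted on by the input

    def kick(self, force: int) -> str:
        self.damage += force
        # The output describes the state rather than being hard-wired.
        return "Ouch!" if self.damage > 0 else "..."

sim = TinySimulation()
print(hardcoded_ouch("any key"))  # canned: prints Ouch! regardless of state
print(sim.kick(3))                # report: prints Ouch! because damage > 0
```

Tracing how the data is generated, in this toy picture, amounts to checking which of these two structures produced it.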
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines feelings into a model where they have no causal role in any of the action is not building a model that explains anything.
Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?
I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.
(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event! MIT built an entire course around it once! I found all this by looking for less than a minute on Wikipedia. Are you seriously so certain of yourself that if you haven’t heard of a book before, it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.
“What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more.”
I pointed out before that GEB isn’t specifically relevant to sentience. It’s less detailed than the entire body of neuropsychological information, but that still doesn’t contain an explanation of sentience, as Cooper correctly points out.
I now think that I have a very bad model of how David Cooper models the mind. Once you have something that is capable of modeling, and it models itself, then it notices its internal state. To me, that’s all sentience is. There’s nothing left to be explained.
So is Cooper just wrong, or using “sentience” differently?

I can’t even understand him. I don’t know what he thinks sentience is. To him, it’s neither a particle nor a pattern (or a set of patterns, or a cluster in patternspace, etc.), and I can’t make sense of [things] that aren’t non-physical but also aren’t any of the above. If he compared his views to an existing philosophy then perhaps I could research it, but IIRC he hasn’t done that.
Do you understand what dark matter is?

Nobody knows what it is, in the end, but physicists are able to use the phrase “dark matter” to communicate with each other—if only to theorise and express puzzlement.
Someone can use a term like “consciousness” or “qualia” or “sentience” to talk about something that is not fully understood.
There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don’t waste my time studying their output, but I’ve heard Chalmers talk some sense on the issue.
Looks like it—I use the word to mean sentience. A modelling program modelling itself won’t magically start feeling anything but merely builds an infinitely recursive database.
You use the word “sentience” to mean sentience? Tarski’s sentences don’t convey any information beyond a theory of truth.
Also, we are ourselves modelling programs that model themselves, and we don’t fall into infinite recursion while doing so, so clearly it’s not necessarily true that any self-modelling program will result in infinite recursion.
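A minimal sketch of why self-modelling needn’t recurse forever (hypothetical code, not a claim about brains): the self-model can be a reference back to the system itself, rather than an ever-deeper nesting of copies.

```python
# Hypothetical sketch: a self-modelling structure with no infinite regress.
class Agent:
    def __init__(self) -> None:
        self.state = {"temperature": "warm"}
        self.model = {}  # the agent's model of the world (and of itself)

    def build_self_model(self) -> None:
        # The self-model is a reference, not a nested copy of the agent,
        # so "the model of the model of the model..." never materializes.
        self.model["self"] = self

    def introspect(self) -> str:
        # Consulting the self-model terminates in one step.
        return self.model["self"].state["temperature"]

a = Agent()
a.build_self_model()
print(a.introspect())         # reads its own state through the self-model
print(a.model["self"] is a)   # one object referring to itself, not a tower
```

The design choice here is the whole point: self-reference by pointer is finite, so nothing in self-modelling forces an “infinitely recursive database”.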
“Sentience” is related to “sense”. It’s to do with feeling, not cognition. “A modelling program modelling itself won’t magically start feeling anything.” Note that the argument is about where the feeling comes from, not about recursion.
What is a feeling, except for an observation? “I feel warm” means that my heat sensors are saying “warm” which indicates that my body has a higher temperature than normal. Internal feelings (“I feel angry”) are simply observations about oneself, which are tied to a self-model. (You need a model to direct and make sense of your observations, and your observations then go on to change or reinforce your model. Your idea-of-your-current-internal-state is your emotional self-model.)
Maybe you can split this phenomenon into two parts and consider each on their own, but as I see it, observation and cognition are fundamentally connected. To treat the observation as independent of cognition is too much reductionism. (Or at least too much of a wrong form of reductionism.)
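The “feeling as observation about oneself” picture above can be put in toy form (hypothetical code; the baseline and thresholds are made-up illustrative numbers): a “feeling” report is the system’s summary of its own measured state.

```python
# Hypothetical sketch: "I feel warm" as a self-report on a measurement.
BASELINE_C = 37.0  # assumed normal body temperature (illustrative value)

def feel(sensor_reading_c: float) -> str:
    # The "feeling" here is just the system's description of its own state,
    # produced by comparing a sensor reading against a baseline.
    if sensor_reading_c > BASELINE_C + 0.5:
        return "I feel warm"
    if sensor_reading_c < BASELINE_C - 0.5:
        return "I feel cold"
    return "I feel fine"

print(feel(38.2))  # prints: I feel warm
```

Whether such a report exhausts what a feeling is remains exactly the point under dispute; the sketch only shows the observation-tied-to-a-self-model structure.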
“Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.”
There is no situation where the whole is more than the parts—if anything new is emerging, it is a new part coming from somewhere not previously declared.
“You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.”
No—it wouldn’t hurt, and all other feelings would be imaginary too. But the fact that they feel too real for that to be the case is an indication that they are real.
“Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?”

“That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.”
So if it’s just producing fake assertions, it isn’t wrong. And if we are just producing fake assertions, there is nothing wrong with “torturing” people either.
“Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)”
If we have followed the trail to see how the data is generated, we are not kicking a box with unknown content—if the trail shows us that the data is nothing but fake assertions, we are kicking a non-conscious box.
“Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?”
In which case we should be able to follow the trail and see the causation in action, thereby either uncovering the mechanism of sentience or showing that there isn’t any.
“I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.”
If you’re wrong in thinking you feel pain, there is no pain.
“Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.”
What are you on about? It’s precisely because this is the most important question of them all that it should be up in golden lights.
“(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event!”
All manner of crap wins prizes of that kind.
“...it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)”
If it had a model showing the role of sentience in the system, the big question would have been answered and we wouldn’t have a continual stream of books and articles asking the question and searching desperately for answers that haven’t been found by anyone.
“What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”.”
I mean exactly what I said—everyone’s asking for answers, and none of them have found answers where you claim they lie waiting to be discovered.
“GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.”
It doesn’t answer the question. There are plenty of experts on the brain and its functionality, but none of them know how consciousness or sentience works.