They are very similar. Kant does not claim that we have no information about reality, and the linked article does not only say that our intuitions are sometimes wrong...
This statement, for example, is very “Kantian”:
Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.
Kant does not claim that we have no information about reality
Kant says that we can know about the representations that appear in the manifold of appearances provided to us by our senses. But, in his view, we can know nothing, zip, zilch, nada, about whatever it is that stands behind those sensory representations.
In a sense, Kant takes the map/territory distinction to an extreme. For Kant, the territory is so distinct from the map that we know nothing about the territory at all. All of our knowledge is only about the map.
That is also what the linked article seems to entail. The statement I quoted, as I understand it, says that all the information we have about reality is the result of “some cognitive algorithm” (= the representations that appear (...) provided by our senses).
The map is certainly a kind of information about the territory (though we cannot know it with certainty). Strictly speaking, Kant does not say we have no information about reality; he says we cannot know whether we do or not.
Strictly speaking, Kant does not say we have no information about reality; he says we cannot know whether we do or not.
I don’t think that Kant makes the distinction between “knowing” and “having information about” that you and I would make. If he doesn’t outright deny that we have any information about the world beyond our senses, he certainly comes awfully close.
On A380, Kant writes,
If, therefore, as the present critique obviously requires of us, we remain true to the rule established earlier not to press our questions beyond that with which possible experience and its objects can supply us, then it will not occur to us to seek information about what the objects of our senses may be in themselves, i.e., apart from any relation to the senses.
And, on A703/B731, he writes,
[I]f charming and plausible prospects did not lure us to reject the compulsion of these doctrines [i.e., doctrines for which Kant has argued], then of course we might have been able to dispense with our painstaking examination of the dialectical witnesses which a transcendent reason brings forward on behalf of its pretensions; for we already knew beforehand with complete certainty that all their allegations, while perhaps honestly meant, had to be absolutely null and void, because they dealt with information which no human being can ever get.
(Emphasis added. These are from the Guyer–Wood translation.)
Does anyone smell irony in this whole discussion? Considering the OP specifically derided the whole “discussion of old, dead guys” thing?
Ah, I wish this wasn’t a three year old post. I have no idea how this site works yet, so who knows whose attention I’ll attract by doing this?
At least the person whose comment you’re replying to sees your reply, so you weren’t speaking entirely into the void :).
Ok, it depends what you mean by “information about”. My understanding is that we have no information on the nature of reality, which does not mean that we have no information from reality.
I agree that we get information from reality. And I think that we agree that our confidence that we get information from reality is far less murky than our concept of “the nature of reality”.
Kant, being a product of his times, doesn’t seem to think this way, though. Maybe, if you explained the modern information-theoretic notion of “information” to Kant, he would agree that we get information about external reality in that sense. But I don’t know. It’s hard to imagine what a thinker like Kant would do in an entirely different intellectual environment from the one in which he produced his work. I’m inclined to think that, for Kant, the noumena are something to which it is not even possible to apply the concept of “having information about”.
Suggestion: knowledge of what a thing is in itself is like information that is not coded in any particular scheme.
I suppose it’s a virtue of that interpretation that ‘information that cannot be coded in any particular scheme’ is a conceptual impossibility (assuming that’s what you meant).
Yes. You can make such an interpretation of the Ding an sich.
For my money, that lessens its impact.
If you are a cognitive algorithm X that receives input Y, this allows you to “know” a nontrivial fact about “reality” (whatever it is): namely, that it contains an instance of algorithm X that receives input Y. The same extends to probabilistic knowledge: if in one “possible reality” most instances of your algorithm receive input Y and in another “possible reality” most of them receive input Z, then upon seeing Y you come to believe that the former “possible reality” is more likely than the latter. This is a straightforward application of LW-style thinking, but it didn’t occur to Kant as far as I know.
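(To make the probabilistic step concrete, here is a minimal sketch; the two “possible realities” and the likelihood numbers are my own toy assumptions, not anything from the thread.)

```python
# Toy Bayesian update over two hypothetical "possible realities".
# In reality_A most instances of the algorithm see input Y; in reality_B few do.
prior = {"reality_A": 0.5, "reality_B": 0.5}
p_see_Y = {"reality_A": 0.9, "reality_B": 0.2}  # assumed likelihoods of observing Y

# Upon observing Y, reweight each reality by how strongly it predicted Y.
unnormalized = {r: prior[r] * p_see_Y[r] for r in prior}
total = sum(unnormalized.values())
posterior = {r: w / total for r, w in unnormalized.items()}

print(posterior)  # reality_A ~= 0.82, reality_B ~= 0.18
```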
If I am a cognitive algorithm X that receives input Y, I don’t necessarily know what an algorithm is, what an input is, and so on. One could argue that all I know is ‘Y’. I don’t necessarily have any idea of what a “possible reality” is. I might not have a concept of “possibility” or of “reality”.
Your way of thinking presupposes many metaphysical concepts that have been questioned by philosophers, including Kant. I am not saying that this line of reasoning is invalid (I suspect it is a realist approach, which is a fair option). My personal feeling is that Kant is upstream of that line of reasoning.
But I do know what an algorithm is. Can someone be so Kantian as to distrust even self-contained logical reasoning, not just sensations? In that case how did they come to be a Kantian?
Do you? I find the unexamined use of this particular concept possibly the most problematic component of what you call “LW-style thinking.” (Another term that commonly raises my red flags here is “pattern.”)
What do you find dubious about the use of this concept on LW?
To take a concrete example, the occasional attempts to delineate “real” computation as distinct from mere look-up tables seem to me rather confused and ultimately nonsensical. (Here, for example, is one such attempt, and I commented on another one here.) This strongly suggests deeper problems with the concept, or at least our present understanding of it.
Interestingly, I just searched for some old threads in which I commented on this issue, and I found this comment where you also note that presently we lack any real understanding of what constitutes an “algorithm.” If you’ve found some insight about this in the meantime, I’d be very interested to hear it.
I don’t see that the concept of a computation excludes a lookup table. A lookup table is simply one far end of a spectrum of possible ways to implement some map from inputs to outputs. And if I were writing a program that mapped inputs to outputs, implementing it as a lookup table is at least in principle always one of the options. Even a program that interacted constantly with the environment could be implemented as a lookup table, in principle. In practice, lookup tables can easily become unwieldy. Imagine a chess program implemented as a lookup table that maps each possible state of the board to a move. It would be staggeringly huge. But I don’t see why we wouldn’t consider it a computation.
One of your links concerns the idea that a lookup table couldn’t possibly be conscious. But the topic of consciousness is a kind of mind poison, because it is tied to strong, strong delusions which corrupt everything they touch. Thinking clearly about a topic once consciousness and the self have been attached to it is virtually impossible. For example, the topic of fission—of one thing splitting into two—is not a big deal as long as you’re talking about ordinary things like a fork in the road, or a social club splitting into two social clubs. But if we imagine you splitting into two people (via a Star Trek transporter accident or what have you), then all of a sudden it becomes very hard to think about clearly. A lot of philosophical energy has been sucked into wrapping our heads around the problem of personal identity.
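(A hedged illustration of the lookup-table point above, with my own toy function in place of the chess example: the same input-to-output map implemented twice, once by computing and once by exhaustive lookup, both of them computations in the ordinary sense.)

```python
# One map from inputs to outputs, two implementations.
def xor_computed(a: bool, b: bool) -> bool:
    # "Real" computation: derives the output from the inputs.
    return a != b

# Lookup-table implementation of the very same map: every case enumerated.
XOR_TABLE = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}

def xor_lookup(a: bool, b: bool) -> bool:
    return XOR_TABLE[(a, b)]

# The two implementations are extensionally identical.
assert all(xor_computed(a, b) == xor_lookup(a, b)
           for a in (False, True) for b in (False, True))
```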
A lookup table is simply one far end of a spectrum of possible ways to implement some map from inputs to outputs.
Yes. In my view, this continuity is best observed through graph-theoretic properties of various finite state machines that implement the same mapping of inputs to outputs (since every computation that occurs in reality must be in the form of a finite state machine). From this perspective, the lookup table is a very sparse graph with very many nodes, but there’s nothing special about it otherwise.
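(A minimal sketch of that graph-theoretic view, using parity as a stand-in mapping; the construction and the numbers are my own illustration, not the commenter’s.)

```python
# Two finite state machines computing the parity of a bit string.
from itertools import product

def parity_compact():
    # Two states suffice: track whether we've seen an even or odd number of 1s.
    delta = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    return {"even", "odd"}, delta

def parity_lookup(n):
    # "Lookup table" FSM for strings of length n: one state per prefix,
    # so the transition graph is a tree with 2**(n+1) - 1 nodes.
    states, delta = {""}, {}
    for length in range(n):
        for prefix in ("".join(bits) for bits in product("01", repeat=length)):
            for bit in "01":
                states.add(prefix + bit)
                delta[(prefix, bit)] = prefix + bit
    return states, delta

print(len(parity_compact()[0]), len(parity_lookup(8)[0]))  # 2 vs 511
```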
The reason people are concerned with the concept of consciousness is that they have terms in their utility functions for the welfare of conscious beings.
If you have some idea how to write out a reasonable utility function without invoking consciousness I’d love to hear it. (Adjust this challenge appropriately if your ethical theory isn’t consequentialist.)
I think it is largely because consciousness is so important to people that it is hard to think straight about it, and about anything tied to it. Similarly, the typical person loves Mom, and if you say bad things about Mom then they’ll have a hard time thinking straight, and so it will be hard for them to dispassionately evaluate statements about Mom. But what this means is that if someone wants to think straight about something, then it’s dangerous to tie it to Mom. Or to consciousness.
Nope, no new insights yet… I agree that this is a problem, or more likely some underlying confusion that we don’t know how to dissolve. It’s on my list of problems to think about, and I always post partial results to LW, so if something’s not on my list of submitted posts, that means I’ve made no progress. :-(
Granted, our concepts are often unclear. The Socratic dialogs demonstrate that, when pressed, we have trouble explaining our concepts. But that doesn’t mean that we don’t know what things are well enough to use the concepts. People managed to communicate and survive and thrive, probably often using some of the very concepts that Socrates was able to shatter with probing questions. For example, a child’s concepts of “up” and “down” unravel slightly when the child learns that the planet is a sphere, but that doesn’t mean that, for everyday use, the concepts aren’t just fine.
(I know the exchange isn’t primarily about Kant, but...)
Kant certainly isn’t a “distrusting logical reasoning” kind of guy. He takes for granted that “analytic” (i.e. purely deductive) reasoning is possible and truth-preserving. His mission is to explain (in light of Hume’s problem) how “synthetic a priori knowledge” is possible (with a secondary mission of exposing all previous work on metaphysics as nonsense). “Synthetic a priori knowledge” includes mathematics (which he doesn’t regard as just a variety or application of deductive logic), our knowledge of space and time, and Newtonian science.
His solution is essentially to argue that having our sensory presentations structured in space and time, and perceiving causal relations among them, is universally necessary in order for consciousness to exist at all. Since we are conscious, we can know a priori that the necessary conditions for consciousness obtain. [Disclaimer: This quick thumbnail sketch doesn’t pretend to be adequate. Neither am I convinced that the theory even makes sense.]
What Kant says we cannot know is how things (“really”) are, considered independently of the universal and necessary conditions for the possibility of experience. As far as I can tell, this boils down to “it’s not possible to know the answers to questions that transcend the limits of possible experience”. For instance, according to Kant we cannot know whether the universe is finite or infinite, whether it has a beginning in time, whether we have free will, or whether God exists.
It’s important to understand that Kant is an “empirical realist”, which means that the objects of experience—the coffee cups, rocks and stars around us—really do exist and we can acquire knowledge of them and their spatiotemporal and causal relations. However, if the universe could be considered ‘as it is in itself’—independently of our minds—those spatiotemporal and causal relations would disappear (rather like how co-ordinates disappear when you consider a sphere objectively).
(It’s similar to the dust theory.)
The nature of logical reasoning is actually a deep philosophical question...
You know what an algorithm is, but do you know if you are an algorithm? I am not sure I understand why you need algorithms at all. Maybe your point is: “If you are a human being X that receives an input Y, this allows you to know a nontrivial fact about reality (...)”. I tend to agree with that formulation, but again, this supposes some concepts that do not go without saying, and in particular, it supposes a realist approach. Idealist philosophers would disagree.
I can understand that your idea is to build models of reality, then use a Bayesian approach to validate them. There is a lot to say about this (more than I could say in a few lines). For example: are you able to gather all your “inputs”? What about the qualitative aspects: can you measure them? If not, how can you ever be sure that your model is complete? Are the ideas you have about the world part of your “inputs”? How do you disentangle them from what comes from outside? How do you disentangle your feelings, memory and actual inputs? Is there a direct correspondence between your inputs and scientific data, or do you have presuppositions on how to interpret the data? For example, don’t you need to have an idea of what space/time is in order to measure distances and durations? Where does this idea come from? Your brain? Reality? A bit of both? Don’t we interpret any scientific data in the light of the theory itself, and isn’t there a kind of circularity? etc.
in particular, it supposes a realist approach. Idealist philosophers would disagree.
This is why I talked about algorithms. When a human being says “I am a human being”, you may quibble about it being “observational” or “a priori” knowledge. But algorithms can actually have a priori knowledge coded in, including knowledge of their own source code. When such an algorithm receives inputs, it can make conclusions that don’t rely on “realist” or “idealist” philosophical assumptions in any way, only on coded a priori knowledge and the inputs received. And these conclusions would be correct more or less by definition, because they amount to “if reality contains an instance of algorithm X receiving input Y, then reality contains an instance of algorithm X receiving input Y”.
Your second paragraph seems to be unrelated to Kant. You just point out that our reasoning is messy and complex, so it’s hard to prove trustworthy from first principles. Well, we can still consider it “probably approximately correct” (to borrow a phrase from Leslie Valiant), as jimrandomh suggested. Or maybe skip the step-by-step justifications and directly check your conclusions against the real world, like evolution does. After all, you may not know everything about the internal workings of a car, but you can still drive one to the supermarket. I can relate to the idea that we’re still in the “stupid driver” phase, but this doesn’t imply the car itself is broken beyond repair.
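(A toy rendering of the earlier point about algorithms with a priori self-knowledge, using Python reflection as a cheap stand-in for a genuine quine; this is my illustration, not the commenter’s construction.)

```python
import inspect

def algorithm_x(observation):
    # "A priori knowledge" coded in: the algorithm's own source, obtained
    # here by reflection (a real quine would embed it as a literal).
    my_source = inspect.getsource(algorithm_x)
    # The conclusion uses only that self-knowledge plus the received input,
    # and is true more or less by definition:
    return ("reality contains an instance of this algorithm receiving "
            f"input {observation!r}:\n{my_source}")

print(algorithm_x("Y"))
```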
I don’t think relying on algorithms solves the issue, because you still need someone to implement and interpret the algorithm.
I agree with your second point: you can take a pragmatist approach. Actually, that’s a bit how science works. But still, you did not prove in any way that your model is a complete and definitive description of all there is, nor that it can be strictly identified with “reality”, and Kant’s argument remains valid. It would be more correct to say that a scientific model is a relational model (it describes the relations between things as they appear to observers, and their regularities).
I don’t think relying on algorithms solves the issue, because you still need someone to implement and interpret the algorithm.
You can be the algorithm. The software running in your brain might be “approximately correct by design”, a naturally arising approximation to the kind of algorithms I described in previous comments. I cannot examine its workings in detail, but sometimes it seems to obtain correct results and “move in harmony with Bayes” as Eliezer puts it, so it can’t be all wrong.
No, you cannot be an algorithm. An algorithm is a concept, it only exists inside our representations… You cannot be an object/a concept inside your own representation; that makes no sense…
The question whether algorithms “exist” is related to the larger question of whether mathematical concepts “exist”. (The former is a special case of the latter.) Many people on LW take seriously the “mathematical multiverse” ideas of Tegmark and others, which hypothesize that abstract mathematical concepts are actually all that exists. I’m not sure what to think about such ideas, but they’re not obviously wrong, because they’ve been subjected to very harsh criticism from many commenters here, yet they’re still standing. The closest I’ve come to a refutation is the pheasant argument (search for “pheasant” on this site), but it’s not as conclusive as I’d like.
I think it’s very encouraging that we’ve come to a concrete disagreement at last!
ETA: I didn’t downvote you, and don’t like the fact that you’re being downvoted. A concrete disagreement is better than confused rhetoric.
They may not be obviously wrong, but the important point is that it remains pure metaphysical speculation, that other metaphysical systems exist, and that some people even deny that any metaphysical system can ever be “true” (or real or whatever). The last point is fairly consensual among modern philosophers: it is commonly assumed that any attempt to build a definitive metaphysical system will necessarily be a failure (because there is no definitive ground on which any concept rests). As a consequence, we have to rely on pragmatism (as you did in a previous comment). But anyway, the important point is that different approaches exist, and none is a definitive answer.
No, an algorithm can exist inside another algorithm as a regularity, and evidence suggests that the universe itself is an algorithm.
No, evidence does not suggest that the universe is an algorithm. This is perfectly meaningless.
You need to actually explain your point and not just keep repeating it.
Evidence suggests that the universe is composed of qualia. The ability to build a mathematical model that fits our scientific measurements (= a probabilistic description of the correlations between qualia) does not remotely suggest that the universe is an algorithm.
It may not suggest this to your satisfaction but it certainly suggests it remotely (and the mathematical model involves counterfactual dependencies of qualia, not just correlations). What does it mean to say that the universe is composed of qualia? That sounds like an obvious confusion between representation and reality.
Well my opinion is that the confusion between representation and reality is on your side.
Indeed, a scientific model is a representation of reality—not reality. It can be found inside books or learned at school, it is interpreted. On the contrary, qualia are not represented but directly experienced. They are real.
That sounds obvious. No?
Not at all. What you call “qualia” could be the combination of a mental symbol, the connections and associations this symbol has, and various abstract entities. When you experience experiencing such a “quale”, the actual symbol might or might not be replaced with a symbol for the symbol, possibly using a set of neural machinery overlapping with the set for the actual symbol (so you can remember or imagine things without causing all of the involuntary reactions the actual experience causes).
I define qualia as the elements of my subjective experience.
“That sounds obvious” was a euphemism. It’s more than obvious that qualia are real: it’s given, it is the only truth that does not need to be proven.
Do you have some links to this evidence, or studies that come to this conclusion?
Unless you’re making a use-mention distinction (and why would you?), I don’t see your point. An algorithm can be realized in a mechanism. Are you saying that he should say “you can be an implementation of an algorithm” instead?
What I mean is that the notion of an algorithm is always relative to an observer. Something is an algorithm because someone decides to view it as an algorithm. She/he decides what its inputs are and what its outputs are. She/he decides what the relevant scale is for defining what a signal is. All these decisions are arbitrary (say I decide that the text-processing algorithm that runs on my computer extends to my typing fingers and the “calculation” performed by their molecules—why not? My hand is part of my computer. Does my computer “feel it”? Only because I decided to view things like that?). Being, on the contrary, is independent of any observer and is not arbitrary. Therefore being an algorithm is meaningless.
“Algorithm” is a type; things can be algorithms in the same sense that 5 is an integer and {"hello", "world"} is a list. This does not depend on the observer, or even the existence of an observer.
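(Reading that claim charitably in programming terms; the snippet is my gloss, not the commenter’s.)

```python
from typing import Callable

x: int = 5                                    # 5 inhabits the type "integer"
words: list[str] = ["hello", "world"]         # this value inhabits "list of strings"
succ: Callable[[int], int] = lambda n: n + 1  # a function inhabits a function type

# Whether a value inhabits a type is a fact about the value, checkable
# mechanically, with no observer in the loop.
print(isinstance(x, int), isinstance(words, list), callable(succ))
```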
I’m not sure you understand where quen tin is coming from. He would regard integers, lists and “algorithms” in your sense as abstract entities, and maintain (as a point so fundamental that it’s never spelled out) that abstract entities are not physically real. At most they provide patterns that we can usefully superimpose on various ‘systems’ in the world.
The point isn’t whether or not abstract entities are observer-dependent, the point is that the business of superimposing abstract entities on real things is observer-dependent (on quen tin’s view). And observers themselves are “real things” not abstracta.
(Not that I agree with this personally, but it’s important to at least understand how others view things.)
There is a sense in which the view of the universe that just consists of me (an algorithm) receiving input from the universe (another algorithm) feels like it’s missing something; it’s the intuition the Chinese room argument pumps. I’ve never really found a good way to unpump it. But attempts to articulate that other component keep falling apart so...
I think it does.
{"hello", "world"} is a set of lighted pixels on my screen, or a list of characters in a text file containing source code, or a list of bytes in my computer’s memory, but in any case, there must be an observer for them to be interpreted as a list of strings. The real list of strings only exists inside my representation.
Pretty sure I can write code that makes these same interpretations.
Your code is a list of characters in a text file, or a list of bytes in your computer’s memory. Only you interpret it as code that interprets something.
What does it mean to ‘interpret’ something?
Edit: or rather, what does it mean for me to interpret something, ’cause I know exactly what it means for code to do it.
I will reply several messages at once.
Interpreting is giving a meaning to something. Stating that the “code interprets something” is a misuse of language for saying that the code “processes something”. You don’t know if the code gives meaning to anything since you are not the code; only you give the meaning. “Interpretation” is a first-person concept.
“mathematical model involves counterfactual dependencies of qualia” → I suggest you read David Mermin’s “What is quantum mechanics trying to tell us?”. It can be found on arXiv. Quantum physics is only about correlations between measurements—or at least it can be successfully interpreted that way, and that resolves nearly every “paradox” of it...
“if you dispute this metaphysics you need to explain what the disadvantage” → It would require more than a few comments. I just found your self-confidence a bit arrogant, since scientific realism is far from being a consensus among philosophers and has many flaws. Personally, the main disadvantage I see is that it is an “objectual” conception, a conception of things as objects, which does not account for any subject, and does not acknowledge that objects merely exist as representations for subjects. It does not address first-person phenomenology (time, …). It does not seem to take our cognitive situation seriously, uncritically claiming that our representation is reality, that’s all, which I find a bit naive.
Interpreting is giving a meaning to something. Stating that the “code interprets something” is a misuse of language for saying that the code “processes something”. You don’t know if the code gives meaning to anything since you are not the code; only you give the meaning. “Interpretation” is a first-person concept.
Okay… well what does it mean to give meaning to something? My claim is that I am a (really complex) code of sorts and that I interpret things in basically the same way code does. Now it often feels like this description is missing something and that’s the problem of consciousness/qualia for which I, like everyone else, have no solution. But “interpretation is a first-person concept” doesn’t let us represent humans.
“if you dispute this metaphysics you need to explain what the disadvantage” → It would require more than a few comments.
You were disputing someone’s claim that ‘the universe is an algorithm’… why isn’t that reason enough to identify one possible disadvantage? Otherwise you’re just saying “Na-ahhhh!”
I just found your self-confidence a bit arrogant, since scientific realism is far from being a consensus among philosophers and has many flaws. Personally, the main disadvantage I see is that it is an “objectual” conception, a conception of things as objects, which does not account for any subject, and does not acknowledge that objects merely exist as representations for subjects. It does not address first-person phenomenology (time, …). It does not seem to take our cognitive situation seriously, uncritically claiming that our representation is reality, that’s all, which I find a bit naive.
I’m really bewildered by this and imagine you must have read someone else and took their position to be mine. I’m a straightforward Quinean ontological relativist, which is why I paraphrased the original claim in terms of ideal representation and dropped the ‘is’. I was just trying to explain the claim since it didn’t seem like you were understanding it; I didn’t even make the statement in question (though I do happen to think the algorithm approach is the best thing going, I’m not confident that that’s the end of the story).
But I think we’re bumping up against competing conceptions of what philosophy should be. I think philosophy is a kind of meta-science which expands and clarifies the job of understanding the world. As such, it needs to find a way of describing the subject in the language of scientific representation. This is what the cognitive science end of philosophy is all about. But you want to insist on the subject as fundamental; as far as I’m concerned that’s just refusing to let philosophy/science do its thing.
I also view philosophy as a meta-science. I think language is relational by nature (e.g. “red” refers to the strong correlation between our respective experiences of red) and is blind to singularity (I cannot explain by means of language what it is like for me to see red, I can only give it a name, which you can understand only if my red is correlated to yours—my singular red cannot be expressed).
Since science is a product of language, its horizon is describing the relational framework of existing things, which are themselves unspeakable. That’s exactly what science converges toward (quantum physics is a relational description of measurables—with special relativity, space/time reference frames are relative to an observer, etc.). Being a subject is unspeakable (my experience of existing is a succession of singularities) and is beyond the horizon of science; science can only define its contour—the relational framework.
I don’t think that we can describe the subject in the language of scientific representation, because I think that the scientific representation is always relative to a subject (therefore the subject is already in the representation, in a sense...). That is why I always insist on the subject. Not that I refuse to let philosophy do its thing; I just want to clarify what its thing exactly is, so that we are not deluded by a mythical scientific description of everything that would be totally independent of our existence (which would make us an epiphenomenon).
1: Yes—we assume that words mean the same thing to others when we use them, and it’s actually quite tricky to know when you’ve succeeded in communicating meaning.
2: “with special relativity, space/time reference frames are relative to an observer, etc.”—this is rather sad and makes me think you’re trolling. What does this have to do with language? Nothing.
3: Your belief that we can’t describe things in certain ways has you preaching, instead of trying to discover what your interlocutor actually means. “which would make of us an epiphenomenon”—so what? It sounds like you’re prepared to derail any conversation by insisting everyone remind themselves that these are PEOPLE saying and thinking these things. Or maybe, more reasonably, you think that everyone ought to have a position about why they aren’t constantly saying “I think …”, and you’ll only derail when they refuse to admit that they’re making an aesthetic choice.
I only insist that people do not conflate representation and reality. To me, stating that an object is is already a fallacy (though I accept this as a convenient way of speaking). An object appears or is conceived, but we do not know what is, and we should not talk about what we do not know. To me, uncritically assuming that there exists an objective world and trying to figure out what it is is already a fallacy. Why do I think that? Because I think there is no absolute, only relations.
So I agree that whether or not an observer views something as an algorithm is, in fact, contingent. But the claim is that people and the universe are in fact algorithms. To put it in pragmatic language: representing the universe as an algorithm and its components as subroutines is a useful and clarifying way of conceptualizing that universe relative to competing views, and has no countervailing disadvantages relative to other ways of conceptualizing the universe.
I prefer this formulation, because you emphasize the representational aspect. Now, a representation (a conceptualization) requires someone who conceptualizes/represents things. I think that this “useful and clarifying way” just forgets that a representation is always relative to a subject. The last part of the sentence only expresses your proud ignorance (sorry)...
The last part of the sentence only expresses your proud ignorance (sorry)...
What proud ignorance? I haven’t proudly asserted anything (I’m not among your downvoters). My point is, if you dispute this metaphysics you need to explain what the disadvantages of it are and you haven’t done that which is what is frustrating people.
I am not saying that it is meant in this way, but the following could be construed as a proud assertion:
is a useful and clarifying way of conceptualizing that universe relative to competing views and has no countervailing disadvantages relative to other ways of conceptualizing the universe.
I agree that representing the universe as an algorithm is a useful view. I am not sure what you mean by “its components as subroutines”, though. What are the components of the universe?
I thought you were only talking about representing the universe as algorithms, which seems like a good idea. You could also claim that “the universe is an algorithm”, but I find that statement to be too vague, what does ‘is’ mean in this sentence?
The components are you, me, the galaxy, socks, etc.
A subroutine in a program is a distinct part that can be executed repeatedly. Are you saying that the universe has a distinct part dedicated to dealing with socks? To me that sounds like the universe would somehow have to know what is and what is not a sock. (sorry for anthropomorphising the universe there.)
It is mainly the word “subroutine” that I have a problem with, not the universe-as-an-algorithm idea per se.
I thought you were only talking about representing the universe as algorithms, which seems like a good idea. You could also claim that “the universe is an algorithm”, but I find that statement to be too vague, what does ‘is’ mean in this sentence?
Quinean ontological pragmatism just paraphrases existential claims as “x figures in our best explanation of the universe”. So ‘is’ in the sentence “the universe is an algorithm” means roughly the same thing as ‘are’ in the sentence “there are atoms in the universe”.
Are you saying that the universe has a distinct part dedicated to dealing with socks? To me that sounds like the universe would somehow have to know what is and what is not a sock. (sorry for anthropomorphising the universe there.) It is mainly the word “subroutine” that I have a problem with, not the universe-as-an-algorithm idea per se.
I see what you’re saying and on reflection it might be a dangerously misleading thing to say. The best candidate algorithm would not have such subroutines; however, more complex but functionally identical algorithms would.
The downvote corporatist system of this site is extremely annoying. I am proposing a valid and relevant argument. I expect counter-arguments from people who disagree, not downvotes. Why not keep downvotes for irrelevant or unargued comments?
Your above comment could be phrased better (it makes a valid point in a way that can be easily misinterpreted as proposing some mushy-headed subjective relativism), but I agree that people downvoting it are very likely overconfident in their own understanding of the problem.
My impression is that the concept of “algorithm” (and “computation” etc.) is dangerously close to being a semantic stop sign on LW. It is definitely often used to underscore a bottom line without concern for its present problematic status.
The guideline is to upvote things you want to see more of, and downvote things you want to see less of. That leaves room for interpretation about where the two quality thresholds should be, but in practice they’re both pretty high and I think that’s a good thing. There are a lot of things that could be wrong with a comment besides being irrelevant or not being argued. In this case, I think the problem is arguing one side of a confusing question rather than trying to clarify or dissolve it.
Votes are not always for good reasons, whatever the guidelines. Getting good behavior out of people works best if people are accountable for what they do, and tends to fail when they are not. People who comment are accountable in at least two ways that people who vote are not:
1) They have to explain themselves. That, after all, is what a comment is.
2) They have to identify themselves. You can’t comment without an account.
Voters have to do neither. Now, even though commenters are doubly accountable, I think most will agree that a certain nonzero proportion of the comments are not very good. Take away accountability, and we should expect the proportion of the bad to increase.
All of those questions have known answers, but you have to take them on one at a time. Most of them go away when you switch from discrete (boolean) reasoning to continuous (probabilistic) reasoning.
Each of those questions has several known and unknown answers...
Moreover, the same questions apply to your preconception of continuity and probability. How could you know it applies to your inputs? For example: saying “I feel 53% happy” does not make sense, unless you think happiness has a definite meaning and is reducible to something measurable. Both are questionable. Does any concept have a definite meaning? Maybe happiness has a “probabilistic” meaning? But what does it rest upon? How do you know that all your input is reducible to measurable constituents, and how could you prove that?
My question is: what does “happiness” rest upon? A probability of what? You need to have an a priori model of what happiness is in order to measure it (that is, a theory of mind), which you have not. Verifying your model depends on your model...
You argued that “I believe P with probability 0.53” might be as meaningless as “I am 53% happy”. It is a valid response to say, “Setting happiness aside, there actually is a rigorous foundation for quantifying belief—namely, Cox’s theorem.”
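(For readers who want the reference spelled out: a compressed, informal statement of Cox’s result; the notation is mine, not the thread’s.)

```latex
% Informal sketch of Cox's theorem (notation mine).
% Let $b(x \mid d)$ be a real-valued plausibility of proposition $x$ given $d$,
% assumed to (i) agree with ordinary logic in the certain/impossible limits,
% (ii) determine $b(\neg x \mid d)$ as a fixed function of $b(x \mid d)$, and
% (iii) determine $b(x \wedge y \mid d)$ as a fixed function of
% $b(x \mid d)$ and $b(y \mid x, d)$.
% Then some monotone rescaling $p$ of $b$ obeys the probability axioms:
\[
  p(\neg x \mid d) = 1 - p(x \mid d), \qquad
  p(x \wedge y \mid d) = p(x \mid d)\, p(y \mid x, d).
\]
```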
The problem here is that “I believe P” supposes a representation / a model of P. There must be a pre-existing model prior to using Cox’s theorem on something. My question is semantic: what does this model rest on? The probabilities you will get will depend on the model you adopt, and I am pretty sure that there is no definitive model/conception of anything (see the problem of translation analysed by Quine, for example).
They are very similar. Kant does not claim that we have no information about reality, and the linked article does not only say that we are sometimes wrong with our intuition...
This statement for example is very “Kantian” : Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.
Kant says that we can know about the representations that appear in the manifold of appearances provided to us by our senses. But, in his view, we can know nothing, zip, zilch, nada, about whatever it is that stands behind those sensory representations.
In a sense, Kant takes the map/territory distinction to an extreme. For Kant, the territory is so distinct from the map that we know nothing about the territory at all. All of our knowledge is only about the map.
That is also what the linked article seems to entail. The statement I quoted, as I understand it, says that every information we have about reality is the result of “some cognitive algorithm” (=the representations that appears (...) provided by our senses)
The map is certainly a kind of information about the territory (though we cannot know it with certainty). Strictly speaking, Kant does not say we have no information about reality, he says we cannot know if we have or not.
I don’t think that Kant makes the distinction between “knowing” and “having information about” that you and I would make. If he doesn’t outright deny that we have any information about the world beyond our senses, he certainly comes awfully close.
On A380, Kant writes,
And, on A703/B731, he writes,
(Emphasis added. These are from the Guyer–Wood translation.)
Does anyone smell irony in this whole discussion? Considering the OP specifically derided the whole “discussion of old, dead guys” thing?
Ah, I wish this wasn’t a three year old post. I have no idea how this site works yet, so who knows whose attention I’ll attract by doing this?
At least the person whose comment you’re replying to sees your reply, so you weren’t speaking entirely into the void :).
Ok, it depends what you mean by “information about”. My understanding is that we have no information on the nature of reality, which does not mean that we have no information from reality.
I agree that we get information from reality. And I think that we agree that our confidence that we get information from reality is far less murky than our concept of “the nature of reality”.
Kant, being a product of his times, doesn’t seem to think this way, though. Maybe, if you explained the modern information-theoretic notion of “information” to Kant, he would agree that we get information about external reality in that sense. But I don’t know. It’s hard to imagine what a thinker like Kant would do in an entirely different intellectual environment from the one in which he produced his work. I’m inclined to think that, for Kant, the noumena are something to which it is not even possible to apply the concept of “having information about”.
Suggestion: knowledge of what a thing is in itself , is like information that is not coded in any particular scheme.
I suppose it’s a virtue of that interpretation that ‘information that cannot be coded in any particular scheme’ is a conceptual impossibility (assuming that’s what you meant).
Yes. You can make such an interpretation of the ding-an-such.
For my money, that lessens its impact.
If you are a cognitive algorithm X that receives input Y, this allows you to “know” a nontrivial fact about “reality” (whatever it is): namely, that it contains an instance of algorithm X that receives input Y. The same extends to probabilistic knowledge: if in one “possible reality” most instances of your algorithm receive input Y and in another “possible reality” most of them receive input Z, then upon seeing Y you come to believe that the former “possible reality” is more likely than the latter. This is a straightforward application of LW-style thinking, but it didn’t occur to Kant as far as I know.
If I am a cognitive algorithm X that reveives input Y, I don’t necessarily know what an algorithm is, what an input is, and so on. One could argue that all I know is ‘Y’. I don’t necessarily have any idea of what a “possible reality” is. I might not have a concept of “possibility” nor of “reality”.
Your way of thinking presupposes many metaphysical concepts that have been questioned by philosophers, including Kant. I am not saying that this line of reasoning is invalid (I suspect it is a realist approach, which is a fair option). My personal feeling is that Kant is upstream of that line of reasoning.
But I do know what an algorithm is. Can someone be so Kantian as to distrust even self-contained logical reasoning, not just sensations? In that case how did they come to be a Kantian?
Do you? I find the unexamined use of this particular concept possibly the most problematic component of what you call “LW-style thinking.” (Another term that commonly raises my red flags here is “pattern.”)
What do you find dubious about the use of this concept on LW?
To take a concrete example, the occasional attempts to delineate “real” computation as distinct from mere look-up tables seem to me rather confused and ultimately nonsensical. (Here, for example, is one such attempt, and I commented on another one here.) This strongly suggests deeper problems with the concept, or at least our present understanding of it.
Interestingly, I just searched for some old threads in which I commented on this issue, and I found this comment where you also note that presently we lack any real understanding of what constitutes an “algorithm.” If you’ve found some insight about this in the meantime, I’d be very interested to hear it.
I don’t see that the concept of a computation excludes a lookup table. A lookup table is simply one far end of a spectrum of possible ways to implement some map from inputs to outputs. And if I were writing a program that mapped inputs to outputs, implementing it as a lookup table is at least in principle always one of the options. Even a program that interacted constantly with the environment could be implemented as a lookup table, in principle. In practice, lookup tables can easily become unwieldy. Imagine a chess program implemented as a lookup table that maps each possible state of the board to a move. It would be staggeringly huge. But I don’t see why we wouldn’t consider it a computation.
One of your links concerns the idea that a lookup table couldn’t possibly be conscious. But the topic of consciousness is a kind of mind poison, because it is tied to strong, strong delusions which corrupt everything they touch. Thinking clearly about a topic once consciousness and the self have been attached to it virtually impossible. For example, the topic of fission—of one thing splitting into two—is not a big deal as long as you’re talking about ordinary things like a fork in the road, or a social club splitting into two social clubs. But if we imagine you splitting into two people (via a Star Trek transporter accident or what have you), then all of sudden it becomes very hard to think about clearly. A lot of philosophical energy has been sucked into wrapping our heads around the problem of personal identity.
Yes. In my view, this continuity is best observed through graph-theoretic properties of various finite state machines that implement the same mapping of inputs to outputs (since every computation that occurs in reality must be in the form of a finite state machine). From this perspective, the lookup table is a very sparse graph with very many nodes, but there’s nothing special about it otherwise.
The reason people are concerned with the concept of consciousness, is that they have terms in their utility functions for the welfare of conscious beings.
If you have some idea how to write out a reasonable utility function without invoking consciousness I’d love to hear it. (Adjust this challenge appropriately if your ethical theory isn’t consequentialist.)
I think it is largely because consciousness is so important to people that it is hard to think straight about it, and about anything tied to it. Similarly, the typical person loves Mom, and if you say bad things about Mom then they’ll have a hard time thinking straight, and so it will be hard for them to dispassionately evaluate statements about Mom. But what this means is that if someone wants to think straight about something, then it’s dangerous to tie it to Mom. Or to consciousness.
Nope, no new insights yet… I agree that this is a problem, or more likely some underlying confusion that we don’t know how to dissolve. It’s on my list of problems to think about, and I always post partial results to LW, so if something’s not on my list of submitted posts, that means I’ve made no progress. :-(
Granted, our concepts are often unclear.The Socratic dialogs demonstrate that, when pressed, we have trouble explaining our concepts. But that doesn’t mean that we don’t know what things are well enough to use the concepts. People managed to communicate and survive and thrive, probably often using some of the very concepts that Socrates was able to shatter with probing questions. For example, a child’s concepts of “up” and “down” unravel slightly when the child learns that the planet is a sphere, but that doesn’t mean that, for everyday use, the concepts aren’t just fine.
(I know the exchange isn’t primarily about Kant, but...)
Kant certainly isn’t a “distrusting logical reasoning” kind of guy. He takes for granted that “analytic” (i.e. purely deductive) reasoning is possible and truth-preserving. His mission is to explain (in light of Hume’s problem) how “synthetic a priori knowledge” is possible (with a secondary mission of exposing all previous work on metaphysics as nonsense). “Synthetic a priori knowledge” includes mathematics (which he doesn’t regard as just a variety or application of deductive logic), our knowledge of space and time, and Newtonian science.
His solution is essentially to argue that having our sensory presentations structured in space and time, and perceiving causal relations among them, is universally necessary in order for consciousness to exist at all. Since we are conscious, we can know a priori that the necessary conditions for consciousness obtain. [Disclaimer: This quick thumbnail sketch doesn’t pretend to be adequate. Neither am I convinced that the theory even makes sense.]
What Kant says we cannot know is how things (“really”) are, considered independently of the universal and necessary conditions for the possibility of experience. As far as I can tell, this boils down to “it’s not possible to know the answers to questions that transcend the limits of possible experience”. For instance, according to Kant we cannot know whether the universe is finite or infinite, whether it has a beginning in time, whether we have free will, or whether God exists.
It’s important to understand that Kant is an “empirical realist”, which means that the objects of experience—the coffee cups, rocks and stars around us—really do exist and we can acquire knowledge of them and their spatiotemporal and causal relations. However, if the universe could be considered ‘as it is in itself’ - independently of our minds—those spatiotemporal and causal relations would disappear (rather like how co-ordinates disappear when you consider a sphere objectively).
(It’s similar to the dust theory.)
The nature of logical reasoning is actually a deep philosophical question...
You know what an algorithm is, but do you know if you are an algorithm? I am not sure to understand why you need algorithm at all. Maybe your point is “If you are a human being X that receive an input Y, this allows you to know a nontrivial fact about reality (...)”. I tend to agree with that formulation, but again, this supposes some concepts that do not go without saying, and in particular, it supposes a realist approach. Idealist philosophers would disagree.
I can understand that your idea is to build models of reality, then use a Bayesian approach to validate them. There is a lot to say about this (more than I could say in a few lines). For example : are you able to gather all your “inputs”? What about the qualitative aspects: can you measure them? If not, how can you ever be sure that your model is complete? Are the ideas you have about the world part of your “inputs’? How do you disentangle them from what comes from outside, how do you disentangle your feelings, memory and actual inputs? Is there a direct correspondance between your inputs and scientific data, or do you have presupositions on how to interpret the data? For example, don’t you need to have an idea of what space/time is in order to measure distances and durations? Where does this idea comes from? Your brain? Reality? A bit of both? Don’t we interpret any scientific data at the light of the theory itself, and isn’t there a kind of circularity? etc.
This is why I talked about algorithms. When a human being says “I am a human being”, you may quibble about it being “observational” or “apriori” knowledge. But algorithms can actually have apriori knowledge coded in, including knowledge of their own source code. When such an algorithm receives inputs, it can make conclusions that don’t rely on “realist” or “idealist” philosophical assumptions in any way, only on coded apriori knowledge and the inputs received. And these conclusions would be correct more or less by definition, because they amount to “if reality contains an instance of algorithm X receiving input Y, then reality contains an instance of algorithm X receiving input Y”.
Your second paragraph seems to be unrelated to Kant. You just point out that our reasoning is messy and complex, so it’s hard to prove trustworthy from first principles. Well, we can still consider it “probably approximately correct” (to borrow a phrase from Leslie Valiant), as jimrandomh suggested. Or maybe skip the step-by-step justifications and directly check your conclusions against the real world, like evolution does. After all, you may not know everything about the internal workings of a car, but you can still drive one to the supermarket. I can relate to the idea that we’re still in the “stupid driver” phase, but this doesn’t imply the car itself is broken beyond repair.
I don’t think relying on algorithm solves the issue, because you still need someone to implement and interpret the algorithm.
I agree with your second point: you can take a pragmatist approach. Actually, that’s a bit how science work. But still you did not prove in anyway that your model is a complete and definitive description of all there is nor that it can be strictly identifiable with “reality”, and Kant’s argument remains valid. It would be more correct to say that a scientific model is a relational model (it describes the relations between things as they appear to observers and their regularities).
You can be the algorithm. The software running in your brain might be “approximately correct by design”, a naturally arising approximation to the kind of algorithms I described in previous comments. I cannot examine its workings in detail, but sometimes it seems to obtain correct results and “move in harmony with Bayes” as Eliezer puts it, so it can’t be all wrong.
No you cannot be an algorithm. An algorithm is a concept, it only exists inside our representations… You cannot be an object/a concept inside your own representation, that makes no sense…
The question whether algorithms “exist” is related to the larger question of whether mathematical concepts “exist”. (The former is a special case of the latter.) Many people on LW take seriously the “mathematical multiverse” ideas of Tegmark and others, which hypothesize that abstract mathematical concepts are actually all that exists. I’m not sure what to think about such ideas, but they’re not obviously wrong, because they’ve been subjected to very harsh criticism from many commenters here, yet they’re still standing. The closest I’ve come to a refutation is the pheasant argument (search for “pheasant” on this site), but it’s not as conclusive as I’d like.
I think it’s very encouraging that we’ve come to a concrete disagreement at last!
ETA: I didn’t downvote you, and don’t like the fact that you’re being downvoted. A concrete disagreement is better than confused rhetoric.
They may not be obviously wrong, but the important point is that it remains a pure metaphysical speculation and that other metaphysical systems exist, and other people even deny that any metaphysical system can ever be “true” (or real or whatever). The last point is rather consensual among modern philosophers: it is commonly assumed that any attempt to build a definitive metaphysical system will necessarily be a failure (because there is no definitive ground on which any concept rests). As a consequence, we have to rely on pragmatism (as you did in a previous comment). But anyway, the important point is that different approaches exist, and none is a definitive answer.
No, an algorithm can exist inside another algorithm as a regularity, and evidence suggests that the universe itself is an algorithm.
No, evidence does no suggest that the universe is an algorithm. This is perfectly meaningless.
You need to actually explain your point and not just keep repeating it.
Evidence suggests that the universe is composed of qualia. The ability to build a mathematical model that fits our scientific measurements (= a probabilistic description of the correlations between qualia) does not remotely suggest that the universe is an algorithm.
It may not suggest this to your satisfaction but it certainly suggests it remotely (and the mathematical model involves counterfactual dependencies of qualia, not just correlations). What does it mean to say that the universe is composed of qualia? That sounds like an obvious confusion between representation and reality.
Well my opinion is that the confusion between representation and reality is on your side.
Indeed, a scientific model is a representation of reality—not reality. It can be found inside books or learned at school, it is interpreted. On the contrary, qualia are not represented but directly experienced. They are real.
That sounds obvious. No?
Not at all. What you call “qualia” could be the combination of a mental symbol, the connections and associations this symbol has and various abstract entities. When you experience experiencing such a “quale” the actual symbol might or might not be replaced with a symbol for the symbol, possibly using a set of neural machinery overlapping with the set for the actual symbol (so you can remember or imagine things without causing all of the involuntary reactions the actual experience causes)
I define qualia as the elements of my subjective experience. “That sounds obvious” was an euphemism. It’s more than obvious that qualia are real, it’s given, it is the only truth that does not need to be proven.
Do you have some links to this evidence, or studies that come to this conclusion?
Unless you’re making a use-mention distinction (and why would you?), I don’t see your point. An algorithm can be realized in a mechanism. Are you saying that he should say “you can be an implementation of an algorithm” instead?
What I mean is that the notion of algorithm is always relative to an observer. Something is an algorithm because someone decides to view it as an algorithm. She/He decides what its inputs are and what its outputs. She/He decides what is the relevant scale for defining what a signal is. All these decisions are arbitrary (say I decide that the text-processing algorithm that runs on my computer extends to my typing fingers and the “calculation” performed by the molecules of them—why not? My hand is part of my computer. Does my computer “feel it”? Only because I decided to view things like that?). Being, on the contrary, is independent on any observer and is not arbitrary. Therefore being an algorithm is meaningless.
“Algorithm” is a type; things can be algorithms in the same sense that 5 is an integer and {”hello”,”world”} is a list. This does not depend on the observer, or even the existence of an observer.
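A minimal sketch of the type claim in Python (illustrative only, not from the original thread; the names are made up):

```python
# Illustrative only: type membership is a formal relation, checked
# without reference to any observer's interpretation.
from collections.abc import Callable

assert isinstance(5, int)                    # 5 is an integer
assert isinstance(["hello", "world"], list)  # a list of strings

# On this view an algorithm is likewise a value of a type:
double = lambda n: 2 * n
assert isinstance(double, Callable)          # a callable, i.e. an algorithm
assert double(21) == 42
```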
I’m not sure you understand where quen tin is coming from. He would regard integers, lists, and “algorithms” in your sense as abstract entities, and maintain (as a point so fundamental that it’s never spelled out) that abstract entities are not physically real. At most they provide patterns that we can usefully superimpose on various ‘systems’ in the world.
The point isn’t whether or not abstract entities are observer-dependent, the point is that the business of superimposing abstract entities on real things is observer-dependent (on quen tin’s view). And observers themselves are “real things” not abstracta.
(Not that I agree with this personally, but it’s important to at least understand how others view things.)
There is a sense in which the view of the universe that just consists of me (an algorithm) receiving input from the universe (another algorithm) feels like it’s missing something; it’s the intuition the Chinese room argument pumps. I’ve never really found a good way to unpump it. But attempts to articulate that other component keep falling apart, so...
I think it does.
{”hello”, “world”} is a set of lighted pixels on my screen, or a list of characters in a text file containing source code, or a list of bytes in my computer’s memory, but in any case, there must be an observer for them to be interpreted as a list of strings. The real list of strings only exists inside my representation.
Pretty sure I can write code that makes these same interpretations.
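For instance, a minimal sketch (my own illustration, not the commenter’s actual code) of a program that takes raw bytes and treats them as a list of strings:

```python
# Take raw bytes in memory and "interpret" them, layer by layer,
# as a list of strings.
import ast

raw = b'["hello", "world"]'        # bytes in memory
text = raw.decode("utf-8")         # ...read as characters
strings = ast.literal_eval(text)   # ...parsed as a Python list literal

assert strings == ["hello", "world"]
assert all(isinstance(s, str) for s in strings)
```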
Your code is a list of characters in a text file, or a list of bytes in your computer’s memory. Only you interpret it as code that interprets something.
What does it mean to ‘interpret’ something?
Edit: or rather, what does it mean for me to interpret something, ’cause I know exactly what it means for code to do it.
I will reply several messages at once.
Interpreting is giving a meaning to something. Stating that “the code interprets something” is a misuse of language for saying that the code “processes something”. You don’t know whether the code gives meaning to anything, since you are not the code; only you give the meaning. “Interpretation” is a first-person concept.
“mathematical model involves counterfactual dependencies of qualia” → I suggest you read David Mermin’s “What is quantum mechanics trying to tell us?”. It can be found on arXiv. Quantum physics is only about correlations between measurements—or at least it can be successfully interpreted that way, and that resolves nearly every “paradox” of it...
“if you dispute this metaphysics you need to explain what the disadvantage” → It would require more than a few comments. I just found your self-confidence a bit arrogant, given that scientific realism is far from being a consensus among philosophers and has many flaws. Personally, the main disadvantage I see is that it is an “objectual” conception, a conception of things as objects, which does not account for any subject, and does not acknowledge that an object merely exists as a representation for subjects. It does not address first-person phenomenology (time, …). It does not seem to take our cognitive situation seriously, uncritically claiming that our representation is reality, that’s all, which I find a bit naive.
(EDIT—formatting)
Okay… well, what does it mean to give meaning to something? My claim is that I am a (really complex) code of sorts and that I interpret things in basically the same way code does. Now, it often feels like this description is missing something, and that’s the problem of consciousness/qualia, for which I, like everyone else, have no solution. But “interpretation is a first-person concept” doesn’t let us represent humans.
You were disputing someone’s claim that ‘the universe is an algorithm’… why isn’t that reason enough to identify one possible disadvantage? Otherwise you’re just saying “Nuh-uh!”
I’m really bewildered by this and imagine you must have read someone else and taken their position to be mine. I’m a straightforward Quinean ontological relativist, which is why I paraphrased the original claim in terms of ideal representation and dropped the ‘is’. I was just trying to explain the claim, since it didn’t seem like you were understanding it; I didn’t even make the statement in question (though I do happen to think the algorithm approach is the best thing going, I’m not confident that that’s the end of the story).
But I think we’re bumping up against competing conceptions of what philosophy should be. I think philosophy is a kind of meta-science which expands and clarifies the job of understanding the world. As such, it needs to find a way of describing the subject in the language of scientific representation. This is what the cognitive-science end of philosophy is all about. But you want to insist on the subject as fundamental; as far as I’m concerned, that’s just refusing to let philosophy/science do its thing.
I also view philosophy as a meta-science. I think language is relational by nature (e.g., “red” refers to the strong correlation between our respective experiences of red) and is blind to singularity (I cannot explain by means of language what it is like for me to see red; I can only give it a name, which you can understand only if my red is correlated with yours—my singular red cannot be expressed).
Since science is a product of language, its horizon is describing the relational framework of existing things, which are themselves unspeakable. That is exactly what science converges toward (quantum physics is a relational description of measurables; with special relativity, space/time reference frames are relative to an observer; etc.). Being a subject is unspeakable (my experience of existing is a succession of singularities) and is beyond the horizon of science; science can only define its contour—the relational framework.
I don’t think that we can describe the subject in the language of scientific representation, because I think that the scientific representation is always relative to a subject (therefore the subject is already in the representation, in a sense...). That is why I always insist on the subject. Not that I refuse to let philosophy do its thing; I just want to clarify what its thing exactly is, so that we are not deluded by a mythical scientific description of everything that would be totally independent of our existence (which would make of us an epiphenomenon).
I hope this clarifies my position.
To your 3 paragraphs:
1: Yes—we assume that words mean the same thing to others when we use them, and it’s actually quite tricky to know when you’ve succeeded in communicating meaning.
2: “with special relativity, space/time reference frames are relative to an observer, etc.”—this is rather sad and makes me think you’re trolling. What does this have to do with language? Nothing.
3: Your belief that we can’t describe things in certain ways has you preaching, instead of trying to discover what your interlocutor actually means. “which would make of us an epiphenomenon”—so what? It sounds like you’re prepared to derail any conversation by insisting everyone remind themselves that these are PEOPLE saying and thinking these things. Or maybe, more reasonably, you think that everyone ought to have a position about why they aren’t constantly saying “I think …”, and you’ll only derail when they refuse to admit that they’re making an aesthetic choice.
I only insist that people do not conflate representation and reality. To me, stating that an object “is” is already a fallacy (though I accept this as a convenient way of speaking). An object appears or is conceived, but we do not know what it is in itself, and we should not talk about what we do not know. To me, uncritically assuming that there exists an objective world and trying to figure out what it is, is already a fallacy. Why do I think that? Because I think there is no absolute, only relations.
Who cares?
So I agree that whether or not an observer views something as an algorithm is, in fact, contingent. But the claim is that people and the universe are in fact algorithms. To put it in pragmatic language: representing the universe as an algorithm and its components as subroutines is a useful and clarifying way of conceptualizing that universe relative to competing views, and has no countervailing disadvantages relative to other ways of conceptualizing the universe.
I prefer this formulation, because you emphasize the representational aspect. Now, a representation (a conceptualization) requires someone who conceptualizes/represents things. I think that this “useful and clarifying way” just forgets that a representation is always relative to a subject. The last part of the sentence only expresses your proud ignorance (sorry)...
What proud ignorance? I haven’t proudly asserted anything (I’m not among your downvoters). My point is, if you dispute this metaphysics you need to explain what the disadvantages of it are, and you haven’t done that, which is what is frustrating people.
I am not saying that it is meant in this way, but the following could be construed as a proud assertion: “...has no countervailing disadvantages relative to other ways of conceptualizing the universe.”
I agree that representing the universe as an algorithm is a useful view. I am not sure what you mean by “its components as subroutines”, though. What are the components of the universe?
Re: the first part, that’s just what it means to assert that “the universe is an algorithm”.
The components are you, me, the galaxy, socks, etc.
I thought you were only talking about representing the universe as an algorithm, which seems like a good idea. You could also claim that “the universe is an algorithm”, but I find that statement too vague: what does ‘is’ mean in this sentence?
Quinean ontological pragmatism just paraphrases existential claims as “x figures in our best explanation of the universe”. So ‘is’ in the sentence “the universe is an algorithm” means roughly the same thing as ‘are’ in the sentence “there are atoms in the universe”.
I see what you’re saying, and on reflection it might be a dangerously misleading thing to say. The best candidate algorithm would not have such subroutines; however, more complex but functionally identical algorithms would.
The corporatist downvote system of this site is extremely annoying. I am proposing a valid and relevant argument. I expect counter-arguments from people who disagree, not downvotes. Why not reserve downvotes for unargued or irrelevant comments?
I’m really curious: What work is the word “corporatist” doing in this sentence? In what sense is the downvote system “corporatist”?
Your above comment could be phrased better (it makes a valid point in a way that can be easily misinterpreted as proposing some mushy-headed subjective relativism), but I agree that people downvoting it are very likely overconfident in their own understanding of the problem.
My impression is that the concept of “algorithm” (and “computation” etc.) is dangerously close to being a semantic stop sign on LW. It is definitely often used to underscore a bottom line without concern for its present problematic status.
The guideline is to upvote things you want to see more of, and downvote things you want to see less of. That leaves room for interpretation about where the two quality thresholds should be, but in practice they’re both pretty high and I think that’s a good thing. There are a lot of things that could be wrong with a comment besides being irrelevant or not being argued. In this case, I think the problem is arguing one side of a confusing question rather than trying to clarify or dissolve it.
Votes are not always for good reasons, whatever the guidelines. Getting good behavior out of people works best if people are accountable for what they do, and tends to fail when they are not. People who comment are accountable in at least two ways that people who vote are not:
1) They have to explain themselves. That, after all, is what a comment is.
2) They have to identify themselves. You can’t comment without an account.
Voters have to do neither. Now, even though commenters are doubly accountable, I think most will agree that a certain nonzero proportion of the comments are not very good. Take away accountability, and we should expect the proportion of bad ones to increase.
It’s a category error. I am not a concept, nor an instance of a concept.
So you’re not a person?
Inside your representation, I might be a person, and I do represent myself as a person sometimes.
“… and words will never hurt me” :)
All of those questions have known answers, but you have to take them on one at a time. Most of them go away when you switch from discrete (boolean) reasoning to continuous (probabilistic) reasoning.
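A hedged sketch of that discrete-to-continuous move (my own illustration; the numbers are arbitrary):

```python
# Instead of flipping a belief between True and False, maintain a
# probability and update it with Bayes' rule as evidence arrives.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from prior P(H) and the two likelihoods."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

belief = 0.5                              # undecided, not True/False
belief = bayes_update(belief, 0.9, 0.2)   # observe evidence favoring H
print(round(belief, 2))                   # 0.82: strengthened, not absolute
```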
Each of those questions has several known and unknown answers...
Moreover, the same questions apply to your preconception of continuity and probability. How could you know that these notions apply to your inputs? For example: saying “I feel 53% happy” does not make sense, unless you think happiness has a definite meaning and is reducible to something measurable. Both are questionable. Does any concept have a definite meaning? Maybe happiness has a “probabilistic” meaning? But what does it rest upon? How do you know that all your input is reducible to measurable constituents, and how could you prove that?
Cox’s theorem. Probability reduces to set measure, which requires nothing but a small set of mathematical axioms.
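For reference, a standard statement of the measure axioms being invoked (textbook Kolmogorov axioms, added for context; not part of the original comment):

```latex
% Probability as a normalized measure P on a sigma-algebra
% \mathcal{F} of subsets of a sample space \Omega:
\begin{align*}
  &P : \mathcal{F} \to [0,1], \\
  &P(\Omega) = 1, \\
  &P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
   \quad \text{for pairwise disjoint } A_i \in \mathcal{F}.
\end{align*}
```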
My question is what “happiness” rests upon. A probability of what? You need an a priori model of what happiness is in order to measure it (that is, a theory of mind), which you do not have. And verifying your model depends on your model...
You argued that “I believe P with probability 0.53” might be as meaningless as “I am 53% happy”. It is a valid response to say, “Setting happiness aside, there actually is a rigorous foundation for quantifying belief—namely, Cox’s theorem.”
The problem here is that “I believe P” presupposes a representation/a model of P. There must be a pre-existing model prior to applying Cox’s theorem to anything. My question is semantic: what does this model rest on? The probabilities you get will depend on the model you adopt, and I am pretty sure that there is no definitive model/conception of anything (see the problem of translation analyzed by Quine, for example).