I really, really dislike waste.

But the thing is, I basically hate the way everybody else hates waste, because I get the impression that they don’t actually hate waste; they hate something else.
People who talk about limited resources don’t actually hate waste—they hate the expenditure of limited resources.
People who talk about waste disposal don’t actually hate waste—they hate landfills, or trash on the side of the road, or any number of other things that aren’t actually waste.
People who talk about opportunity costs (‘wasteful spending’) don’t hate the waste, they hate how choices were made, or who made the choices.
Mind, wasting limited resources is bad. Waste disposal is itself devoting additional resources—say, the land for the landfill—to waste. And opportunity costs are indeed the heart of the issue with waste.
At this point, the whole concept of finishing the food on your plate because kids in Africa don’t have enough to eat is the kind of old-fashioned where jokes about it being old fashioned are becoming old fashioned, but the basic sentiment there really cuts to the heart of what I mean by “waste”, and what makes it a problem.
Waste is something that isn’t used. It is value that is destroyed.
The plastic wrapping your meat that you throw away isn’t waste. It had a purpose to serve, and it fulfilled it. Calling that waste is just a value judgment on the purpose it was put to. The plastic is garbage, and the conflation of waste and garbage has diminished an important concept.
Food you purchase, that is never eaten and thrown away? That is waste. Waste, in this sense, is the opposite of exploitation. To waste is to fail to exploit. However, we use the word waste now to just mean that we don’t approve of the way something is used, and the use of the word to express disapproval of a use has basically destroyed—not the original use of the word, but the root meaning which gives the very use of the word to express disapproval weight. Think of the term “wasteful spending”—you already know the phrase just means spending that the speaker disapproves of; the word “wasteful” has lost all other significance.
Mind, I’m not arguing that “waste” literally only means a specific thing. I’m arguing that an important concept has been eroded by use by people who were deliberately trying to establish a link with that concept.
Which is frustrating, because it has eroded a class of criticisms that I think society desperately needs, which have been supplanted by criticisms rooted in things like environmentalism, even when environmentalism isn’t actually a great fit for the criticisms—it’s just the framing for this class of criticism where there is a common conceptual referent, a common symbolic language.
And this actually undermines environmentalism; think about corporate “green” policies, and how often they’re actually cost-cutting measures. Cutting waste, once upon a time, had the same kind of public appeal; now if somebody talks about cutting waste, I’m wondering what they’re trying to take away from me. We’ve lost a symbol in our language, and the replacement isn’t actually a very good fit.
now if somebody talks about cutting waste, I’m wondering what they’re trying to take away from me.
This probably applies to applause lights in general. Individuals sometimes do things for idealistic reasons, but corporations are led by people selected for their ability to grab power and resources. Therefore all their actions should be suspected as an attempt to gain more power and/or resources. A “green policy” might mean less toilet paper in the company bathrooms, but it never means fewer business trips for the management.
This concept is not fully formed. It is necessary that it is not fully formed, because once I have finished forming it, it won’t be something I can communicate any longer; it will become, to borrow a turn of phrase from SMBC, rotten with specificity.
I have noticed a shortcoming in my model of reality. It isn’t a problem with the accuracy of the model, but rather there is an important feature of the model missing. It is particularly to do with people, and the shortcoming is this: I have no conceptual texture, no conceptual hook, to attach nebulous information about people to.
To explain what I need a hook for: a professional acquaintance has reciprocated trust. There is a professional relationship there, but also a human interaction; the trust involved means we can proceed professionally without negotiating contractual terms beforehand. However, it would undermine the trust in a very fundamental way to actually treat this as the meaning of the trust. That is to say, modeling the relationship as transactional would undermine the basis of the relationship (but for the purposes of describing things, I’m going to do that anyways, basically because it’s easier to explain that way; any fair description of a relationship of any kind should not be put to a short number of words).
I have a pretty good model of this person, as a person. They have an (adult) child who has developed a chronic condition; part of basic social interaction is that, having received this information, I need to ask about said child the next time we interact. This is something that is troubling this person; my responsibility, to phrase it in a misleading way, is to acknowledge them, to make what they have said into something that has been heard, and in a way that lets them know that they have been heard.
So, returning to the shortcoming: I have no conceptual texture to attach this to. I have never built any kind of cognitive architecture that serves this kind of purpose; my interactions with other humans are focused on understanding them, which has basically served me socially thus far. But I have no conceptual hook to attach things like “Ask after this person’s child”. My model is now updated to include the pain of that situation; there is nothing in the model that is designed to prompt me to ask. I have heard; now I need to let this person know that they have been heard, and I reach for a tool I suddenly realize has never existed. I knew this particular tool was necessary, but have never needed it before.
It’s maybe tempting to build general-purpose mental architecture to deal with this problem, but as I examine it, it looks like maybe this is a problem that actually needs to be resolved on a more individual basis, because as I mentally survey the situation, a large part of the problem in the first place is the overuse of general-purpose mental architecture. I should have noticed this missing tool before.
I am not looking for solutions. Mostly it is just interesting to notice; usually, with these sorts of things, I’ve already solved the problem before I’ve had a chance to really notice, observe, and interact with the problem, much less notice the pieces of my mind which actually do the solving. Which itself is an interesting thing to notice; how routine the construction of this kind of conceptual architecture has gotten, that the need for novel mental architecture actually makes me stop for a moment, and pay attention to what is going on.
It can sometimes be hard to notice the things you mentally automate; the point of automating things is to stop noticing them, after all.
That’s a fascinating observation! When I introspect the same process (in my case, it might be “ask how this person’s diabetic cat is doing”), I find that nothing in the model itself is shaped like a specific reminder to ask about the cat. The way I end up asking is that when there’s a lull in the conversation, I scan the model for recent and important things that I’d expect the person might want to talk about, and that scan brings up the cat. My own generalizations, in turn, likely leave gaps which yours would cover, just as the opposite seems to be happening here.
Observe. (If you don’t want to or can’t, it’s a video showing the compression wave that forms in traffic when a car brakes.)
I first saw that video a few years ago. I remembered it a few weeks ago when driving in traffic, and realized that a particular traffic condition was caused by an event that had happened some time in the past, that had left an impression, a memory, in the patterns of traffic. The event, no longer present, was still recorded. The wave form in the traffic patterns was a record of an event—traffic can operate as a storage device.
Considering traffic as a memory storage device, it appears traffic reaches its capacity when it is no longer possible for traffic to move at all. In practice I think traffic would approach this limit asymptotically, as each additional event stored in its memory reduces the speed at which traffic moves, such that it takes additional time to store each event in proportion to the number of events already stored. That is, the memory storage of traffic is not infinite.
Notably, however, the memory storage of traffic can, at least in principle, be read. Traffic itself is an imperfect medium of storage—it depends on a constant flow of cars, or else the memory is erased, and also cars don’t behave uniformly, such that events aren’t stored perfectly. However, it isn’t perfectly lossy, either; I can know from an arbitrary slow-down that some event happened in traffic in the past.
You could probably program a really, really terrible computer on traffic.
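To make the storage claim concrete, here is a minimal sketch, not a calibrated traffic model: a Nagel-Schreckenberg-style cellular automaton on a ring road, in which one car brakes briefly and the resulting compression wave outlives the braking, drifting backward through the traffic before slowly dissolving. All the parameters are invented for illustration.

```python
# A toy ring-road simulation (Nagel-Schreckenberg-style rules, deterministic).
# One car is forced to stop for a few steps around t = 20; the slowdown it
# creates persists after the event, drifting backward before dissolving.
ROAD_LENGTH = 210          # cells on a circular road
NUM_CARS = 30              # one car every 7 cells, so free flow is possible
V_MAX = 5                  # speed limit, in cells per step

positions = [i * (ROAD_LENGTH // NUM_CARS) for i in range(NUM_CARS)]
velocities = [V_MAX] * NUM_CARS

for t in range(120):
    new_velocities = []
    for i in range(NUM_CARS):
        gap = (positions[(i + 1) % NUM_CARS] - positions[i] - 1) % ROAD_LENGTH
        v = min(velocities[i] + 1, V_MAX)   # accelerate toward the limit
        v = min(v, gap)                     # never hit the car ahead
        if i == 0 and 20 <= t < 25:
            v = 0                           # the single "event" being stored
        new_velocities.append(v)
    velocities = new_velocities
    positions = [(x + v) % ROAD_LENGTH for x, v in zip(positions, velocities)]

    if t % 10 == 0:
        mean_v = sum(velocities) / NUM_CARS
        stopped = sum(v == 0 for v in velocities)
        print(f"t={t:3d}  mean speed={mean_v:.2f}  stopped cars={stopped}")
```

Reading the printed mean speed is, in effect, reading the memory: a depressed value at t = 60 tells you only that something happened earlier, not what, which is the lossiness discussed in the replies below.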
But the more interesting thing, to me, is the idea that a wave in a moving medium can store information; this seems like the sort of “Aha” moment that, were I working on a problem that this could apply to, would solve that problem for me. I’m not working on such a problem, but maybe this will find its way to somebody who is.
Edit: It occurs to me this may look like a trivial observation; of course waves can store information, that’s what radio is. But that’s not what I mean; radio is information transmission, not information storage. It’s the persistence of the information that surprised me, and which I think might be useful, not the mere act of encoding information onto a wave.
Data storage and transmission are the same thing. Both are communication to the future (though sometimes to the very, very near future). Over long enough distances, radio and wires can be information storage. Like all storage media, they aren’t permanent, and need to be refreshed periodically. For waves, this period is short, microseconds to hours. For more traditional storage (clay tablets or engraved gold discs sent to space, for instance), it could be decades to millennia.
Traffic is quite lossy as an information medium—effects remain for hours, but there are MANY possible causes of the same effects, and hard-to-predict decay and reinforcement rates, so it only carries a small amount of information: something happened in the recent past. Generally, this is a good thing—most of us prefer that we’re not part of that somewhat costly information storage, and we pay traffic engineers and car designers a great deal of money to minimize information retention in our roads.
You have (re)invented delay-line memory!

Acoustic memory in mercury tubes was indeed used by most of the first-generation electronic computers (1948-60ish); I love the aesthetic but admit they’re terrible even compared to electromagnetic delay lines. An even better (British) aesthetic would be Turing’s suggestion of using Gin as the acoustic medium...
It took me a little while to understand what criticism Aella raised over Eliezer’s defense of the concept of truth.

So, to try to summarize what I am now reasonably certain the criticism was:
Eliezer argues that “truth”, as a concept, reflects our expectation that our expectations of reality can match our experiences of reality.
Aella’s criticism is that “of reality” adds nothing to the previous sentence, and Eliezer is sneaking reality into his concept of truth; that is, Eliezer’s argument can be reframed “Our expectation of our experiences can match our experiences”.
The difficulty I had in understanding Aella’s argument is that she framed it as a criticism of the usefulness of truth, itself. That is, I think she finds the kind of “truth” we are left with, after subtracting a reality (an external world) that adds nothing to it, to be kind of useless (either that or she expects readers to).
Whereas I think it’s basically the same thing. Just as subtracting “of reality” removes nothing from the argument, I think adding it doesn’t actually add anything to the argument, because I think “reality”, or “external world”, are themselves just pointers at the fact that our experiences can be expected, something already implicit in the idea of having expectations in the first place.
Reality is just the pattern of experiences that we experience. Truth is a pattern which correlates in some respect with some subset of the pattern of experiences that we experience.
When I expect it to rain and then it doesn’t and I feel surprised, what is happening? In my subjective experience, this moment, I am imagining a prior version of myself that had a belief about the world (it will rain!), and I am holding a different belief than what I imagine my previous self had (It isn’t raining!). I am holding a contrast between those two, and I am experiencing the sensation of surprise. This is all surprise is, deep down. Every interpretation of reality can be described in terms of a consistent explanation of the feeling of our mental framework right at this moment.
...it feels like some map-and-territory confusion. It’s like if I insisted that the only things that exist are words. And you could be like: “dude, just look at this rock! it is real!”, and I would say: “but ‘dude’, ‘just’, ‘look’, ‘at’, ‘this’, ‘rock’, ‘it’, ‘is’, and ‘real’ are just words, aren’t they?” And so on, whatever argument you give me, I will ignore it and merely point out that it consists of words, therefore it ultimately proves me right. -- Is this a deep insight, or am I just deliberately obtuse? To me it seems like the latter.
By this logic, it’s not even true that two plus two equals four. We only have a sensation of two plus two being four. But isn’t it interesting that these “sensations” together form a coherent mathematics? Nope, we only have a sensation of these sensations forming a coherent mathematics. Yeah, but the reason I have the sensation of math being coherent is because the math actually is coherent, or isn’t it? Nah, you just have a sensation of the reason of math’s coherency being the math’s actual coherency. But that’s because… Nope, just a sensation of becauseness...
To make it sound deeper: the moon allegedly does not exist, because your finger that points at it is merely a finger.
EDIT: A comment below the criticism points out that the argument against reality can be also used as an argument against existence of other people (ultimately, only your sensations of other people exist), therefore this line of thought logically ends at solipsism.
EDIT: It’s actually quite an interesting blogger! The article on reality didn’t impress me, but many others did. For example, Internet communities: Otters vs. Possums is a way more charitable interpretation of the “geeks and sociopaths” dynamics in communities.
Her writing is pretty good, yeah.
The rest of the blog made me pause on the article for a lot longer than I usually would have, to try to figure out what the heck she was even arguing. There really is a thing there, which is why when I figured it out I came here and posted it. Apparently it translates no better than her own framing of it, which I find interesting.
Talking about words is an apt metaphor, but somewhat misleading in the specifics. Abstractly, I think Aella is saying that, in the map-territory dichotomy, the “territory” part of the dichotomy doesn’t actually add anything; we never experience the territory, it’s a strictly theoretical concept, and any correspondence we claim to have between maps and territory is actually a correspondence of maps and maps.
When you look at the world, you have a map; you are seeing a representation of the world, not the world itself. When you hear the world, you have a map. All of your senses provide maps of the world. Your interpretation of those senses is a map-of-a-map. Your model of those interpretations is a map-of-a-map-of-a-map. It’s maps all the way down, and there is no territory to be found anywhere. The “territory” is taken axiomatically—there is a territory, which maps can match better or worse, but it is never actually observed. In this sense, there is no external world, because there is no reality.
I think the criticism here is of a conceptualization of the universe in which there’s a platonic ideal of the universe—reality—which we interact with, and with regards to which we can make little facsimiles—theories, or statements, or maps—which can be more or less faithful reproductions of the ideal (more or less true).
So strictly speaking, this it’s-all-maps explanation is also misleading. It’s territory all the way down, too; your sight isn’t a map of reality, it is part of reality. There are no maps; everything is territory. There is no external reality because there is not actually a point at which we go from “things that aren’t real” to “things that are real”, and on a deeper level, there’s not a point at which we go from the inside to the outside.
Is an old map of a city, which is no longer accurate, true?
The “maps all the way down” does not explain why there is (an illusion of) a reality that all these maps are about. If there is no underlying reality, why aren’t the maps completely arbitrary?
The criticism Aella is making is substantively different than “reality isn’t real”.
So, imagine you’re god. All of reality takes place in your mind; reality is literally just a thought you had. How does Eliezer’s concept of “truth” work in that case?
Suppose you’re mentally ill. How much should you trust something that claims to be a mind? Is it possible for imaginary things to surprise you? What does truth mean, if your interface to the “external world”/”reality” isn’t reliable?
Suppose you’re lucid dreaming. Does the notion of “truth” stop existing?
(But also, even if there is no underlying reality, the maps still aren’t going to be completely arbitrary, because a mind has a shape of its own.)
So, imagine you’re god. All of reality takes place in your mind; reality is literally just a thought you had. How does Eliezer’s concept of “truth” work in that case?
Then the god’s mind would be the reality; god’s psychological traits would be the new “laws of physics”, kind of.
I admit I have a problem imagining “thoughts” without also imagining a mind. The mechanism that implements the mind would be the underlying reality.
We can suppose that the god is just observing what happens when a particular mathematical equation runs; that is, the universe can, in a certain sense, be entirely independent of the god’s thoughts and psychological traits.
Independence might be close enough to “external” for the “external world” concept to apply; so we can evaluate reality as independent from, even for argument’s sake external to, the god’s mind, even though it exists within it.
So we can have truth which is analogous to Eliezer’s truth.
Now, the question is—does the “external world” and “independence” actually add anything?
Well, suppose that the god can and does alter things; observes how the equation is running, and tweaks the data.
Does “truth” only exist with respect to the parts of this world that the god hasn’t changed? Are the only “true” parts of this reality the parts that are purely the results of the original equation? If the god makes one adjustment ever, is truth forever contaminated?
Okay, let’s define the external world to be the equation itself. The god can choose which equation to run, can adjust the parameters; where exactly in this process does truth itself lie? Maybe in the mathematics used to run the equation? But mathematics is arbitrary; the god can alter the mathematics.
Well, up to a point, the point Aella points at as “consistency.” So there’s that piece; the truth has to at least be consistent. And I think I appreciate the “truth” of the universe that isn’t altered; there’s consistency there, too.
Which leaves the other part, experience.
Suppose, for a moment, we are insane (independently, just imagine being insane); the external reality you observe is illusory. Does that diminish the value of what we consider to be the truth in anticipating our experiences? If this is all a grand illusion—well, it’s quite a consistent illusion, and I know what will happen when I engage in the experience I refer to when I say I drop an apple. I call the illusion ‘reality’, and it exists, regardless of whether or not it satisfies the aesthetic ideal I have for what “existence” should actually mean.
Which is to say—it doesn’t matter if I am living in reality, or in a god’s mathematical equation, or in a fantasy. The existence or nonexistence of an external reality has no bearing on whether or not I expect an apple to hit the ground when I let go of it; the existence or nonexistence of an external reality has no bearing on whether the apple will do so. Whether the apple exists in the real world, or as a concept in my mind, it has a particular set of consistent behaviors, which I experience in a particular way.
Whereas I take the view that truth in the sense of instrumentalism, prediction of experience, and truth in the sense of realism, correspondence to the territory, are different and both valid. Having recognised the difference, you don’t have to eliminate one, or identify it with the other.
If we consider the extra dimension(s) on which the amplitude of the wave function given to the Schrodinger Equation is expressed, the wave function instead defines a topology (or possibly another geometric object, depending on exactly what properties end up being invariant).
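For reference, one way to read “the dimension(s) of amplitude” (this reading is my assumption, not something stated above): the amplitude is complex, so it contributes two real dimensions, and including them turns the graph of the wave function into a geometric object in its own right.

```latex
% The Schrodinger equation determines a complex-valued wave function:
i\hbar\,\partial_t \psi(x,t) = \hat{H}\,\psi(x,t),
\qquad \psi : \mathbb{R}^{n}\times\mathbb{R} \to \mathbb{C}

% Treating the two real components of the amplitude as coordinates,
% the graph of the wave function is a surface, not just a function:
\Gamma_\psi = \{\,(x,\;t,\;\operatorname{Re}\psi(x,t),\;\operatorname{Im}\psi(x,t))\,\}
\subset \mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{2}
```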
If the topology can be evaluated over time by some alternative mathematical construct, that alternative mathematical construct may form the basis for a more powerful (in the sense of describing a wider range of potential phenomena) physics, because it should be constructable in such a way as to not possess the limitation of the Schrodinger Equation that the function must return a value over the entire dimensional space under consideration. (That is, observe that for y=sin(x), the waveform cannot be evaluated in terms of y, because it isn’t defined for all of y.)
Additionally, the amplitude of a quantum wave is geometrically limited in a way that a geometric object possessing the dimension(s) of amplitude shouldn’t be; quantum waves have an extent from 0 to the amplitude, whereas a generalized geometric object should permit discontinuous extents, or extents which do not include the origin, or extents which cross the origin. If we treat the position but not the measure of the extent of the dimension(s) of amplitude as having topological properties, then with the exception of discontinuous extents / amplitudes, many of these geometries may be homotopic with the wave function itself; however, there may be properties that can be described in terms of a geometric object / topology that cannot be described in terms of the homotopic wave function.
Suppose you have a list of choices a selection must be made from, and that the decision theory axioms of orderability and transitivity apply.
It should then be possible to construct a binary tree representing this list of choices, such that a choice can be represented as a binary string.
Likewise, a binary string, in a certain sense, represents a choice.
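A minimal sketch of that equivalence (the option list is invented for illustration, echoing the washing-machine example later in the thread): index the ordered options, and a single selection becomes a fixed-width sequence of binary decisions, while any bit string of that width names a selection.

```python
# Encode a selection from an ordered option list as a bit string, and back.
from math import ceil, log2

options = ["ignore it", "fix it yourself", "call a repair company", "replace it"]

def choice_to_bits(options: list[str], choice: str) -> str:
    """Encode one selection as a string of binary (left/right) decisions."""
    width = ceil(log2(len(options)))      # decisions needed to single out one option
    return format(options.index(choice), f"0{width}b")

def bits_to_choice(options: list[str], bits: str) -> str:
    """Read a bit string back as a selection from the ordered option list."""
    return options[int(bits, 2)]

encoded = choice_to_bits(options, "call a repair company")
print(encoded)                            # '10'
print(bits_to_choice(options, encoded))   # 'call a repair company'
```

Four options need two binary decisions; N options need about log2(N), which is also the information content of the selection.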
In this specific sense, what computers automate is the process of selection, of choice. Noticing this, and noticing that computers have automated away considerable amounts of “work”, we must notice that “work”, in the occupational sense, is to a significant extent the process of making selections. The process of mechanization has been the process of converting physical labor into selective labor, and in some cases the creation of physical heuristics that substantially solve selective problems—a vacuum operates on a physical heuristic that things of certain sizes and weights are undesirable to have on/in carpeting, for instance.
Noticing that the information age has largely been an ongoing project of automating selection efforts, one notable exception does crop up—cases of crowdsourced selection efforts. Upvotes and downvotes and retweets and the various other crowdsourced mechanisms by which selective pressure is created are selective labor. We tend to think of this process as being to our own benefit, but I will observe the massive amount of monetary value that is extracted by the hosting platforms in the process—value that the hosting platforms enable, but do not create.
There are additionally individuals who create value—and, I would hope, a livelihood—based purely on selective labor we might not notice as labor. Curated musical lists, for example. Additionally, I notice an increasing trend of corporate entities performing selective effort on behalf of individual clients; when you get down to it, housing renting versus buying is a tradeoff between selection power (the ability to make selections) versus selection effort (the requirement to do so). And I notice the cost of renting is increasing relative to the cost of buying, and yet people I know who could buy, are still choosing to rent, and those who do buy, are increasingly buying housing which limits their exposure to selection effort (such as buying condos, duplexes, and in HOAs).
Other ways in which it looks like society is increasingly outsourcing selective effort: Political beliefs, truth-deciding (science), investment, maintenance of household items, movie selection, book selection, food selection. Anything where a company sends people a box of preselected items on a regular basis, where that is supplanting a previous personal selection effort.
The combination of these two things is interesting to me, particularly alongside other observations of society. Because the high degree of selective effort we undertake for free in some situations, combined with what I can only describe as an increasingly widespread social resistance to other forms of selective effort, looks like fatigue of executive function. We spend considerable effort making decisions for the financial benefit of social media corporations, and have little selective energy left to make decisions about our own lives.
---
This situation might be a problem, might not. It looks odd to me, to be certain, and I’m increasingly dubious over the structure of ownership of commons whose value is not created by those who extract value from them.
However, I think it’s more important than I’m suggesting here. I’ll return to the idea that mechanization has been the process of eliminating all work except selective effort: This suggests that the effectiveness of a corporation is entirely in how effectively selective efforts take place.
The problem for a corporation, government, or other large impersonal entity is compounded, because the first selective effort is selecting people to make selective efforts, who in turn will be selecting people to make selective efforts. A structure must be created to maintain a consistent form of selective effort—bureaucracy.
This begins to look a lot like the alignment problem, because, indeed, that’s basically what it is. And it is perhaps illustrative that thousands of years of social development have only really come up with one functional solution to the problem of alignment corruption: Competition, such that structures whose alignment becomes corrupted are dismantled or destroyed. Which is to say, yet another form of selection.
Only loosely related but your first sentences prompted it: A way to convert complex decisions into a tree of binary choices for humans is Final Version Perfected.
Another phrase for “binary string” is “number”. A choice can be represented by a number, ok. I think you’re skipping the hard part—discovering the choices, and mapping them to numbers.
Then you lose me when you start talking about crowdsourcing and political beliefs and investment and such. That’s all the hard part of mapping. And the resulting map is likely to be uncomputable given current limits (possibly even theoretically, if the computation includes the substrate on which it’s computing).
I don’t think there’s any logical chain here—just rambling.
The point is that meaningful labor is increasingly “selection effort”, the work involved in making a decision between multiple competing choices, and some starter thoughts about how society can be viewed once you notice the idea of making choices as meaningful labor (maybe even the only meaningful form of labor).
The idea of mapping binary strings to choices is a point that information is equivalent to a codification of a sequence of choices; that is, the process of making choices is in fact the process of creating information. For a choice between N options, the options can be considered a series of binary gates, whose value can be 0 or 1, and thus the choice between those options produces a binary string; information. Or a number, if you prefer to think of it that way. That is, making decisions is an information-producing activity.
I’m not sure if I’m just misunderstanding, or actively disagreeing.
Whether you model something as a tree of binary choices, or a lookup table of options doesn’t matter much on this level. The tree is less efficient, but easier to modify, but that’s a completely different level than your post seems to be about, and not relevant to whatever you’re trying to show.
But the hard and important point is NOT in making the decision or executing the choice(s) (whether a jump or a sequence of binary). That just does not matter. Actually identifying the options and decisions and figuring out what decisions are POSSIBLE is the only thing that matters.
The FRAMING of decisions is massively information-producing. Making decisions is also information-producing (in that the uncertainty of the future becomes the truth of the past), but isn’t “information labor” in the same way that creating the model is.
What are you calling the “framing” of a decision? Is it something other than a series of decisions about which qualities of the results of that decision you care about?
The “framing” of a decision is the identification that there’s a decision to make, and the enumeration of the set or series of sub-decisions that describe the possible actions.
Suppose for a moment your washing machine is broken.
You have some options; you could ignore the problem. You could try to fix it yourself. You could call somebody to fix it. This isn’t intended to be a comprehensive list of options, mind, these are cached thoughts.
Each of these options in turn produce new choices; what to do instead, what to try to do to fix it, who to call.
Let’s suppose for a moment that you decide to call somebody. Who do you call? You could dial random numbers into your phone, but clearly that’s not a great way of making that decision. You could look up a washing machine repair company on the internet; let’s suppose you do this.
How do you decide which repair company to call? There are reviews—these are choices other people have made about how they liked the service. But before you even get there, what site do you use to get reviews? That’s a choice. Maybe you let Google make that choice for you—you just pick whatever is the first listed site. The search engine is making choices for you; the review site algorithm is making choices for you; the people who posted reviews are making choices for you. Out of a vast space of options, you arrive at only a few.
Notice all the choices other people are making on your behalf in that process, however. You’re not calling a car mechanic to repair your washing machine, yet that is, in fact, an option.
---
Suppose you need to drive to a grocery store in a new city. What choices are you making, and what choices do you ask your cell phone navigation application to make for you? Are you making more or fewer choices than your parents would? What about your grandparents? What is the difference in the kind and quantity of choices being made?
Are there differences in the quality of choices being made? Who benefits from the choices we make now?
“Goal-oriented behavior” is actually pretty complicated, and is not, in fact, a natural byproduct of general AI. I think the kinds of tasks we currently employ computers to do are hiding a lot of this complexity.
Specifically, what we think of as artificial intelligence is distinct from motivational intelligence is distinct from goal-oriented behaviors. Creating an AI that can successfully play any video game is an entirely different technology stack from creating an AI that “wants” to play video games, which in turn is an entirely different technology stack from creating an AI that translates a “desire” to play video games into a sequence of behaviors which can actually do so.
The AI alignment issue is noticing that good motivation is hard to get right; I think this needs to be “motivation is going to be hard to do at all, good or bad”; possibly harder than intelligence itself. Part of the problem with AI writing right now is that the writing is, basically, unintentional. You can get a lot further with unintentional writing, but intentional writing is far beyond anything that exists right now. I think a lot of fears come about because of a belief that motivation can arise spontaneously, or that intentionality can arise out of the programming itself; that we might write our desires into machines such that machines will know desire.
What would it take for GPT-3 to want to run itself? I don’t think we have a handle on that question at all.
Goal-oriented behaviors, meanwhile, correspond to an interaction between motivation and intelligence that itself is immensely more complicated than either independently.
---
I think part of the issue here is that, if you ask why a computer does something, the answer is “Because it was programmed to.” So, to make a program do something, you just program it to do it. Except this is moving the motivation, and intentionality, to the programmer; or, alternatively, to the person pressing the button causing the AI to act.
The AI in a computer game does what it does, because it is a program that is running, that causes things to happen. If it’s a first person shooter, the AI is trying to kill the player. The AI has no notion of killing the player, however; it doesn’t know what it is trying to do, it is just a series of instructions, which are, if you think about it, a set of heuristics that the programmer developed to kill the player.
This doesn’t change if it’s a neural network. AlphaGo is not, in fact, trying to win a game of Go; it is the humans who trained it who have any motivation, AlphaGo itself is just a set of really good heuristics. No matter how good you make those heuristics, AlphaGo will never start trying to win a game, because the idea of the heuristics in question trying to win a game is a category error.
I think, when people make the mental leap from “AI we have now” to “general AI”, they’re underspecifying what it is they are actually thinking about.
AI that can solve a specific, well-defined problem.
AI that can solve a well-defined problem. ← This is general AI; a set of universal problem-solving heuristics satisfies this criterion.
AI that can solve a poorly-defined problem. ← This, I think, is what people are afraid of, for fear that somebody will give it a problem, ask it to solve it, and it will tile the universe in paperclips.
Assuming all marginal economic growth comes from eliminating unnecessary expenses—increasing efficiency—then companies moving from a high-tax locality to a low-tax locality is, in fact, economic growth.
Is it a central example of economic growth, or am I just engaging in a rhetorical exercise?
Well, assuming a diverse ecosystem of localities with different taxes and baskets of goods—that is, assuming that we do in fact get something for paying taxes—a company moving from a high-tax locality to a low-tax locality is effectively moving from a high-cost plan which covers a wide range of bundled goods and services to a low-cost plan which covers a smaller range. That is, insofar as high taxes pay for anything at all, a company moving to a low-tax locality is reducing its consumption of those goods and services. So, assuming that marginal economic growth arises from reducing consumption, and assuming that taxes are purchasing something, it is fair to describe this as an average, that is, central, example of economic growth at the margins.
Whether or not taxes actually buy anything is, of course, another question entirely. Another question is whether such economic growth in this case is coming at the expense of values we’d rather not sacrifice.
Government spending is included in GDP, so GDP will go up some as the company is able to buy and sell more stuff, but down some as the government is less able to buy and sell stuff.
I think that line of argument proves too much; anytime anybody consumes less of a good, the seller has less ability to buy and sell things, where the buyer has more ability to buy and sell things; the government isn’t a special case here. More, the reversal of this argument is just the broken window fallacy with a reversal of the status quo.
Here’s what I understood you to be saying in the OP: that paying less taxes is economic growth because if you pay less taxes, you can produce more for less money. I’m saying that isn’t necessarily true because you’re not accounting for the reduction in economic activity that comes from the government being less able to buy and distribute things. It may well be true that moving to a low-tax locality does cause economic growth, but it won’t always, so I wouldn’t say that it’s a central example of economic growth.
I don’t get what you mean by the analogy to consuming less of a good. Are you trying to say that my response is wrong because it implies that consuming fewer goods doesn’t always increase economic growth, because consuming fewer goods is like paying less taxes? Well, I don’t think that those are all that similar (the benefits you get from living in a locale are mostly funded by how much tax other people pay as well as non-government perks, you could totally move to a lower-tax jurisdiction and get more goods and services), but also it’s totally correct that consuming fewer goods doesn’t always increase economic growth.
More, the reversal of this argument is just the broken window fallacy with a reversal of the status quo.
Take two societies. They are exactly identical except in one respect: One has figured out how to manufacture light bulbs using 20% less glass.
Which society is richer?
I don’t know what you mean by this.
https://en.wikipedia.org/wiki/Parable_of_the_broken_window for a basic breakdown. When I say your argument is a reversal of the broken window fallacy, I’m saying your argument amounts to the idea that, in a society in which people routinely break windows, and this is a major source of economic activity, people shouldn’t stop breaking windows, on account of all the economic activity it generates.
OK: I think I missed that you’re implying that the cases where companies in fact move to low-tax jurisdictions count as growth, rather than all cases. It makes sense that if you model choice of how much taxes to pay as a choice of how much of some manufacturing input to buy, then companies only do that if it increases efficiency, and my argument above doesn’t make sense taken totally straightforwardly.
I still think you can be wrong for a related reason. Suppose the government spends taxes on things that increase economic growth that no private company would spend money on (e.g. foundational scientific research). Suppose also that that’s all it does with the money: it doesn’t e.g. build useless things, or destroy productive capabilities in other countries. Then moving to a lower tax jurisdiction will make your company more efficient, but will mean that less of the pro-growth stuff governments do happens. This makes the effect on growth neutral. Is this a good model of government? Well, depends on the government, but they really do do some things which I imagine increase growth.
My main objection is that thinking of government as providing services to the people who pay them is a bad model—in other words, it’s a bad idea to think of taxes as paying for a manufacturing input. When you move out of a state, the government probably spends less on the people still in there, and when you move into a new state, you mainly benefit from other people’s taxes, not your own. It’s as if, when you stopped buying glass from a glass company, they made everyone else’s glass worse: then it’s less obvious that, if your lightbulb company buys less glass, society will get richer.
My crackpot physics just got about 10% less crackpot. As it transpires, one of the -really weird- things in my physics, which I thought of as a negative dimension, already exists in mathematics—it’s a Riemann Sphere. (Thank you, Pato!)
This “really weird” thing is kind of the underlying topology of the universe in my crackpot physics—I analogized the interaction between this topology and mass once to an infinite series of Matryoshka dolls, where every other doll is “inside out and backwards”. Don’t ask me to explain that; that entire avenue of “attempting to communicate this idea” was a complete and total failure, and it was only after drawing a picture of the topology I had in mind that someone (Pato) observed that I had just drawn a somewhat inaccurate picture of a Riemann Sphere. (I drew it as a disk in which the entire boundary was the same point, 0, with dual infinities coinciding at the origin. I guess, in retrospect, a sphere was a more obvious way of describing that.)
If we consider that the points are not evenly allocated over the surface of the sphere—they’re concentrated at the poles (each of which is simultaneously 0 and infinity; the mapping is ambiguous)—then if we draw a line such that it crosses the same number of points with each revolution, we get something like a logarithmic spiral. (Well, it’s a logarithmic spiral in the “disk” interpretation; it’s a spherical spiral whose name I don’t know in the spherical interpretation.)
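For what it’s worth, the constant-angle spherical spiral is usually called a loxodrome, and its stereographic projection onto the plane is a logarithmic spiral, which seems to be why the “disk” and “sphere” pictures agree; whether either is exactly the intended constant-measure-per-revolution curve is my assumption, not something established above.

```latex
% Planar logarithmic spiral, polar form:
r(\theta) = a\,e^{b\theta}

% Spherical loxodrome, constant angle \beta to the meridians,
% in latitude \phi and longitude \lambda:
\lambda(\phi) = \lambda_0 + \tan(\beta)\,\ln\tan\!\left(\tfrac{\pi}{4} + \tfrac{\phi}{2}\right)
```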
If we consider the bundle of lines connecting the poles, and use this constant-measure-per-revolution spiral to describe their path, I think that’s about … 20% of the way to actually converting the insanity in my head into real mathematics. Each of these lines is “distance” (or, alternatively, “time”—it depends on which of the two charts you employ). The bundle of these spirals provides one dimension of rotation; there’s a mathematical way of extracting a second dimension of rotation, to get a three-dimensional space, but I don’t understand it at an intuitive level yet.
A particle’s perspective is “constantly falling into an infinity”; because of the hyperbolic nature of the space, I think a particle always “thinks” it is at the equator—it never actually gets any closer. Because the lines describe a spiral, the particle is “spinning”. Because of the nature of the geometry of the sphere, this spin expresses itself as a spinor, or at least something analogous to one.
Also, apparently, Riemann Spheres are already used in both relativistic vacuum field equations and quantum mechanics. Which, uh, really annoys me, because I’m increasingly certain there is “something” here, and increasingly annoyed that nobody else has apparently just sat down and tried to unify the fields in what, to me, is the most obvious bloody way to unify them; just assume they’re all curvature, that the curvature varies like a decaying sine wave (like “sin(ln(x))/x”, which exhibits exactly the kind of decay I have in mind). Logarithmic decay of frequency over distance ensures that there is a scalar symmetry, as does a linear decay of amplitude over distance.
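The scalar symmetry being claimed can be checked directly for the stand-in function named above: rescaling the distance x by any factor k only shifts the phase of the oscillation and rescales its amplitude.

```latex
f(x) = \frac{\sin(\ln x)}{x}
\qquad\Longrightarrow\qquad
f(kx) = \frac{\sin(\ln x + \ln k)}{k\,x}
```

That is, a change of distance scale shows up only as a phase offset of ln(k) and an overall factor of 1/k; the logarithm is what turns multiplication of distances into addition of phase.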
Yes, I’m aware of the intuitive geometry involved in an inverse-square law; I swear that the linear decay makes geometric sense too, given the topology in my head. Rotation of a logarithmic spiral gives rise to a linear rescaling relative to the arclength of that rotation. Yes, I’m aware that the inverse-square law also has lots of evidence—but it also has lots of evidence against it, which we’ve attempted to patch by assuming unobserved mass that precisely accounts for the observed anomalies. I posit that the sinusoidal wave in question has ranges wherein the amplitude is decaying approximately linearly, which creates the apparent inverse-square behavior for certain ranges of distances—and because these regions of space are where matter tends to accumulate, having the most stable configurations, they’re disproportionately where all of our observations are made. It’s kind of literally the edge cases, where the inverse-square relationship begins to break down (whether it really does, or apparently does), and the configurations become less stable, that we begin to observe deviations.
I’m still working on mapping my “sin(ln(x))/x” equation (this is not the correct equation, I don’t think, it’s just an equation that kind of looks right for what’s in my head, and it gave me hints about where to start looking) to this structure; there are a few options, but none stand out yet as obviously correct. The spherical logarithmic spiral is a likely candidate, but figuring out the definition of the spiral that maintains constant “measure” with each rotation requires some additional understanding on my part.
First, what phenomenon are we even talking about? It’s important to start here. I’m going to start somewhat cavalierly: Motion is a state of affairs in which, if we measure two variables, X and T, where X is the position on some arbitrary dimension relative to some arbitrary point using some arbitrary scale, and T is the position in “time” as measured by a clock (also arbitrary), we can observe that X varies with T.
Notice there are actually two distinct phenomena here: There is the fact that “X” changed, which I am going to talk about. Then there is the fact that “T” changed, which I will talk about later. For now, it is taken as a given that “time passes”. In particular, the value “T” refers to the measurement of a clock whose position is given by X.
Taking “T” as a dimension for the purposes of our discussion here, what this means is that motion is a state of affairs in which a change in position in time additionally creates a change in position in space; that is, motion is a state of affairs in which a spatial dimension, and the time dimension, are not independent variables. Indeed, special relativity gives us precisely the degree to which they are dependent on one another.
Consider geometries which are consistent with this behavior; if we hold motion in time as a given—that is, if we assume that the value on the clock will change—then the geometry can be a simple rotation.
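For reference, the degree of dependence mentioned above can be written in exactly this rotation-like form: in units where c = 1, a Lorentz boost at velocity v mixes x and t through a hyperbolic angle (the rapidity φ).

```latex
\begin{aligned}
x' &= x\cosh\varphi - t\sinh\varphi\\
t' &= t\cosh\varphi - x\sinh\varphi
\end{aligned}
\qquad \tanh\varphi = v
```

The only difference from an ordinary rotation is the hyperbolic functions, which preserve t² − x² rather than t² + x².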
However, suppose we don’t hold motion in time as a given. What geometry are we describing then? I think we’re looking at a three-dimensional geometry for this case, rather than a four-dimensional geometry.
Instead of further elaborations on my crackpot nonsense, something short:
I expect that there is some distance from a magnetic source between 10^5 meters and 10^7 meters at which there will be magnetic anomalies; in particular, there will be a phenomenon by which the apparent field strength drops much faster than expected and passes through zero into the negative (reversed polarity).
I specifically expect this to be somewhere in the vicinity of 10^6 meters, although the specific distance will vary with the mass of the object.
There should be a second magnetic anomaly somewhere in the vicinity of 10^12 m (So between 10^11 and 10^13), although I suspect at that distance it will be too faint to detect.
More easily detected, because there is a repulsive field at work at these distances—mass should be scarce at this distance from the dominant “local” masses, a scarcity that should continue up to about 10^18 m (between 10^17 and 10^19, although errors really begin to compound here); at 10^18 m, I expect an unusually dense distribution of matter; this value in the vicinity of 10^18 m should be the most common distance between objects in the galaxy.
It should be possible to find large masses (say, black holes) orbiting each other at, accounting for relativistic changes in distance, 10^18m, which we might otherwise expect to fall into one another—that is, there should be unexplainable orbital mechanics between large masses that are this distance apart.
I expect that there is some radius between 10^22 meters and 10^26 meters (vicinity of 10^24) which marks the largest possible size of a galaxy, and some radius between 10^28 and 10^32 (vicinity of 10^30) which marks the most common distance between galaxies.
Galaxies which are between the vicinity of 10^24 and the vicinity of 10^30 meters from one another should be moving apart, on average; galaxies which are greater than the vicinity of 10^30 meters from one another should be falling into one another on average.
Galaxies which are approximately 10^30 meters apart should be orbiting one another—neither moving towards nor away.
I really, really dislike waste.
But the thing is, I basically hate the way everybody else hates waste, because I get the impression that they don’t actually hate waste, they hate something else.
People who talk about limited resources don’t actually hate waste—they hate the expenditure of limited resources.
People who talk about waste disposal don’t actually hate waste—they hate landfills, or trash on the side of the road, or any number of other things that aren’t actually waste.
People who talk about opportunity costs (‘wasteful spending’) don’t hate the waste, they hate how choices were made, or who made the choices.
Mind, wasting limited resources is bad. Waste disposal is itself devoting additional resources—say, the land for the landfill—to waste. And opportunity costs are indeed the heart of the issue with waste.
At this point, the whole concept of finishing the food on your plate because kids in Africa don’t have enough to eat is the kind of old-fashioned where jokes about it being old fashioned are becoming old fashioned, but the basic sentiment there really cuts to the heart of what I mean by “waste”, and what makes it a problem.
Waste is something that isn’t used. It is value that is destroyed.
The plastic wrapping your meat that you throw away isn’t waste. It had a purpose to serve, and it fulfilled it. Calling that waste is just a value judgment on the purpose it was put to. The plastic is garbage, and the conflation of waste and garbage has diminished an important concept.
Food you purchase, that is never eaten and thrown away? That is waste. Waste, in this sense, is the opposite of exploitation. To waste is to fail to exploit. However, we use the word waste now to just mean that we don’t approve of the way something is used, and the use of the word to express disapproval of a use has basically destroyed—not the original use of the word, but the root meaning which gives the very use of the word to express disapproval weight. Think of the term “wasteful spending”—you already know the phrase just means spending that the speaker disapproves of, the word “wasteful” has lost all other significance.
Mind, I’m not arguing that “waste” literally only means a specific thing. I’m arguing that an important concept has been eroded by use by people who were deliberately trying to establish a link with that concept.
Which is frustrating, because it has eroded a class of criticisms that I think society desperately needs, which have been supplanted by criticisms rooted in things like environmentalism, even when environmentalism isn’t actually a great fit for the criticisms—it’s just the framing for this class of criticism where there is a common conceptual referent, a common symbolic language.
And this actually undermines environmentalism; think about corporate “green” policies, and how often they’re actually cost-cutting measures. Cutting waste, once upon a time, had the same kind of public appeal; now if somebody talks about cutting waste, I’m wondering what they’re trying to take away from me. We’ve lost a symbol in our language, and the replacement isn’t actually a very good fit.
This probably applies to applause lights in general. Individuals sometimes do things for idealists reasons, but corporations are led by people selected for their ability to grab power and resources. Therefore all their actions should be suspected as an attempt to gain more power and/or resources. A “green policy” might mean less toilet paper in the company bathrooms, but it never means fewer business trips for the management.
This concept is not fully formed. It is necessary that it is not fully formed, because once I have finished forming it, it won’t be something I can communicate any longer; it will become, to borrow a turn of phrase from SMBC, rotten with specificity.
I have noticed a shortcoming in my model of reality. It isn’t a problem with the accuracy of the model, but rather there is an important feature of the model missing. It is particularly to do with people, and the shortcoming is this: I have no conceptual texture, no conceptual hook, to attach nebuluous information to people to.
To explain what I need a hook for, a professional acquantance has recriprocated trust. There is a professional relationship there, but also a human interaction; the trust involved means we can proceed professionally without negotiating contractual terms beforehand. However, it would undermine the trust in a very fundamental way to actually treat this as the meaning of the trust. That is to say, modeling the relationship as transactional would undermine the basis of the relationship (but for the purposes of describing things, I’m going to do that anyways, basically because it’s easier to explain that way; any fair description of a relationship of any kind should not be put to a short number of words).
I have a pretty good model of this person, as a person. They have an (adult) child who has developed a chronic condition; part of basic social interaction is that, having received this information, I need to ask about said child the next time we interact. This is something that is troubling this person; my responsibility, to phrase it in a misleading way, is to acknowledge them, to make what they have said into something that has been heard, and in a way that lets them know that they have been heard.
So, returning to the shortcoming: I have no conceptual texture to attach this to. I have never built any kind of cognitive architecture that serves this kind of purpose; my interactions with other humans are focused on understanding them, which has basically served me socially thus far. But I have no conceptual hook to attach things like “Ask after this person’s child”. My model is now updated to include the pain of that situation; there is nothing in the model that is designed to prompt me to ask. I have heard; now I need to let this person know that they have been heard, and I reach for a tool I suddenly realize has never existed. I knew this particular tool was necessary, but have never needed it before.
It’s maybe tempting to build general-purpose mental architecture to deal with this problem, but as I examine it, it looks like maybe this is a problem that actually needs to be resolved on a more individual basis, because as I mentally survey the situation, a large part of the problem in the first place is the overuse of general-purpose mental architecture. I should have noticed this missing tool before.
I am not looking for solutions. Mostly it is just interesting to notice; usually, with these sorts of things, I’ve already solved the problem before I’ve had a chance to really notice, observe, and interact with the problem, much less notice the pieces of my mind which actually do the solving. Which itself is an interesting thing to notice; how routine the construction of this kind of conceptual architecture has gotten, that the need for novel mental architecture actually makes me stop for a moment, and pay attention to what is going on.
It can sometimes be hard to notice the things you mentally automate; the point of automating things is to stop noticing them, after all.
That’s a fascinating observation! When I introspect the same process (in my case, it might be “ask how this person’s diabetic cat is doing”), I find that nothing in the model itself is shaped like a specific reminder to ask about the cat. The way I end up asking is that when there’s a lull in the conversation, I scan the model for recent and important things that I’d expect the person might want to talk about, and that scan brings up the cat. My own generalizations, in turn, likely leave gaps which yours would cover, just as the opposite seems to be happening here.
Observe. (If you don’t want to or can’t, it’s a video showing the compression wave that forms in traffic when a car brakes.)
I first saw that video a few years ago. I remembered it a few weeks ago when driving in traffic, and realized that a particular traffic condition was caused by an event that had happened some time in the past, one that had left an impression, a memory, in the patterns of traffic. The event, no longer present, was still recorded. The wave form in the traffic patterns was a record of an event: traffic can operate as a storage device.
Considering traffic as a memory storage device, it appears traffic reaches its capacity when it is no longer possible for traffic to move at all. In practice I think traffic would approach this limit asymptotically, as each additional event stored in its memory reduces the speed at which traffic moves, such that it takes additional time to store each event in proportion to the number of events already stored. That is, the memory storage of traffic is not infinite.
Notably, however, the memory storage of traffic can, at least in principle, be read. Traffic itself is an imperfect medium of storage—it depends on a constant flow of cars, or else the memory is erased, and also cars don’t behave uniformly, such that events aren’t stored perfectly. However, it isn’t perfectly lossy, either; I can know from an arbitrary slow-down that some event happened in traffic in the past.
You could probably program a really, really terrible computer on traffic.
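To make that concrete, here is a minimal toy sketch (in Python, with names I’m inventing purely for illustration) of the kind of circulating memory a stream of traffic approximates:

    from collections import deque

    # A toy delay line: bits circulate through a fixed-length medium, and
    # whatever arrives at the read end is immediately re-injected at the
    # write end. That re-injection is the only thing keeping the memory
    # alive; stop the circulation and the contents are gone, much like
    # traffic dispersing overnight.
    def make_delay_line(bits):
        return deque(bits)

    def tick(line):
        bit = line.pop()         # the bit arriving at the "read head"
        line.appendleft(bit)     # re-inject it so it keeps circulating
        return bit

    line = make_delay_line([1, 0, 1, 1])
    word = [tick(line) for _ in range(len(line))]  # reads the stored word back, last bit first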
But the more interesting thing, to me, is the idea that a wave in a moving medium can store information; this seems like the sort of “Aha” moment that, were I working on a problem that this could apply to, would solve that problem for me. I’m not working on such a problem, but maybe this will find its way to somebody who is.
Edit: It occurs to me this may look like a trivial observation; of course waves can store information, that’s what radio is. But that’s not what I mean; radio is information transmission, not information storage. It’s the persistence of the information that surprised me, and which I think might be useful, not the mere act of encoding information onto a wave.
Data storage and transmission are the same thing. Both are communication to the future (though sometimes to the very, very near future). Over long enough distances, radio and wires can be information storage. Like all storage media, they aren’t permanent, and need to be refreshed periodically. For waves, this period is short, microseconds to hours. For more traditional storage (clay tablets or engraved gold discs sent to space, for instance), it could be decades to millennia.
Traffic is quite lossy as an information medium—effects remain for hours, but there are MANY possible causes of the same effects, and hard-to-predict decay and reinforcement rates, so it only carries a small amount of information: something happened in the recent past. Generally, this is a good thing—most of us prefer that we’re not part of that somewhat costly information storage, and we pay traffic engineers and car designers a great deal of money to minimize information retention in our roads.
You have (re)invented delay-line memory!
Acoustic memory in mercury tubes was indeed used by most first-generation electronic computers (1948-60ish); I love the aesthetic but admit they’re terrible even compared to electromagnetic delay lines. An even better (British) aesthetic would be Turing’s suggestion of using gin as the acoustic medium...
It took me a little while to understand what criticism Aella raised over Eliezer’s defense of the concept of truth.
So to try to summarize what I am now reasonably certain the criticism was:
Eliezer argues that “truth”, as a concept, reflects the idea that our expectations of reality can match our experiences of reality.
Aella’s criticism is that “of reality” adds nothing to the previous sentence, and Eliezer is sneaking reality into his concept of truth; that is, Eliezer’s argument can be reframed “Our expectation of our experiences can match our experiences”.
The difficulty I had in understanding Aella’s argument is that she framed it as a criticism of the usefulness of truth, itself. That is, I think she finds the kind of “truth” we are left with, after subtracting a reality (an external world) that adds nothing to it, to be kind of useless (either that or she expects readers to).
Whereas I think it’s basically the same thing. Just as subtracting “of reality” removes nothing from the argument, I think adding it doesn’t actually add anything to the argument, because I think “reality”, or “external world”, are themselves just pointers at the fact that our experiences can be expected, something already implicit in the idea of having expectations in the first place.
Reality is just the pattern of experiences that we experience. Truth is a pattern which correlates in some respect with some subset of the pattern of experiences that we experience.
I have read that criticism, and...
...it feels like some map-and-territory confusion. It’s like if I insisted that the only things that exist are words. And you could be like: “dude, just look at this rock! it is real!”, and I would say: “but ‘dude’, ‘just’, ‘look’, ‘at’, ‘this’, ‘rock’, ‘it’, ‘is’, and ‘real’ are just words, aren’t they?” And so on, whatever argument you give me, I will ignore it and merely point out that it consists of words, therefore it ultimately proves me right. -- Is this a deep insight, or am I just deliberately obtuse? To me it seems like the latter.
By this logic, it’s not even true that two plus two equals four. We only have a sensation of two plus two being four. But isn’t it interesting that these “sensations” together form a coherent mathematics? Nope, we only have a sensation of these sensations forming a coherent mathematics. Yeah, but the reason I have the sensation of math being coherent is because the math actually is coherent, or isn’t it? Nah, you just have a sensation of the reason of math’s coherency being the math’s actual coherency. But that’s because… Nope, just a sensation of becauseness...
To make it sound deeper: the moon allegedly does not exist, because your finger that points at it is merely a finger.
EDIT: A comment below the criticism points out that the argument against reality can also be used as an argument against the existence of other people (ultimately, only your sensations of other people exist), therefore this line of thought logically ends at solipsism.
EDIT: She’s actually quite an interesting blogger! The article on reality didn’t impress me, but many others did. For example, Internet communities: Otters vs. Possums is a way more charitable interpretation of the “geeks and sociopaths” dynamics in communities.
Her writing is pretty good, yeah.
The rest of the blog made me pause on the article for a lot longer than I usually would have, to try to figure out what the heck she was even arguing. There really is a thing there, which is why when I figured it out I came here and posted it. Apparently it translates no better than her own framing of it, which I find interesting.
Talking about words is an apt metaphor, but somewhat misleading in the specifics. Abstractly, I think Aella is saying that, in the map-territory dichotomy, the “territory” part of the dichotomy doesn’t actually add anything; we never experience the territory, it’s a strictly theoretical concept, and any correspondence we claim to have between maps and territory is actually a correspondence of maps and maps.
When you look at the world, you have a map; you are seeing a representation of the world, not the world itself. When you hear the world, you have a map. All of your senses provide maps of the world. Your interpretation of those senses is a map-of-a-map. Your model of those interpretations is a map-of-a-map-of-a-map. It’s maps all the way down, and there is no territory to be found anywhere. The “territory” is taken axiomatically—there is a territory, which maps can match better or worse, but it is never actually observed. In this sense, there is no external world, because there is no reality.
I think the criticism here is of a conceptualization of the universe in which there’s a platonic ideal of the universe—reality—which we interact with, and with regards to which we can make little facsimiles—theories, or statements, or maps—which can be more or less reproductions of the ideal (more or less true).
So strictly speaking, this it’s-all-maps explanation is also misleading. It’s territory all the way down, too; your sight isn’t a map of reality, it is part of reality. There are no maps; everything is territory. There is no external reality because there is not actually a point at which we go from “things that aren’t real” to “things that are real”, and on a deeper level, there’s not a point at which we go from the inside to the outside.
Is an old map of a city, which is no longer accurate, true?
The “maps all the way down” does not explain why there is (an illusion of) a reality that all these maps are about. If there is no underlying reality, why aren’t the maps completely arbitrary?
The criticism Aella is making is substantively different than “reality isn’t real”.
So, imagine you’re god. All of reality takes place in your mind; reality is literally just a thought you had. How does Eliezer’s concept of “truth” work in that case?
Suppose you’re mentally ill. How much should you trust something that claims to be a mind? Is it possible for imaginary things to surprise you? What does truth mean, if your interface to the “external world”/”reality” isn’t reliable?
Suppose you’re lucid dreaming. Does the notion of “truth” stop existing?
(But also, even if there is no underlying reality, the maps still aren’t going to be completely arbitrary, because a mind has a shape of its own.)
Then the god’s mind would be the reality; god’s psychological traits would be the new “laws of physics”, kind of.
I admit I have a problem imagining “thoughts” without also imagining a mind. The mechanism that implements the mind would be the underlying reality.
We can suppose that the god is just observing what happens when a particular mathematical equation runs; that is, the universe can, in a certain sense, be entirely independent of the god’s thoughts and psychological traits.
Independence might be close enough to “external” for the “external world” concept to apply; so we can evaluate reality as independent from, even for argument’s sake external to, the god’s mind, even though it exists within it.
So we can have truth which is analogous to Eliezer’s truth.
Now, the question is: do the “external world” and “independence” actually add anything?
Well, suppose that the god can and does alter things; observes how the equation is running, and tweaks the data.
Does “truth” only exist with respect to the parts of this world that the god hasn’t changed? Are the only “true” parts of this reality the parts that are purely the results of the original equation? If the god makes one adjustment ever, is truth forever contaminated?
Okay, let’s define the external world to be the equation itself. The god can choose which equation to run, can adjust the parameters; where exactly in this process does truth itself lie? Maybe in the mathematics used to run the equation? But mathematics is arbitrary; the god can alter the mathematics.
Well, up to a point, the point Aella points at as “consistency.” So there’s that piece; the truth has to at least be consistent. And I think I appreciate the “truth” of the universe that isn’t altered; there’s consistency there, too.
Which leaves the other part, experience.
Suppose, for a moment, we are insane (independently, just imagine being insane); the external reality you observe is illusory. Does that diminish the value of what we consider to be the truth in anticipating our experiences? If this is all a grand illusion—well, it’s quite a consistent illusion, and I know what will happen when I engage in the experience I refer to when I say I drop an apple. I call the illusion ‘reality’, and it exists, regardless of whether or not it satisfies the aesthetic ideal I have for what “existence” should actually mean.
Which is to say—it doesn’t matter if I am living in reality, or in a god’s mathematical equation, or in a fantasy. The existence or nonexistence of an external reality has no bearing on whether or not I expect an apple to hit the ground when I let go of it; the existence or nonexistence of an external reality has no bearing on whether the apple will do so. Whether the apple exists in the real world, or as a concept in my mind, it has a particular set of consistent behaviors, which I experience in a particular way.
Whereas I take the view that truth in the sense of instrumentalism, prediction of experience, and truth in the sense of realism, correspondence to the territory, are different and both valid. Having recognised the difference, you don’t have to eliminate one, or identify it with the other.
If we treat the extra dimension(s) along which the amplitude of the wave function given by the Schrodinger Equation extends as genuine geometric dimensions, the wave function instead defines a topology (or possibly another geometric object, depending on exactly what properties end up being invariant).
If the topology can be evaluated over time by some alternative mathematical construct, that alternative mathematical construct may form the basis for a more powerful (in the sense of describing a wider range of potential phenomena) physics, because it should be constructable in such a way as to not possess the limitation of the Schrodinger Equation that the function must return a value for the entire dimensional space under consideration. (That is, observe that for y=sin(x), the waveform cannot be evaluated as a function of y, because it isn’t defined for all values of y.)
Additionally, the amplitude of a quantum wave is geometrically limited in a way that a geometric object possessing the dimension(s) of amplitude shouldn’t be; quantum waves have an extent from 0 to the amplitude, whereas a generalized geometric object should permit discontinuous extents, or extents which do not include the origin, or extents which cross the origin. If we treat the position but not the measure of the extent of the dimension(s) of amplitude as having topological properties, then with the exception of discontinuous extents / amplitudes, many of these geometries may be homotopic with the wave-function itself; however, there may be properties that can be described in terms of a geometric object / topology that cannot be described in terms of the homotopic wave function.
Suppose you have a list of choices a selection must be made from, and that the decision theory axioms of orderability and transitivity apply.
It should then be possible to construct a binary tree representing this list of choices, such that a choice can be represented as a binary string.
Likewise, a binary string, in a certain sense, represents a choice.
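A minimal sketch of that correspondence (the helper names are hypothetical, purely for illustration): each halving of the ordered list is one yes/no gate, and the branch directions taken spell out the binary string.

    # Encode a selection from an ordered list of options as a binary string
    # by walking a balanced binary tree over the list (repeated halving).
    def choice_to_bits(options, pick):
        idx = options.index(pick)
        lo, hi = 0, len(options)
        bits = []
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if idx < mid:
                bits.append("0")
                hi = mid
            else:
                bits.append("1")
                lo = mid
        return "".join(bits)

    # The reverse direction: a binary string, read as branch directions,
    # picks out a choice.
    def bits_to_choice(options, bits):
        lo, hi = 0, len(options)
        for b in bits:
            mid = (lo + hi) // 2
            if b == "0":
                hi = mid
            else:
                lo = mid
        return options[lo]

    options = ["red", "green", "blue", "black"]
    choice_to_bits(options, "blue")    # "10"
    bits_to_choice(options, "10")      # "blue"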
In this specific sense, what computers automate is the process of selection, of choice. Noticing this, and noticing that computers have automated away considerable amounts of “work”, we must notice that “work”, in the occupational sense, is to a significant extent the process of making selections. The process of mechanization has been the process of converting physical labor into selective labor, and in some cases the creation of physical heuristics that substantially solve selective problems—a vacuum operates on a physical heuristic that things of certain sizes and weights are undesirable to have on/in carpeting, for instance.
Noticing that the information age has largely been an ongoing project of automating selection efforts, one notable exception does crop up: crowdsourced selection efforts. Upvotes and downvotes and retweets and the various other crowdsourced mechanisms by which selective pressure is created are selective labor. We tend to think of this process as being to our own benefit, but I will observe the massive amount of monetary value that is extracted by the hosting platforms in the process; value that the hosting platforms enable, but do not create.
There are additionally individuals who create value (and, I would hope, a livelihood) based purely on selective labor we might not notice as labor. Curated music playlists, for example. Additionally, I notice an increasing trend of corporate entities performing selective effort on behalf of individual clients; when you get down to it, renting versus buying housing is a tradeoff between selection power (the ability to make selections) and selection effort (the requirement to do so). And I notice the cost of renting is increasing relative to the cost of buying, and yet people I know who could buy are still choosing to rent, and those who do buy are increasingly buying housing which limits their exposure to selection effort (condos, duplexes, homes in HOAs).
Other ways in which it looks like society is increasingly outsourcing selective effort: Political beliefs, truth-deciding (science), investment, maintenance of household items, movie selection, book selection, food selection. Anything where a company sends people a box of preselected items on a regular basis, where that is supplanting a previous personal selection effort.
The combination of these two things is interesting to me, taken together with other observations of society. The high degree of selective effort we undertake for free in some situations, combined with what I can only describe as an increasingly widespread social resistance to other forms of selective effort, looks like fatigue of executive function. We spend considerable effort making decisions for the financial benefit of social media corporations, and have little selective energy left to make decisions about our own lives.
---
This situation might be a problem, might not. It looks odd to me, to be certain, and I’m increasingly dubious about the structure of ownership of commons whose value is not created by those who extract value from them.
However, I think it’s more important than I’m suggesting here. I’ll return to the idea that mechanization has been the process of eliminating all work except selective effort: this suggests that the effectiveness of a corporation lies entirely in how effectively its selective efforts take place.
The problem for a corporation, government, or other large impersonal entity is compounded, because the first selective effort is selecting people to make selective efforts, who in turn will be selecting people to make selective efforts. A structure must be created to maintain a consistent form of selective effort: bureaucracy.
This begins to look a lot like the alignment problem, because, indeed, that’s basically what it is. And it is perhaps illustrative that thousands of years of social development have only really come up with one functional solution to the problem of alignment corruption: competition, such that structures whose alignment becomes corrupted are dismantled or destroyed. Which is to say, yet another form of selection.
Only loosely related but your first sentences prompted it: A way to convert complex decisions into a tree of binary choices for humans is Final Version Perfected.
Another phrase for “binary string” is “number”. A choice can be represented by a number, ok. I think you’re skipping the hard part—discovering the choices, and mapping them to numbers.
Then you lose me when you start talking about crowdsourcing and political beliefs and investment and such. That’s all the hard part of mapping. And the resulting map is likely to be uncomputable given current limits (possibly even theoretically, if the computation includes the substrate on which it’s computing).
I don’t think there’s any logical chain here—just rambling.
The point is that meaningful labor is increasingly “selection effort”, the work involved in making a decision between multiple competing choices, and some starter thoughts about how society can be viewed once you notice the idea of making choices as meaningful labor (maybe even the only meaningful form of labor).
The idea of mapping binary strings to choices is a point that information is equivalent to a codification of a sequence of choices; that is, the process of making choices is in fact the process of creating information. For a choice between N options, the options can be considered a series of binary gates, whose value can be 0 or 1, and thus the choice between those options produces a binary string; information. Or a number, if you prefer to think of it that way. That is, making decisions is an information-producing activity.
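Quantitatively, a choice among N options takes about log2(N) of those binary gates, so resolving it produces about log2(N) bits; picking one option out of 32, for example, generates 5 bits of information.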
I’m not sure if I’m just misunderstanding, or actively disagreeing.
Whether you model something as a tree of binary choices or a lookup table of options doesn’t matter much on this level. The tree is less efficient but easier to modify; that’s a completely different level than the one your post seems to be about, though, and not relevant to whatever you’re trying to show.
But the hard and important point is NOT in making the decision or executing the choice(s) (whether a jump or a sequence of binary). That just does not matter. Actually identifying the options and decisions and figuring out what decisions are POSSIBLE is the only thing that matters.
The FRAMING of decisions is massively information-producing. Making decisions is also information-producing (in that the uncertainty of the future becomes the truth of the past), but isn’t “information labor” in the same way that creating the model is.
What are you calling the “framing” of a decision? Is it something other than a series of decisions about which qualities of that decision’s results you care about?
The “framing” of a decision is the identification that there’s a decision to make, and the enumeration of the set or series of sub-decisions that describe the possible actions.
Suppose for a moment your washing machine is broken.
You have some options; you could ignore the problem. You could try to fix it yourself. You could call somebody to fix it. This isn’t intended to be a comprehensive list of options, mind, these are cached thoughts.
Each of these options in turn produces new choices: what to do instead, what to try in order to fix it, who to call.
Let’s suppose for a moment that you decide to call somebody. Who do you call? You could dial random numbers into your phone, but clearly that’s not a great way of making that decision. You could look up a washing machine repair company on the internet; let’s suppose you do this.
How do you decide which repair company to call? There are reviews—these are choices other people have made about how they liked the service. But before you even get there, what site do you use to get reviews? That’s a choice. Maybe you let Google make that choice for you—you just pick whatever is the first listed site. The search engine is making choices for you; the review site algorithm is making choices for you; the people who posted reviews are making choices for you. Out of a vast space of options, you arrive at only a few.
Notice all the choices other people are making on your behalf in that process, however. You’re not calling a car mechanic to repair your washing machine, yet that is, in fact, an option.
---
Suppose you need to drive to a grocery store in a new city. What choices are you making, and what choices do you ask your cell phone navigation application to make for you? Are you making more or fewer choices than your parents would? What about your grandparents? What is the difference in the kind and quantity of choices being made?
Are there differences in the quality of choices being made? Who benefits from the choices we make now?
“Goal-oriented behavior” is actually pretty complicated, and is not, in fact, a natural byproduct of general AI. I think the kinds of tasks we currently employ computers to do are hiding a lot of this complexity.
Specifically, what we think of as artificial intelligence is distinct from motivational intelligence is distinct from goal-oriented behaviors. Creating an AI that can successfully play any video game is an entirely different technology stack from creating an AI that “wants” to play video games, which in turn is an entirely different technology stack from creating an AI that translates a “desire” to play video games into a sequence of behaviors which can actually do so.
The AI alignment issue is noticing that good motivation is hard to get right; I think this needs to be “motivation is going to be hard to do at all, good or bad”; possibly harder than intelligence itself. Part of the problem with AI writing right now is that the writing is, basically, unintentional. You can get a lot further with unintentional writing, but intentional writing is far beyond anything that exists right now. I think a lot of fears come about because of a belief that motivation can arise spontaneously, or that intentionality can arise out of the programming itself; that we might write our desires into machines such that machines will know desire.
What would it take for GPT-3 to want to run itself? I don’t think we have a handle on that question at all.
Goal-oriented behaviors, meanwhile, correspond to an interaction between motivation and intelligence that itself is immensely more complicated than either independently.
---
I think part of the issue here is that, if you ask why a computer does something, the answer is “Because it was programmed to.” So, to make a program do something, you just program it to do it. Except this is moving the motivation, and intentionality, to the programmer; or, alternatively, to the person pressing the button causing the AI to act.
The AI in a computer game does what it does, because it is a program that is running, that causes things to happen. If it’s a first person shooter, the AI is trying to kill the player. The AI has no notion of killing the player, however; it doesn’t know what it is trying to do, it is just a series of instructions, which are, if you think about it, a set of heuristics that the programmer developed to kill the player.
This doesn’t change if it’s a neural network. AlphaGo is not, in fact, trying to win a game of Go; it is the humans who trained it who have any motivation, AlphaGo itself is just a set of really good heuristics. No matter how good you make those heuristics, AlphaGo will never start trying to win a game, because the idea of the heuristics in question trying to win a game is a category error.
I think, when people make the mental leap from “AI we have now” to “general AI”, they’re underspecifying what it is they are actually thinking about.
AI that can solve a specific, well-defined problem.
AI that can solve a well-defined problem. ← This is general AI; a set of universal problem-solving heuristics satisfies this criterion.
AI that can solve a poorly-defined problem. ← This, I think, is what people are afraid of, for fear that somebody will give it a problem, ask it to solve it, and it will tile the universe in paperclips.
Assuming all marginal economic growth comes from eliminating unnecessary expenses—increasing efficiency—then companies moving from a high-tax locality to a low-tax locality is, in fact, economic growth.
Is it a central example of economic growth, or am I just engaging in a rhetorical exercise?
Well, assume a diverse ecosystem of localities with different taxes and baskets of goods, and assume that we do in fact get something for paying taxes. A company moving from a high-tax locality to a low-tax locality is then effectively moving from a high-cost plan which covers a wide range of bundled goods and services to a low-cost plan which covers a smaller range; that is, insofar as high taxes pay for anything at all, a company moving to a low-tax locality is reducing its consumption of those goods and services. So, assuming that marginal economic growth arises from reducing consumption, and assuming that taxes are purchasing something, it is fair to describe this as an average, that is, central, example of economic growth at the margins.
Whether or not taxes actually buy anything is, of course, another question entirely. Another question is whether such economic growth in this case is coming at the expense of values we’d rather not sacrifice.
Government spending is included in GDP, so GDP will go up some as the company is able to buy and sell more stuff, but down some as the government is less able to buy and sell stuff.
I think that line of argument proves too much; anytime anybody consumes less of a good, the seller has less ability to buy and sell things, while the buyer has more ability to buy and sell things; the government isn’t a special case here. Moreover, the reversal of this argument is just the broken window fallacy with a reversal of the status quo.
Here’s what I understood you to be saying in the OP: that paying less taxes is economic growth because if you pay less taxes, you can produce more for less money. I’m saying that isn’t necessarily true because you’re not accounting for the reduction in economic activity that comes from the government being less able to buy and distribute things. It may well be true that moving to a low-tax locality does cause economic growth, but it won’t always, so I wouldn’t say that it’s a central example of economic growth.
I don’t get what you mean by the analogy to consuming less of a good. Are you trying to say that my response is wrong because it implies that consuming fewer goods doesn’t always increase economic growth, because consuming fewer goods is like paying less taxes? Well, I don’t think that those are all that similar (the benefits you get from living in a locale are mostly funded by how much tax other people pay as well as non-government perks, you could totally move to a lower-tax jurisdiction and get more goods and services), but also it’s totally correct that consuming fewer goods doesn’t always increase economic growth.
I don’t know what you mean by this.
Take two societies. They are exactly identical except in one respect: One has figured out how to manufacture light bulbs using 20% less glass.
Which society is richer?
https://en.wikipedia.org/wiki/Parable_of_the_broken_window for a basic breakdown. When I say your argument is a reversal of the broken window fallacy, I’m saying your argument amounts to the idea that, in a society in which people routinely break windows, and this is a major source of economic activity, people shouldn’t stop breaking windows, on account of all the economic activity it generates.
OK: I think I missed that you’re implying that the cases where companies in fact move to low-tax jurisdictions count as growth, rather than all cases. It makes sense that if you model choice of how much taxes to pay as a choice of how much of some manufacturing input to buy, then companies only do that if it increases efficiency, and my argument above doesn’t make sense taken totally straightforwardly.
I still think you can be wrong for a related reason. Suppose the government spends taxes on things that increase economic growth that no private company would spend money on (e.g. foundational scientific research). Suppose also that that’s all it does with the money: it doesn’t e.g. build useless things, or destroy productive capabilities in other countries. Then moving to a lower tax jurisdiction will make your company more efficient, but will mean that less of the pro-growth stuff governments do happens. This makes the effect on growth neutral. Is this a good model of government? Well, depends on the government, but they really do do some things which I imagine increase growth.
My main objection is that thinking of government as providing services to the people who pay them is a bad model; in other words, it’s a bad idea to think of taxes as paying for a manufacturing input. When you move out of a state, the government probably spends less on the people still in it, and when you move into a new state, you mainly benefit from other people’s taxes, not your own. It’s as if, when you stopped buying glass from a glass company, they made everyone else’s glass worse: in that case it’s less obvious that, if your lightbulb company buys less glass, society will get richer.
Another crackpot physics thing:
My crackpot physics just got about 10% less crackpot. As it transpires, one of the -really weird- things in my physics, which I thought of as a negative dimension, already exists in mathematics—it’s a Riemann Sphere. (Thank you, Pato!)
This “really weird” thing is kind of the underlying topology of the universe in my crackpot physics—I analogized the interaction between this topology and mass once to an infinite series of Matryoshka dolls, where every other doll is “inside out and backwards”. Don’t ask me to explain that; that entire avenue of “attempting to communicate this idea” was a complete and total failure, and it was only after drawing a picture of the topology I had in mind that someone (Pato) observed that I had just drawn a somewhat inaccurate picture of a Riemann Sphere. (I drew it as a disk in which the entire boundary was the same point, 0, with dual infinities coinciding at the origin. I guess, in retrospect, a sphere was a more obvious way of describing that.)
If we consider that the points are not evenly allocated over the surface of the sphere (they’re concentrated at the poles, each of which is simultaneously 0 and infinity; the mapping is ambiguous), and if we draw a line such that it crosses the same number of points with each revolution, we get something like a logarithmic spiral. (Well, it’s a logarithmic spiral in the “disk” interpretation; it’s a spherical spiral whose name I don’t know in the spherical interpretation.)
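For reference, the planar version is the curve r = a*e^(b*θ); and since stereographic projection takes curves of constant bearing on the sphere to logarithmic spirals in the plane, the spherical spiral here may well be a loxodrome (a rhumb line), though I haven’t checked that it satisfies the constant-measure-per-revolution condition.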
If we consider the bundle of lines connecting the poles, and use this constant-measure-per-revolution spiral to describe their path, I think that’s about … 20% of the way to actually converting the insanity in my head into real mathematics. Each of these lines is “distance” (or, alternatively, “time”—it depends on which of the two charts you employ). The bundle of these spirals provides one dimension of rotation; there’s a mathematical way of extracting a second dimension of rotation, to get a three-dimensional space, but I don’t understand it at an intuitive level yet.
A particle’s perspective is “constantly falling into an infinity”; because of the hyperbolic nature of the space, I think a particle always “thinks” it is at the equator—it never actually gets any closer. Because the lines describe a spiral, the particle is “spinning”. Because of the nature of the geometry of the sphere, this spin expresses itself as a spinor, or at least something analogous to one.
Also, apparently, Riemann Spheres are already used in both relativistic vacuum field equations and quantum mechanics. Which, uh, really annoys me, because I’m increasingly certain there is “something” here, and increasingly annoyed that nobody else has apparently just sat down and tried to unify the fields in what, to me, is the most obvious bloody way to unify them; just assume they’re all curvature, that the curvature varies like a decaying sine wave (like “sin(ln(x))/x”, which exhibits exactly the kind of decay I have in mind). Logarithmic decay of frequency over distance ensures that there is a scalar symmetry, as does a linear decay of amplitude over distance.
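(A quick check of that symmetry claim with the stand-in function: if f(x) = sin(ln(x))/x, then f(k*x) = sin(ln(x) + ln(k))/(k*x); rescaling distance by k just shifts the phase by ln(k) and divides the amplitude by k, so the shape of the wave is preserved under changes of scale.)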
Yes, I’m aware of the intuitive geometry involved in an inverse-square law; I swear that the linear decay makes geometric sense too, given the topology in my head. Rotation of a logarithmic spiral gives rise to a linear rescaling relative to the arclength of that rotation. Yes, I’m aware that the inverse-square law also has lots of evidence—but it also has lots of evidence against it, which we’ve attempted to patch by assuming unobserved mass that precisely accounts for the observed anomalies. I posit that the sinusoidal wave in question has ranges wherein the amplitude is decaying approximately linearly, which creates the apparent inverse-square behavior for certain ranges of distances—and because these regions of space are where matter tends to accumulate, having the most stable configurations, they’re disproportionately where all of our observations are made. It’s kind of literally the edge cases, where the inverse-square relationship begins to break down (whether it really does, or apparently does), and the configurations become less stable, that we begin to observe deviations.
I’m still working on mapping my “sin(ln(x))/x” equation (this is not the correct equation, I don’t think, it’s just an equation that kind of looks right for what’s in my head, and it gave me hints about where to start looking) to this structure; there are a few options, but none stand out yet as obviously correct. The spherical logarithmic spiral is a likely candidate, but figuring out the definition of the spiral that maintains constant “measure” with each rotation requires some additional understanding on my part.
What does it mean, for a thing to move?
First, what phenomenon are we even talking about? It’s important to start here. I’m going to start somewhat cavalierly: Motion is a state of affairs in which, if we measure two variables, X and T, where X is the position on some arbitrary dimension relative to some arbitrary point using some arbitrary scale, and T is the position in “time” as measured by a clock (also arbitrary), we can observe that X varies with T.
Notice there are actually two distinct phenomena here: There is the fact that “X” changed, which I am going to talk about. Then there is the fact that “T” changed, which I will talk about later. For now, it is taken as a given that “time passes”. In particular, the value “T” refers to the measurement of a clock whose position is given by X.
Taking “T” as a dimension for the purposes of our discussion here, what this means is that motion is a state of affairs in which a change in position in time additionally creates a change in position in space; that is, motion is a state of affairs in which a spatial dimension and the time dimension are not independent variables. Indeed, special relativity gives us precisely the degree to which they are dependent on one another.
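(Concretely: for a frame moving at velocity v along that spatial dimension, the coordinates mix as x′ = γ(x - vt) and t′ = γ(t - vx/c^2), with γ = 1/sqrt(1 - v^2/c^2); neither coordinate transforms without reference to the other.)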
Consider geometries which are consistent with this behavior; if we hold motion in time as a given—that is, if we assume that the value on the clock will change—then the geometry can be a simple rotation.
However, suppose we don’t hold motion in time as a given. What geometry are we describing then? I think we’re looking at a three-dimensional geometry for this case, rather than a four-dimensional geometry.
Instead of further elaborations on my crackpot nonsense, something short:
I expect that there is some distance from a magnetic source between 10^5 meters and 10^7 meters at which there will be magnetic anomalies; in particular, there will be a phenomenon by which the apparent field strength drops much faster than expected and passes through zero into the negative (reversed polarity).
I specifically expect this to be somewhere in the vicinity of 10^6 meters, although the specific distance will vary with the mass of the object.
There should be a second magnetic anomaly somewhere in the vicinity of 10^12 m (So between 10^11 and 10^13), although I suspect at that distance it will be too faint to detect.
More easily detected: because there is a repulsive field at work at these distances, mass should be scarce at this distance from the dominant “local” masses, a scarcity that should continue up to about 10^18 m (between 10^17 and 10^19, although errors really begin to compound here). At 10^18 m, I expect an unusually dense distribution of matter; this distance, in the vicinity of 10^18 m, should be the most common distance between objects in the galaxy.
It should be possible to find large masses (say, black holes) orbiting each other at, accounting for relativistic changes in distance, 10^18m, which we might otherwise expect to fall into one another—that is, there should be unexplainable orbital mechanics between large masses that are this distance apart.
I expect that there is some radius between 10^22 meters and 10^26 meters (vicinity of 10^24) which marks the largest possible size of a galaxy, and some radius between 10^28 and 10^32 (vicinity of 10^30) which marks the most common distance between galaxies.
Galaxies which are between the vicinity of 10^24 and the vicinity of 10^30 meters from one another should be moving apart, on average; galaxies which are greater than the vicinity of 10^30 meters from one another should be falling into one another on average.
Galaxies which are approximately 10^30 meters apart should be orbiting one another—neither moving towards nor away.