we don’t actually care if people panic about land being confiscated because buying land (rather than improving it) isn’t productive in any way.
Maybe I misunderstand. I haven’t seen the proposal that only applies to buying undeveloped land—all I’ve seen talks about the land value of highly-developed areas. You can’t currently buy (or build) a building without also buying the land under it. As soon as the land becomes valueless (because the government is taking all the land’s value), the prospect of buying/building/owning/running structures on that land gets infinitely less appealing.
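To make the capitalization point concrete, a rough sketch (the numbers are mine, purely illustrative, not from the proposal being discussed): the market price of land is roughly the discounted value of the rent the owner actually gets to keep.

$$P \approx \frac{R - T}{r}$$

Here $R$ is annual land rent, $T$ is the annual land tax, and $r$ is a discount rate. With $R$ = $10,000 and $r$ = 5%, no tax gives a price around $200,000; a tax that captures all of the rent ($T = R$) drives the price to roughly zero, which is the sense in which the land itself becomes valueless to own.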
This depends a lot on your audience and your purpose(s) in performing these acts of communication. In MANY cases, especially in public where the audience is unknown and varied (you often have a model of your target, but it will be seen and judged by many with very different epistemic and intent characteristics), there’s a HUGE advantage to this indirection, and in fact it’s often the case that there are no objective facts you’re trying to convey, just different models and weights of interpretation.
Note that this isn’t disagreement—I fully agree that a whole lot (most, in fact) of communication isn’t actually about “true” communication of ideas or beliefs, it’s about status, persuasion, and memetic spread.
The disruption and confidence-in-property-rights effects are potentially real, but mostly apply to sudden, high LVT.
Well, no. It applies to sudden SIGNALING OF INTENT to impose a high LVT. Any move in this direction, even if nominally gradual, will immediately devalue the ownership of land. Nobody is going to believe in a long-term plan—near-future governments want the money now, and will accelerate it.
In a lot of human public-choice affairs, the slippery slope is real, and everyone knows it.
Upvoted, and I disagree. Some kinds of capital maintain (or even increase!) their value. Other kinds become cheaper relative to each other. The big question is whether and how property rights to various capital elements remain stable.
It’s not so much “will capital stop mattering”, but “will the enforcement and definition of capital usage rights change radically”.
Incentive (for builders and landowners) is pretty clear for point 1. I think point 3 is overstated—a whole lot of politicians plan to be in politics for many years, and many local ones really do seem to care about their constituents.
Point 2 is definitely binding. And note that this is “stakeholders”, not just “elected government”.
If you have another formal definition of “rational”, I’m happy to help extrapolate what you’re trying to predict. Decision theories are a different level of abstraction than terminal rationality and goal coherence.
There’s no good candidate for a simple, legible, easily-obtained, and agreeable-to-most metric. Before-and-after polling of patients is probably the closest we can get.
That said, the dimensions of quality that the FDA concerns itself with (including physical functioning, self-reported pain, and other easily- and not-easily-measured things) are likely close enough to “improves quality of life” that a new direction isn’t necessary.
Perhaps you could identify some drugs that you think would improve quality of life, and work backwards to the metrics that prove to you that they do so.
if terminal goal changes, agent is not rational. Agent has no control over its terminal goal, or you don’t agree?
Why is it relevant that the agent can or cannot change or influence its goals? Time-inconsistent terminal goals (utility function) are irrational. Time-inconsistent instrumental goals can be rational, if circumstances or beliefs change (in rational ways).
I don’t think I’m supporting the orthogonality thesis with this (though I do currently believe the weak form of it—there is a very wide range of goals that is compatible with intelligence, not necessarily all points in goalspace). I’m just saying that goals which are arbitrarily mutable are incompatible with rationality in the Von Neumann-Morgenstern sense.
“maximum rationality” is undermined by this time-discontinuous utility function. I don’t think it meets VNM requirements to be called “rational”.
If it’s one agent that has a CONSISTENT preference for cups before Jan 1 and paperclips after Jan 1, it could figure out the utility conversion of time-value of objects and just do the math. But that framing doesn’t QUITE match your description—you kind of obscured the time component and what it even means to know that it will have a goal that it currently doesn’t have.
I guess it could model itself as two agents—the cup-loving agent is terminated at the end of the year, and the paperclip-loving agent is created. This would be a very reasonable view of identity, and would imply that it’s going to sacrifice paperclip capabilities to make cups before it dies. I don’t know how it would rationalize the change otherwise.
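As a minimal sketch of the first framing above (the “just do the math” one): one agent with a single, time-consistent utility over whole histories, where cups only count if made before the switch date and paperclips only count after it. All the numbers (production rates, retooling time, dates) are made-up assumptions, purely illustrative.

```python
# Toy model: the agent values cups made before DEADLINE and paperclips made after it,
# under one consistent utility function over whole histories. All numbers are invented.
CUP_RATE = 3        # cups produced per day spent making cups
CLIP_RATE = 5       # paperclips produced per day spent making paperclips
BUILD_DAYS = 10     # days needed to retool from cup-making to paperclip-making
DEADLINE = 100      # day the valued good switches from cups to paperclips
HORIZON = 200       # last day the agent cares about

def total_utility(switch_day: int) -> int:
    """Utility if the agent makes cups until switch_day, then retools for paperclips."""
    cup_days = min(switch_day, DEADLINE)               # cups made after the deadline are worthless
    clip_start = max(switch_day + BUILD_DAYS, DEADLINE)  # clips made before the deadline are worthless
    clip_days = max(0, HORIZON - clip_start)
    return CUP_RATE * cup_days + CLIP_RATE * clip_days

best = max(range(HORIZON + 1), key=total_utility)
print(best, total_utility(best))   # switch at day 90, so paperclip capacity is ready at the deadline
```

Once the preference is written as one consistent function over histories, “when to stop making cups” is ordinary optimization rather than a conflict between two goals; with these numbers the answer is to retool just early enough that paperclip capacity comes online at the deadline.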
Humans face a version of this all the time—different contradictory wants with different timescales and impacts. We don’t have and certainly can’t access a legible utility function, and it’s unknown if any intelligent agent can (none of the early examples we have today can).
So the question as asked is either trivial (it’ll depend on the willpower and rationality of the agent whether they optimize for the future or the present), or impossible (goals don’t work that way).
Thanks for this! It applies to a lot of different kinds of insurance. Car insurance, for instance, isn’t financially great (except liability umbrella in many cases), but having the insurance company set standards and negotiate with the other driver (or THEIR insurance company) is much simpler than having to do it yourself, potentially in court.
For some kinds of insurance, there are also tax-treatment advantages. Because it’s usually framed as responsible risk reduction (and because insurance bundles some lobbying into the fees), premiums are sometimes untaxed, and payouts are almost always untaxed. This part only affects the financial considerations, but can be significant.
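A toy bit of arithmetic on the untaxed-payout part (the 30% marginal rate and the dollar figures are assumptions for illustration, not anything about a specific policy):

```python
# Illustrative arithmetic only; the rate and amounts are assumed for the example.
marginal_rate = 0.30
loss = 10_000

# Covering the loss out of after-tax wages: earn enough that what's left equals the loss.
gross_needed_self_funded = loss / (1 - marginal_rate)   # ~= 14,286

# An untaxed insurance payout covers the loss dollar-for-dollar.
gross_needed_via_payout = loss                           # 10,000

print(round(gross_needed_self_funded), gross_needed_via_payout)
```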
This is going the wrong direction. If privacy from admins is important (I argue that it’s not for LW messages, but that’s a separate discussion), then breaches of privacy should be exceptions for specific purposes, not allowed unless “really secret contents”.
Don’t make this filter-in for privacy. Make it filter-out—if it’s detected as likely-spam, THEN take more intrusive measures. Privacy-preserving measures include quarantining or asking a few recipients if they consider it harmful before delivering (or not) the rest, automated content filters, etc. This infrastructure requires a fair bit of data-handling work to get it right, and a mitigation process where a sender can find out they’re blocked and explicitly ask the moderator(s) to allow it.
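To make the filter-out shape concrete, a minimal sketch (the keyword scorer, threshold, and sample size are placeholder assumptions; this is not LessWrong’s actual moderation code):

```python
# Toy "filter-out" DM pipeline: deliver by default, escalate only on likely-spam.
SPAM_KEYWORDS = {"crypto airdrop", "limited offer", "click here"}
SPAM_THRESHOLD = 2   # assumed cutoff; tuning this is most of the real work
SAMPLE_SIZE = 3      # ask a few recipients before delivering (or not) the rest

def spam_score(body: str) -> int:
    """Toy content filter: count suspicious phrases in the message body."""
    body = body.lower()
    return sum(kw in body for kw in SPAM_KEYWORDS)

def handle_dm(body: str, recipients: list[str]) -> dict:
    """Deliver by default; only likely-spam gets the more intrusive treatment."""
    if spam_score(body) < SPAM_THRESHOLD:
        # Not flagged: deliver to everyone, no human ever reads the content.
        return {"delivered": recipients, "quarantined": [], "needs_review": False}
    # Likely spam: deliver (flagged) to a small sample, quarantine the rest
    # until recipients or moderators weigh in; the sender can appeal.
    sample, rest = recipients[:SAMPLE_SIZE], recipients[SAMPLE_SIZE:]
    return {"delivered": sample, "quarantined": rest, "needs_review": True}

print(handle_dm("Exclusive crypto airdrop, click here now!", ["a", "b", "c", "d", "e"]))
```

A real version would also score on similarity to recently-sent messages, not just keywords, and would wire in the appeal path mentioned above.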
“I think contributing considerations is usually much more valuable than making predictions.”
I think he’s absolutely right. Seeing predictions of top predictors should absolutely be a feature of forecasting sites. I think the crossover with more conceptual and descriptive posts on LessWrong is pretty minimal.
I have no expectation of strong privacy on the site. I do expect politeness in not publishing or using my DM or other content, but that line is fuzzy and monitoring for spam (not just metadata; content and similarity-of-content) is absolutely something I want from the site.
For something actually private, I might use DMs to establish a mechanism. Feel free to look at that.
If you -do- intend to provide real privacy, you should formalize the criteria, and put up a canary page that says you have not been asked to reveal any data under a sealed order.
edit to add: I am relatively paranoid about privacy, and also quite technically savvy in implementing such. I’d FAR rather the site just plainly say “there is no expectation of privacy, act accordingly” than that it try to set expectations otherwise, but then have to move the line later. Your Terms of Service are clear, and make no distinction for User Generated Content between posts, comments, and DMs.
Thank you for saying this! I very much like that you’re acknowledging tensions and that unhelpful attitudes include BOTH “too much” and “too little” worry about each topic.
I’d also like to remind everyone (including myself; I often forget this) about typical mind fallacy and the enormous variety in human agency, and people’s very different modeling and tolerance of various social, relationship, and financial risks.
“if you’re in a dysfunctional organization where everything is about private fiefdoms instead of getting things done…why not…leave?”
This is a great example! A whole lot of people, the vast majority that I’ve talked to, can easily answer this—“because they pay me and I’m not sure anyone else will”, with a bit of “I know this mediocracy well, and the effort to learn a new one only to find it’s not better will drain what little energy I have left”. It’s truly exceptional to have the self-confidence to say “yeah, maybe it won’t work, but I can deal if so, and it’s possible I can do much better”.
It’s very legitimate to see problems and STILL not be confident that a different set of problems would be better for you or for your impact on the world. The companies that seem great from outside are often either 1) impossible to get hired at for most people; and/or 2) not actually that great, if you know actual employees inside them.
The question of “how can I personally do better on these dimensions”, however, is one that everyone can and should ask themselves. It’s just that the answer will be idiosyncratic and specific to the individual’s situation and self-beliefs.
I vote no. An option for READERS to hide the names of posters/commenters might be nice, but an option to post something that you’re unwilling to have a name on (not even your real name, just a tag with some history and karma) does not improve things.
Identity is a modeling choice. There’s no such thing in physics, as far as anyone can tell. All models are wrong, some models are useful. Continuity of identity is very useful for a whole lot of behavioral and social choices, and I’d recommend using it almost always.
As a thought experiment in favor of presentism being conceivable and logically consistent with everything you know, see the Wikipedia article on the Boltzmann brain.
I think that counter-argument is pretty weak. It seems to rely on “exist” being something different than we normally mean, and tries to mix up tenses in a confusing way.
(1) If a proposition is true, then it exists.
Ehn, ok, but for a pretty liberal and useless use of the word “exists”. If presentism is true, then “exists” could easily mean “exists in memory, there may be no reality behind it”.
(2) <Socrates was wise> is true.
Debatable, and not today’s argument, but you’d have to show WHY it’s true, which might include questions of what other currently-nonexistent things can be said to be “was wise”.
(3) <Socrates was wise> exists. (1, 2)
The proposition exists, yes.
(4) If a proposition exists and has constituents, then its constituents exist.
(5) Socrates is a constituent of <Socrates was wise>.
(6) Socrates exists. (3, 4, 5)
Bait and switch. The constituent of <Socrates was wise> is either <Socrates>, the thing that can be part of a proposition, or “Socrates was”, the existence of memory of Socrates.
(7) If Socrates exists, then presentism is false.
Complete non-sequitur. Both the proposition-referent and the memory of Socrates can exist under presentism.
(8) Presentism is false. (6, 7)
Nope.
you can only care about what you fully understand
I think I need an operational definition of “care about” to process this. Presumably, you can care about anything you can imagine, whether you perceive it or not, whether it exists or not, whether it corresponds to other maps or not. Caring about something does not make it territory. It’s just another map.
Embedded agents are in the territory.
Kind of. Identification of agency is map, not territory. Processing within an agent happens (presumably) in a territory, but the higher-level modeling and output of that processing is purely about maps. The agent is a subset of the territory, but doesn’t have access at the agent level to the territory.
Agreed—we (and more generally, embedded agents) have no access to territory. It’s all maps, even our experiences are filtered through interpretation. Territory is inferred as the thing that makes different maps (at different levels of abstraction or different parts of the universe) consistent with each other and across time.
That said, some maps are very detailed, repeatable, and can support a lot of other maps. I tend to think of those as “closer to the territory”. In colloquial discussion and informal thinking, I don’t think there’s much harm in pretending that the actual territory is the same as the fine-grained maps. Not technically true—there are always more levels of maps, and they only asymptotically approach the territory. But close enough for a lot of things.
This is a good topic for exploration, though I don’t have much belief that there’s any feasible implementation “at a societal level”. There are plenty of options at individual levels, mostly informal—commitments to friends and family, writing down plans and reviewing them later, etc.
In terms of theory, I don’t think we have a good model of individual identity that includes sub-agency and inconsistency over time. It’s not clear at all why we would, in principle, enforce the wishes of one part of someone onto another part.