What about artists who think that reducing things to their bare essentials is the essence of art? Or styles like, broadly speaking, anime (or caricatures in general), which are based on emphasising certain basic forms? Or writers like Eric Hoffer: “Wordiness is a sickness of American writing. Too many words dilute and blur ideas. [...] If you have nothing to say and want badly to say it, then all the words in all the dictionaries will not suffice.”?
Estarlio
Setting a price isn’t necessarily a decision made with respect to the interests of one company. Not knowing precisely how the marketing groups for medical goods in the US are set up, beyond the fact that they’re pretty abusive, I don’t care to argue that one way or the other, though.
It depends on the price elasticity of demand. If you widen access to the thing by lowering the price, you might make more total profit than someone who has fewer customers, even if they make far more profit per customer.
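As a quick sketch of that point, with entirely hypothetical numbers: when demand is elastic enough, the lower-margin, higher-volume seller comes out ahead.

```python
# Toy illustration (hypothetical numbers): with sufficiently elastic
# demand, a lower price can yield more total profit despite a lower
# margin per customer.
def profit(price, unit_cost, customers):
    """Total profit: per-unit margin times number of customers."""
    return (price - unit_cost) * customers

# Suppose halving the price triples the customer base (elasticity > 1).
high_margin = profit(price=100.0, unit_cost=20.0, customers=1_000)  # 80,000
high_volume = profit(price=50.0, unit_cost=20.0, customers=3_000)   # 90,000

print(high_margin, high_volume)  # the cheaper seller earns more overall
```

Whether real demand for a given medical good is that elastic is, of course, exactly the empirical question being left open here.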
Then, to extend the analogy: Imagine that digging has potentially negative utility as well as positive. I claim to have buried both a large number of nukes and a magical wand in the garden.
In order to motivate you to dig, you probably want some evidence of magical wands. In this context that would probably be recursively improving systems where, occasionally, local variations rapidly acquire super-dominance over their contemporaries when they reach some critical value. Evolution probably qualifies there—other bipedal frames with fingers aren’t particularly dominant over other creatures in the same way that we are, but at some point we got smart enough to make weapons (note that I’m not saying that was what intelligence was for though) and from then on, by comparison to all other macroscopic land-dwelling forms of life, we may as well have been god.
And since then that initial edge in dominance has only ever allowed us to become more dominant. Creatures afraid of wild animals are not able to create societies with guns and nuclear weapons—you’d never have the stability for long enough.
In order to motivate you not to dig, you probably want some evidence of nukes. In this context that means recursive systems (“improving” isn’t quite the right word here), systems with a feedback loop, that create large amounts of negative value. To a certain extent that’s a matter of perspective: from the perspective of extinct species, the ascendancy of humanity would probably not be anything to cheer about, if they were in a position to appreciate it. But I suspect it can at least stand on its own that failure cascades are easier to create than success cascades. One little thing goes wrong on your rocket and the error multiplies; a small misalignment rapidly becomes a bigger one; the timer on your Patriot battery loses a fraction of a second each tick, and over time your estimate of where the incoming missiles are drifts off significantly. It’s only with significant effort that we build systems in which errors don’t multiply.
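The Patriot case makes the multiplication concrete. Using the figures commonly cited from the analysis of the 1991 Dhahran failure (a 24-bit fixed-point register truncating 0.1 s loses roughly 0.000000095 s per tick), a tiny per-tick error compounds into hundreds of metres of tracking error:

```python
# Back-of-the-envelope arithmetic for the Patriot clock-drift cascade.
# Figures are from the widely cited account of the 1991 Dhahran incident;
# treat them as approximate.
TRUNCATION_ERROR_PER_TICK = 0.000000095  # seconds lost per 0.1 s tick
TICKS_PER_SECOND = 10

def clock_drift(hours_running):
    """Accumulated timing error after running continuously for this long."""
    ticks = hours_running * 3600 * TICKS_PER_SECOND
    return ticks * TRUNCATION_ERROR_PER_TICK

drift = clock_drift(100)            # ~0.34 s after 100 hours of uptime
scud_speed = 1676                   # m/s, approximate
print(drift, drift * scud_speed)    # ~0.34 s, so a ~570 m tracking error
```

A per-tick error invisible at human scale becomes, after a hundred hours, an error larger than the target.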
(This is analogous to altering your expected value of information—like if earlier you’d said you didn’t want to dig and I’d said, ‘well there’s a million bucks there’ instead—you’d probably want some evidence that I had a million bucks, but given such evidence the information you’d gain from digging would be worth more.)
This seems fairly closely analogous to Eliezer’s claims about AI, at least if I’ve understood them correctly: that we have to hit an extremely small target, and it’s more likely that we’ll blow ourselves to itty-bitty pieces, or cover the universe in paperclips, if we’re just fooling around hoping to hit it by chance.
If you believe that such is the case, then the only people you’re going to want looking for that magic wand—if you let anyone do it at all—are specialists with particle detectors—indeed if your garden is in the middle of a city you’ll probably make it illegal for kids to play around anywhere near the potential bomb site.
Now, we may argue over quite how strongly we have to believe in the possible existence of magitech nukes to justify the cost of fencing off the garden. Personally, I think the statement:
if you take a thorough look at actually existing creatures, it’s not clear that smarter creatures have any tendency to increase their intelligence.
is to constrain what you’ll accept as potential evidence pretty dramatically. We’re talking about systems in general, not just individual people, and recursively improving systems with high asymptotes relative to their contemporaries have happened before.
It’s not clear to me that the second claim he makes is even particularly meaningful:
In the real-world, self-reinforcing processes eventually asymptote. So even if smarter creatures were able to repeatedly increase their own intelligence, we should expect the incremental increases to get smaller and smaller over time, not skyrocket to infinity.
Sure, I think that they probably won’t go to infinity—but I don’t see any reason to suspect that they won’t converge on a much higher value than our own native ability. Pretty much all of our systems do, from calculators to cars.
We can even argue over how you separate true claims that something’s going to foom from false ones. (I’d suggest, initially, just seeing how many claims that something was going to foom have actually been made within the domain of technological artefacts; the baseline credibility may be higher than we think.) But that’s a body of research that Caplan, as far as I’m aware, hasn’t put forward. It’s not clear to me that it’s a body of research of the same order of difficulty as creating an actual AI, either. And, in its absence, it’s not clear to me that answering, in effect, “I’ll believe it when I see the mushroom cloud” is a particularly rational response.
Ah, I think I follow you.
Absence of evidence isn’t necessarily a weak kind of evidence.
If I tell you there’s a dragon sitting on my head, and you don’t see a dragon sitting on my head, then you can be fairly sure there’s not a dragon on my head.
On the other hand, if I tell you I’ve buried a coin somewhere in my magical 1cm-deep garden, and you dig a random hole and don’t find it, not finding the coin isn’t strong evidence that I haven’t buried one. However, there’s so much potential weak evidence against. If you’ve dug up all but a 1cm square of my garden, the coin’s either in that square or I’m telling porkies, and what are the odds that, digging randomly, you wouldn’t have come across it by then? You can be fairly sure, even before digging up that last square, that I’m fibbing.
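The garden scenario can be run through Bayes’ theorem directly. A minimal sketch, with illustrative numbers (a 50/50 prior and a 10,000-square garden are my assumptions, not part of the original scenario):

```python
# Each empty hole is only weak evidence, but the evidence accumulates.
def posterior_buried(prior, squares_total, squares_dug_empty):
    """P(coin is buried | this many randomly dug squares were all empty),
    assuming the coin, if buried, is uniform over the squares."""
    p_empty_if_buried = (squares_total - squares_dug_empty) / squares_total
    numerator = prior * p_empty_if_buried
    # If no coin was buried, every hole comes up empty with probability 1.
    return numerator / (numerator + (1 - prior))

# Start at 50/50 on the claim, with a garden of 10,000 1cm squares.
print(posterior_buried(0.5, 10_000, 1))      # one empty hole: barely moves
print(posterior_buried(0.5, 10_000, 9_999))  # all but one square: claim nearly dead
```

Each individual hole shifts the posterior by almost nothing; dug together, they leave the claim with next to no credibility before the last square is touched.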
Was what you meant analogous to one of those scenarios?
“The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.”
Nietzsche, Morgenröte. Gedanken über die moralischen Vorurteile (Daybreak: Thoughts on the Prejudices of Morality)
when there is no possibility of evidence against a proposition, then a possibility of evidence in favour of the proposition would violate Bayes’ theorem.
I’m not sure how you could have such a situation, given that absence of expected evidence is evidence of the absence. Do you have an example?
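The constraint itself is easy to exhibit numerically: the prior must equal the expectation of the posterior, so if some observation would raise your probability, its absence must lower it. A minimal check, with arbitrary illustrative numbers:

```python
# Conservation of expected evidence, checked numerically.
def posterior(prior, p_e_given_h, p_e_given_not_h, observed):
    """P(H | E) if observed, else P(H | not E), via Bayes' theorem."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    if observed:
        return prior * p_e_given_h / p_e
    return prior * (1 - p_e_given_h) / (1 - p_e)

prior = 0.3
up = posterior(prior, 0.9, 0.2, observed=True)     # seeing E raises P(H)
down = posterior(prior, 0.9, 0.2, observed=False)  # not seeing E lowers it

# The prior is exactly the probability-weighted average of the posteriors.
p_e = prior * 0.9 + (1 - prior) * 0.2
assert abs(p_e * up + (1 - p_e) * down - prior) < 1e-12
print(up, down)
```

So a proposition that could only ever gain probability from observation would indeed be inconsistent: the weighted average of its posteriors couldn’t equal its prior.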
I don’t see why you’d think it faulty to mention the possibilities there—remember I’m not claiming that they’re true, just that they might be potential explanations for the suggested observation.
If you want to share the reason for the downvote, I promise not to dispute it (so you don’t have to worry about it turning into a time sink) and to give positive karma.
Is it some logical argument she is unable to find a fault in? If so, then how come there are multiple schools of philosophy disagreeing on the basics?
Maybe their level of logic is just low, or they have bad thought habits in applying that logic. Or maybe there’s some system-level reward for not agreeing (I imagine that publish||die might have such an effect).
I don’t disagree with you on any particular point there. However, the quote I was responding to wasn’t, as I see it, attempting to explore the cost/benefit of raising minimum wage or subsidising the future of children. It was stating that they just shouldn’t have kids—and in that much represented an effective blank cheque. That seems the opposite of your, much more nuanced, approach; bound by implications of fact and reason that are going to be specific to particular issues and cases and thus can’t be generalised in the same way.
Upvoted.
It is true that a woman in such a situation would be well advised to arm herself. However, a complaint about being raped (personal emotional traumas aside) would be a complaint about the necessity of doing so as much as anything else. The response that she should’ve armed herself, then, doesn’t address the real meat of the issue: what sort of society we live in, how we want to relate to one another, whether we’re to respond with compassion or dismissive brutalism (or where on that scale).
There are things that are the result of natural laws: if you jump off a building with no precautions, you’re probably going to go splat. It makes limited sense to interpret complaints about those as complaints about the laws of physics. So the balance in those cases swings more towards preventative advice, in a way that’s rarely the case with harms that result from human action.
This isn’t rational. It’s just elitist snobbery. You can use the exact same structure of argument with respect to anything:
Aw, you got raped? Well who told you to go into a room with your friend without a handgun on you? Didn’t you know you should be prepared to kill every man around you in case they turn on you?
Structurally identical.
It’s an ideology of knives in the dark, the screams of the dying and enslaved, and the blood red light of fire on steel. Those who honestly endorse its underlying principles would just as happily endorse any barbarism on the strength of the defeated’s inability to escape it, provided it went on at some suitable distance from them.
Why not be honest and sum up the only real thing it says? Vae victis.
Why are extremism and fanaticism correlated? In a world of Bayesians, there’d be a negative correlation. People would hold extreme views lightly, for at least three reasons. [...]
For fairness’ sake.
Strategic nuclear weapons—the original and most widespread nuclear weapons—cannot be used with restraint.
They can. One of the problems America faced going into the ’80s was that its ICBM force was becoming vulnerable to a potential surprise attack by the USSR. This concerned them because only the ICBM force, at the time, had the accuracy necessary to take out hardened targets in a limited strike, such as their opponent’s strategic forces. And they were understandably reluctant to rely on systems that could only be used for city-busting, i.e. the submarine force.
If you’re interested in this, I suggest the documentary First Strike, which is contemporary with that problem.
I’ve always wondered why, on discovering nuclear weapons, the leaders of America didn’t continually pour a huge budget into them, stockpile a sufficient number, and then destroy all their potential peers.
I can’t think of any explanation other than the morality in their culture. They could certainly have secured sufficient material for the task.
Because it’s really really useful?
What’s an example from your own life where building human capital and signaling quality to colleges have come into conflict? How did you resolve the conflict? Do you think you made the right choice? Is there anything you would have done differently?
When I planned to apply for university I had to find somewhere that would let me take A Levels, since the cost of A Levels outside of school was prohibitive. Anyway, I eventually found a school that would let me do it. Naturally, however, the requirement to be in school from 8:30 to 15:00 left me far less time than I’d normally have to learn and pursue my own interests, and the environment was utter hell if you were trying to learn in your free time. They didn’t even really have a library you could go to if you wanted somewhere quiet to think; they had a small room with books that backed onto an open-plan classroom, but it wasn’t really suitable.
I resolved this by convincing the school to change their registration procedures for the sixth form, so that people could sign themselves in and out of school when they weren’t meant to be in class.
Do I think I made the right choice? Well, it wasn’t a bad choice. But there were better ones: I’ve since learned that some universities let you join without pre-existing qualifications if you do a bit of hoop-jumping, and I’d probably do that if I were doing it over again. Given what I knew then, though, I don’t think I could have done much better.
IIRC they decided not to use chemical weapons because they were under the impression that the Allies had developed comparable capabilities.
Foundations matter. Always and forever. Regardless of domain. Even if you meticulously plug all abstraction leaks, the lowest-level concepts on which a system is built will mercilessly limit the heights to which its high-level “payload” can rise. For it is the bedrock abstractions of a system which create its overall flavor. They are the ultimate constraints on the range of thinkable thoughts for designer and user alike. Ideas which flow naturally out of the bedrock abstractions will be thought of as trivial, and will be deemed useful and necessary. Those which do not will be dismissed as impractical frills — or will vanish from the intellectual landscape entirely. Line by line, the electronic shanty town grows. Mere difficulties harden into hard limits. The merely arduous turns into the impossible, and then finally into the unthinkable.
[...]
The ancient Romans could not know that their number system got in the way of developing reasonably efficient methods of arithmetic calculation, and they knew nothing of the kind of technological paths (i.e. deep-water navigation) which were thus closed to them.
Stanislav http://www.loper-os.org/?p=448
Yeah, I know. It’s just not clear that you have to love complexity and not like reductionism to get art. It’s not A <-> B.
If it’s not A <-> B then at most it’s A → B, and even that seems sketchy. Lots of people love spouting, sketching, or whatever, complex nonsense without doing anything I’d describe as art.
Of course, it’d help in this situation to be able to point at art. But the whole thought seems very muddled and imprecise, and the issue seems far from the blank assertion it’s presented as.
No.