Those are both moral improvements on typical chicken. Another example is mutton: sheep are commonly kept on rocky hillsides that would otherwise go to waste, and commonly have a life that’s about as good as it can get for a sheep, being mostly left alone to live as they would in the wild, except that they are protected from predators and parasites.
One problem with this argument is that to eat chicken or pork, you have to be okay not only with killing animals, but with torturing them as well—there’s no better word for the conditions in which chickens and pigs are typically kept.
Okay, as Swimmer observes, writing can easily be done from home on a random sleep schedule; so can graphics work, and so can creating web comics. There’s plenty of relevant educational material for all of these that doesn’t require attending scheduled classes. And if you don’t bond well with random people, probably the best way to improve your social life is to look for people with whom you have shared interests, which means you might be better off getting the career stuff up and running first; once you do, it will probably lead to encounters with people with whom you have something in common.
I was about to explain why nobody has an answer to the question you asked, when it turned out you already figured it out :) As for what you should actually do, here’s my suggestion:
1. Explain your actual situation and ask for advice.
2. For each piece of advice given, notice that you immediately come up with at least one reason why you can’t follow it.
3. Your natural reaction will be to post those reasons, thereby getting into an argument with the advice givers. You will win this argument, thereby establishing that there is indeed nothing you can do.
4. This is the important bit: don’t do step 3! Instead, work on defeating or bypassing those reasons. If you can’t do this by yourself, go ahead and post the reasons, but always in a frame of “I know this reason can be defeated or bypassed, help me figure out how,” which aligns you with, rather than against, the advice givers.
5. You are allowed to reject some of the given advice, as long as you don’t reject all of it.
Is that true? Surely even on a purely factual matter, it is still the case that he who makes a claim will typically give his best evidence for it; so if the best evidence offered is weak, that still suggests stronger evidence doesn’t exist.
Yes, I think so too.
That is true of the current intermediate technology level. If we manage to develop sufficiently advanced technology that we no longer need to use the Earth to support ourselves, it could change.
When environmentalists propose to turn the whole Earth into a nature reserve, the answer I would like to be able to give is, “Sure, have fun. I’m off to the asteroid belt to find some dead matter to turn into computational substrate. Send me some postcards.”
The problem with question 1 is that it makes the implicit assumption that either we have become extinct or things are more or less okay. I’m confident there will still be people around in, say, a thousand years. But it is the way of extinction that the outcome is often decided long before the death of the last individual, and by factors having nothing to do with its proximate cause. If we fail to take this shot at escaping our boundaries, I’m not at all confident we’ll get another chance. We could end up in a scenario where our species still exists but extinction by ordinary geological processes has become inevitable.
The problem with question 3 is that it makes the implicit assumption that spending money with the stated aim of reducing the risk of extinction will have the effect of reducing that risk rather than increasing it. Both the theory of how human psychology works in far mode and experience with trying to spend money on politically charged good causes suggest the opposite.
You aren’t obliged to agree with me on these points, but since they are the primary issues, if you are producing a document that claims to be a questionnaire, I suggest it would be appropriate to spell out the primary issues as explicit questions rather than leaving them as implicit assumptions.
An elegant puzzle.
Gur fbyhgvba vf rnfvrfg gb frr vs jr pbafvqre n svavgr ahzore bs vgrengvbaf bs gur cebprqher bs fcyvggvat bss n fznyyre znff. Va rnpu vgrengvba, gur yrsgzbfg znff onynaprf gur obbxf ol haqretbvat irel encvq nppryrengvba gb gur evtug. Va gur yvzvg nf gur ahzore bs vgrengvbaf tbrf gb vasvavgl, jr unir na vasvavgrfvzny znff onynapvat gur obbxf jvgu vasvavgryl ynetr nppryrengvba.
You seem to be assigning a high probability to exotic problems (information isn’t preserved by freezing, global apocalypse) and a low probability to mundane problems (you die of Alzheimer’s, cryonics companies go out of business). The reverse seems more likely to me.
The problem with the textbook security advice is that if you follow all of it, it will cost you more than the expected loss from following none of it, and since the people who write it rarely bother to give priority guidance, people end up rationally following none of it. What’s actually needed is a very short list of advice that’s feasible to remember and follow, and which will cost less to follow than its expected benefit.
Here are a couple of suggestions to start with:
If you’re going to use the same password for two dozen random websites, don’t also use that same password for your PayPal account (see the sketch after this list).
Don’t publish your date of birth; it’s an attack vector for identity theft. If a website demands your date of birth and you choose to give the real year, at least change the exact day. (The same goes for other identifying pieces of information, e.g. mother’s maiden name; date of birth is just the one that comes up most often.)
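To make the first suggestion concrete, here’s a minimal sketch of the alternative, my own illustration rather than anything from the original advice (the site names are placeholders): generate a distinct random password for each high-value account with Python’s standard secrets module, and let the throwaway-tier password keep covering the random forums.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def new_password(length=16):
    """Generate a random password; store it in a password manager."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# The two dozen random websites can go on sharing one throwaway password;
# the accounts that can cost you real money each get their own.
high_value = {site: new_password() for site in ("paypal", "bank", "email")}
```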
I would actually think evolution a particularly poor choice.
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space), then you would want to pick one strongly connected in the dependency graph. Heavier-than-air flight, digital computers, nuclear energy, the expansion of the universe, and the genetic code are all good candidates. You can’t discover those without discovering a lot of other things first.
But Aristotle could in principle have figured out evolution. The prior probability of doing so at that early stage may be small, but I’ll still bet evolution has a much larger variance in its discovery time than a lot of other things.
For example, not all large-scale data processing problems parallelize well in the current state of the art. There are problems that would greatly benefit from being able to throw thousands of machines at them, except that attempting to do so hits bottlenecks of an algorithmic nature. Finding better algorithms to get past those bottlenecks would be worth a lot of money to the right people.
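As a toy illustration of the kind of bottleneck I mean (my example, not anything from the original comment): a prefix sum looks inherently sequential, since each output depends on the previous one, yet a different algorithm, the Hillis-Steele scan, trades extra total work for a logarithmic number of rounds whose additions are all independent and could in principle be spread across many machines.

```python
def prefix_sum_sequential(xs):
    """O(n) work, but a chain of n dependent steps: hard to parallelize."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def prefix_sum_scan(xs):
    """Hillis-Steele scan: O(n log n) work, but only O(log n) dependent
    rounds; within a round, every addition is independent."""
    out = list(xs)
    step = 1
    while step < len(out):
        out = [out[i] + (out[i - step] if i >= step else 0)
               for i in range(len(out))]
        step *= 2
    return out

assert prefix_sum_sequential([1, 2, 3, 4]) == prefix_sum_scan([1, 2, 3, 4]) == [1, 3, 6, 10]
```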
Upvoted for having at least some acquaintance with reality. To use the analogy in the post, I would put this into the category of “we should use chemical fuel, maybe in the form of a very large cannon,” by contrast with proposals on the level of psychic levitation or breeding kangaroos for the ability to jump really high.
Honestly?
If you have enough doubt in your mind that it even seems like a faintly good idea to ask for advice before setting out on this path… in my opinion, that constitutes strong evidence that it’s the wrong path for you, and the best time to cancel is before you start.
Sounds reasonable. Might be worth further subdividing into working memory versus long-term memory since they would seem likely to be useful for different things?
Well, take a look at one of the concrete proposals that have been made, say memorizing the Gettysburg Address. Is this a good or a bad idea? I don’t know. Personally I have an explicit policy of delegating that kind of long-term memory to machines; if I were going to try any kind of memory exercise, I’d go for something like dual N-back, which tries to train working memory. Does that mean memorizing the Gettysburg Address is a bad suggestion? Again, I don’t know. It wasn’t presented with any accompanying rationale. For all I know, there’s actually a very good reason why at least some people might want to train long-term memory this way, one that the person who suggested it is seeing and I’m not. It would be easier to assess proposals if they were accompanied by some discussion of how they are intended to meet an objective.
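For concreteness, here is a toy sketch of my own of the check at the heart of an N-back task (the real dual N-back exercise runs two such streams at once, one auditory and one spatial): the trainee watches a stream of stimuli and signals whenever the current item matches the one N steps back.

```python
import random

def n_back_targets(stimuli, n):
    """Positions where the current stimulus matches the one n steps back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

stream = [random.choice("ABC") for _ in range(20)]
print("stream:", "".join(stream))
print("2-back matches at:", n_back_targets(stream, 2))
```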
Cosmic goals, no—those being far-mode beliefs, if anything they are likely to be counterproductive. But it seems to me that leveling up is something to be done in the service of game objectives, not vice versa. For example, consider the following strategies:
1. “I’m trying to optimize for lifespan. The available evidence says exercise is beneficial for that, so I will set out a program of going to the gym on a permanent, sustainable basis, as well as cutting down on calorie intake and driving, and aim within the next decade to move to a region where cryonics is available. To help me achieve these goals, I will design a metric to measure progress toward them.”
2. “Everybody seems to think you should go to the gym, so I guess I’ll go to the gym.”
I think the former strategy is, apart from anything else, more likely to have you still going to the gym in six months’ time. -shrug- It seems most people disagree, based on the vote total; so be it.
I’m neither agreeing nor disagreeing—frankly, I have no statistical data on what most people who are interested in self-improvement are striving towards. What I’m saying is that if you are interested in something else, it might be better to figure out why, and just what that something else is, and why you are trying to come up with a metric; the answers to those questions might help you figure out what the metric should be.
The quotes are correct in the sense that “P implies P” is correct; that is, the authors postulate the existence of an entity constructed in a certain way so as to have certain properties, then argue that it would indeed have those properties. True, but not necessarily consequential, as there is no compelling reason to believe in the future existence of an entity constructed in that way in the first place. Most humans aren’t like that, after all, and neither are existing or in-development AI programs; nor is it a matter of lacking “intelligence” considered as a scalar quantity, as there is no tendency for the more capable AI programs to be constructed more along the postulated lines (if anything, arguably the reverse).