Stress-resistant habits are something that has been on my mind recently.
Take dietary habits, for example. I think the results are pretty conclusive by this point that intermittent fasting (IF) or periodic long-term fasting provides benefits similar to a very/relatively healthy diet in terms of insulin response, tissue regeneration and maintaining a “healthy” insulin/glucagon/leptin balance throughout the day.
However, the main advantage they have over a diet is that they are much less fragile. I can choose to follow a relatively empirically & scientifically proven diet (think for example low sucrose, low fructose, loads of nuts and fish, various vegetables that play well with the digestive system), and it might provide the same benefits as periodic fasting, but it will be much more subject to external circumstance:
Not enough money to buy salmon from a low-mercury source? Tough luck.
Decided to have a few beers one night? You’ll probably feel like shit the next day (as opposed to someone who drinks more often, whose liver can readily mobilize alcohol dehydrogenase production).
Something sucks and you decided to drown your sorrows in chocolate? Welp, things are going to get worse now.
IF (or periodic longer-term fasting) has the advantage of being easy to follow and not having any major downsides when you break the habit for a few days or even weeks.
I think the same is true for other habits as well. About 8 months ago, I started training myself to only use a 15″ laptop, a keyboard and an ergonomic mouse. I think I might have been a bit more productive with 2x 4k screens… but after sufficient re-training I don’t miss it much. I do, however, notice an incredible difference when traveling: it allows me much more flexibility as to where I can get work/writing done.
I think a similar view can be taken of stuff like playing music, to some extent. The biggest reason why I prefer a multi-effects modeling preamp versus a tube amp plus an array of 6 to 12 pedals is that I can put the preamp in my backpack. I don’t have to perform a gear-moving ceremony every time I want to play somewhere that’s not my home.
Not to mention stuff like exercise, where things like swimming are certainly loads of fun, weight lifting is certainly very efficient, jogging can be socially-oriented and provide a breath of fresh air, but calisthenics rule in terms of being able to do them anywhere, anytime, with minimal amounts of effort.
I think almost by nature most habits that stick with people fall into this bucket. Because you can only have so many fragile habits until some external stress comes along and you are forced to drop them. But paradoxically, people will probably tend to talk about fragile habits more, since it’s easy to put the easy-to-follow ones in the background and not think of them as such.
Going to manage stress is about as robust to stress as you can get.
I agree, but I think this is using a narrower definition of “stress” than the one I prescribe in the article.
The way I see it, getting a load of new work which you take on because it’s fun is also stress.
Traveling to a new country is “stress”, but it’s fun.
The actual practical example of “stress” I had in mind is changing countries a lot, since I tend to do that (voluntarily, because I find it fun overall), but then I thought it applies to other types of stress as well (e.g. death of a friend/pet, shitty weather, physical injury, having to go to driving school, losing a job… etc).
Physical performance is one thing that isn’t really “needed” in any sense of the word for most people.
For most people, the need for physical activity seems to boil down to the fact that you just feel better, live longer and overall get less health related issues if you do it.
But on the whole, I’ve seen very little proof that excelling in physical activity can help you with anything (other than being a professional athlete or trainer, that is). Indeed, it seems that the whole relation to mortality basically breaks down if you look at top performers. Going from things like strongman competitions and American football where life expectancy is lower, to things like running and cycling where some would argue the same but evidence is lacking, to football and tennis where it’s a bit above average.
But it’s basically a bloody book, I personally haven’t read all of it, but I often go back to it for references.
Also, there’s the much more obvious problem with pushing yourself to the limits: injury. I think this is hard to quantify and there are few studies looking at it. In my experience I know a surprising number of “active” people that got injured in life-altering ways from things like skating, skiing, snowboarding and even football (not in the paraplegic sense, more in the “I have a bar of titanium going through my spine and I can’t lift more than 15kg safely” sort of way). Conversely, 100% of my couch-dwelling buddies in average physical shape don’t seem to suffer from any chronic pain.
To some extent, this annoys me, though I wonder if poor studies and anecdotal evidence are enough to warrant that annoyance.
For example, I frequent a climbing gym. Now, if you look at climbing, it’s relatively safe; the two things people complain about most are sciatica and “climber’s back” (basically a very weird-looking but not that harmful form of kyphosis).
I honestly found the idea rather weird… since one of the main reasons I climb (besides the fact that it’s fun) is that it helped me correct my kyphosis and basically got rid of any back/neck discomfort I felt from sitting too much at a computer.
I think this boils down to how people climb, especially how they do bouldering.
Hurling limbs at tremendous speeds to try and grab onto something tiny.
Falling on the mat, often and from large heights. Climbing goes two ways, up and down; most people doing bouldering only care about up.
Indeed, a typical bouldering run might look something like: “Climb carefully and skillfully as much as possible, hurl yourself with the last bit of effort you have hoping you reach the top, fall on the mat, rinse and repeat”.
This is probably one of the stupidest things I’ve seen from a health perspective. You’re essentially praying for joint damage, a dislocated shoulder/knee, a torn muscle (doesn’t look pretty, I assume doesn’t feel nice, recovery times are long and sometimes fully recovering is a matter of years) and spine damage (orthopedists don’t agree on much, but I think all would agree the worst thing you can do for your spine is fall from a considerable height… repeatedly, like, dozens of times every day).
But the thing is, you can pretty much do bouldering without this; as in, you can be “decent” at it without doing any of this. Personally I approach bouldering as slowly and steadily climbing… to the top, with enough energy to also climb down, plus climbing down whenever I feel that I’m too exhausted to continue. Somehow, this approach to the sport is the one that gets you strange looks. The people pushing themselves above their limits, risking injury and getting persistent spine damage from falling… are the standard.
Another thing I enjoy is weight lifting; I especially enjoy weighted squats. Weighted squats are fun, they wake you up in the morning, and they’re a lazy person’s exercise for when you’ve got nothing else going on during the day.
I’ve heard people claim you can get lower back pain and injury from weighted squats; again, this seems confusing to me. I actually used to have minor lower back pain on occasion (again, from sitting), and the one exercise that seems to have permanently fixed that is the squat. A squat is what I do when I feel that my back is a bit stiff and I need some help.
But I think, again, this is because I am “getting squats wrong”, my approach to a squat is “Let me load a 5kg ergonomic bar with 25kg, do a squat like 8 times, check my posture on the last 2, if I’m able to hold it and don’t feel tired, do 5-10 more, if I still feel nice and energetic after a 1 minute break, rinse and repeat”.
The “right” way, apparently, is loading a bar with a few hundred kg, at least 2.5x your body weight, putting on a belt so that your intestines don’t fall out and lowering it “ONCE”, because fuck me, you’re not going to be able to do that twice in a day. You should at least get a nosebleed every 2 or 3 tries if you’re doing this stuff correctly. Or, to quote one popular set of strength standards:
If you weigh 165 pounds and have one of the following fitness levels, the standard for your squat one-rep max is:
Untrained: 110 pounds
Novice: 205 pounds
… etc
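(For reference, that’s a body weight of roughly 75 kg, with 110 pounds ≈ 50 kg and 205 pounds ≈ 93 kg, which is what makes these numbers comparable with the kg figures below.)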
To say this seems insane is not far-fetched; basically the advice around the internet seems to be “If you’ve never done this before, aim for 40-60kg; if you’ve been to the gym a few times, go for 100+”.
Again, it’s hard to find data on this, but as someone that’s pretty bloody tall and has been using weights to train for years, the idea of starting with 50kg for a squat as an average person seems insane. I do 45kg from time to time to change things up; I’d never squat anything over 70kg even if you paid me… I can feel my body during the move, I can feel the tentative pressure on my lower back if my posture slips for a bit… that’s fine if you’re lifting 30kg, but it seems dangerous as heck if you’re lifting more than your body weight; it even feels dangerous at 60kg.
But again, I’m not doing squats correctly, I am in the wrong here as far as people doing weight training are concerned.
I’m also wrong when it comes to every sport. I’m a bad runner because I give up once my lungs have been burning for 5 minutes straight. I’m a horrible swimmer because I alternate styles and stick with low-speed ones that are overall better for toning all muscles and carry less risk of injury… etc.
Granted, I don’t think that people are too pushy about going to extremes. The few times people tell me some version of “try harder” phrased as a friendly encouragement, I finish what I’m doing, say thanks and lie to them that I have a slight injury and I’d rather not push it.
But deep inside I have a very strong suspicion that I’m not wrong on this thing. That somehow we’ve got ourselves into a very unhealthy memetic loop around sports, where pushing yourself is seen as the natural thing to do, as the thing you should be doing every day.
A very dangerous memetic loop: dangerous to some extent in that it causes injury, but much more dangerous because it might be discouraging people from sports. Either they try once, get an injury and quit, or they see it, think it’s too hard (and, the way most people do it, I think it is) and never really bother.
I’m honestly not sure why it might have started…
The obvious reason is that it physically feels good to do it; lifting a lot or running more than your body tells you you should is “nice”. But it’s nice in the same way that smoking a tiny bit of heroin before going about your day is nice (as in, quite literally, it seems to me the feelings are related and I think there’s some pharmacological evidence to back that up). It’s nice to do it once to see how it is, maybe I’ll do it every few months if I get the occasion and I feel I need a mental boost… but I wouldn’t necessarily advise it or structure my life around it.
The other obvious reason is that it’s a status thing, the whole “I can do this thing better than you, thus my rank in the hierarchy is higher”. But then… why is it so common with both genders? I’d see some reason for men to do this, because historically we’ve been doing it, but women competing in sports is a recent thing, hardly “built into our nature”, and most of the ones I know that practice things like climbing are among the most chilled-out people I’ve ever met.
The last reason might be that it’s about breaking a psychological barrier, the “Oh, I totally thought I couldn’t do that, but apparently I can”. But it seems to me like a very, very bad way of doing that. I can think of many other, safer, better ways, from solving a hard calculus problem to learning a foreign language in a month to forcing yourself to write an article every day… you know, things that have zero risk of paralysis and long-term damage involved.
But I think at this point imitation alone is enough to keep it going.
The “real” reason if I take the outside view is probably that that’s how sports are supposed to be done and I just got stuck with a weird perspective because “I play things safe”.
Indeed, it seems that the whole relation to mortality basically breaks down if you look at top performers. Going from things like strongman competitions and American football where life expectancy is lower, to things like running and cycling where some would argue the same but evidence is lacking, to football and tennis where it’s a bit above average.
Yeah, but elite athletes are at the tails, and The Tails Come Apart. I’d expect pro athletes to be sacrificing all sorts of things to get extreme performance in a particular sport, but that the average person who is working on general athletic performance won’t have that issue.
Handicap principle. Publicly burning off excess health is a better use of it, from your genes’ perspective, than just sitting on what is ultimately a depreciating asset.
This just boils down to “showing off” though. But this makes little sense considering:
a) both genders engage in bad practices. As in, I’d expect to see mostly men doing CrossFit, but that doesn’t match the pretty even gender split. “Showing off health” in a way that’s harmful to health is not evolutionarily adaptive for women (for whom it arguably pays off to live a long time, evolutionarily speaking). This is backed up by other high-risk behaviors being mainly a men’s thing.
b) sports are a very bad way to show off, especially the sports that come with a high risk of injury and permanent degradation when practiced in their current extreme form (e.g. weight lifting, climbing, gymnastics, rugby, hockey). The highest-payoff sports I can think of (in terms of social signaling) are football, American football, basketball and baseball… since they are popular and thus the competition is both intense and achieving high rank is rewarding. Other than American football they are all pretty physically safe as far as sports go… when there are risks, they come from other players (e.g. getting a ball to the head), not from over-training or over-performing.
So basically, if it’s genetic misfiring then I’d expect to see it misfire almost only in men, and this is untrue.
If it’s “rational” behavior (as in, rational from the perspective of our primate ancestor) then I’d expect to see the more dangerous forms of showing off bring the most social gains rather than vice-versa.
Granted, I do think the handicap principle can be partially to blame for “starting” the thing, but I think it continues because of higher-level memes that have little to do with social signaling or genetics.
Note 1: Not a relevant analogy unless you use the StackExchange Network.
I think the Stack Overflow reputation system is a good analogue for the issues one encounters with a long-running monetary system.
I like imaginary awards; when I was younger I specifically liked the imaginary awards from Stack Overflow (reputation) because I thought they’d help me get some recognition as a developer (silly, but in my defense, I was a teenager).
However, it proved to be very difficult to find questions that nobody else had answered which I could answer and were popular enough to get more than one or two upvotes for said answer (upvotes generate reputation).
I got to like 500 reputation and I slowly started being less active on SO (now the only questions I answer are basically my own, in case nobody provides an answer but I end up finding a solution).
I recently checked my reputation on SO and noticed I was close to 2,000 points, despite not being active on the website in almost 4 years o.o Because reputation from “old questions” accumulates. I thought “oh, how much would young me have loved to see this now-valueless currency reach such an arbitrarily high level”.
I think this is in many ways analogous to the issues with the monetary system. Money seems to lose its appeal as you get older, since it can buy less and less and you need less and less. All your social signaling and permanent possession needs are gone by the time you hit 60. All your “big dreams” now require too much energy, even if you theoretically have the capital to put them into practice.
At the same time Stack Exchange reputation gives you the power to judge others, you can gift reputation for a good answer, you can edit people’s answers and questions without approval, you can review questions and decide they are duplicates or don’t fit the community and reject them.
Again, something I’d have been very good at when I was 19 and deeply passionate about software development. Something that I’m probably less good at now, since I don’t have the energy to care and have probably lost some of my “general” knowledge since I’ve specialized more.
The same thing applies to money: as you get old and accumulate, you gain the ability to invest in other people. Think an idea is wrong/right? Now you have the capital to propel or damage it. But generally speaking, old people are probably in a worse position to understand the world and to understand which ideas would help/hinder our society and in which way. Young people might be equally clueless on a societal level, but at least they have some understanding on a personal level; they are involved, they have skin in the game.
Note 2: Obligatory disclaimer (since I assume I’m on a US-style leftist part of the internet) that I don’t mean this to be a communist manifesto based on poor empirical evidence about a vaguely related system. It’s just an interesting observation I felt like writing down.
I recently found it fun to think about the idea of whether or not there are separate consciousnesses in a single brain.
There’s the famous example of a corpus callosotomy producing split-brain people, where seemingly two rational but poorly-communicating entities exist within the brain. I think many people may get the intuition that it’s likely that both entities in this case (both hemispheres) are conscious in some way.
People also get the intuition that animals with brain processes far different from ours (rats, cats, cows… etc) may experience/produce something like consciousness.
Even more so, when coma patients wake up and tell stories of being conscious during the coma, just unable to act, we usually think that this is also a form of experience similar to what most of us call consciousness (if not exactly the same).
Yet there doesn’t seem to be a commonly-shared intuition that our own brain might harbor multiple conscious entities, despite the fact that there’s nothing to indicate the contrary.
Indeed, I would say that if our intuitions go something like:
1. Larger than {x} CNS ⇒ consciousness
2. Splitting up a CNS of size 2*{x} into two tightly linked bits ⇒ 2 consciousnesses
3. Consciousness does not require a define-able pattern to exist, or at least whatever pattern is required doesn’t seem to be a consistent opinion between people
I can see no reason why those intuitions couldn’t be strained to say that it is plausible and possibly even intuitive for there to be 2, or 3 or n conscious experiences going on within a brain at the same time.
Indeed, I would say it might even be more likely for my brain to have, say, 5 conscious experiences that don’t intersect going on at the same time, than for a rat with a brain much less developed than mine to have a single conscious experience.
Granted, I think functional MRI data might have a few things to opine about the former being less likely than the latter, but there’s still nothing fundamentally different about the two hypotheses. We are no more aware of whether a rat is conscious than we would be aware if our own brain had something in it that was conscious.
*Note: for the sake of argument I’m assuming we all share a definition of consciousness along the lines of “consciousness is what it is like to be and experience”. I’m also assuming a non-solipsistic viewpoint. I’m also assuming that, based on the fact that we are conscious, we can deduce other people are too, rather than being philosophical zombies.*
Having read more AI alarmist literature recently, as someone who strongly disagrees on the subject, I think I’ve come up with a decent classification for them based on the fallacies they commit.
There’s the kind of alarmist that understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence is very valuable. The caricature of this position is something along the lines of “PAC learning basically proves that with enough computational resources AGI will take over the universe”.
But I think that my disagreement with this first class of alarmist is not very fundamental, we can probably agree on a few things such as:
1. In principle, the kind of intelligence needed for AGI is a solved problem, all that we are doing now is trying to optimize for various cases.
2. The increase in computational resources is enough to get us closer and closer to AGI even without any more research effort being allocated to the subject.
These types of alarmists would probably agree with me that, if we found out a way to magically multiply two arbitrary tensors 100x faster than we do now, for the same electricity consumption, that would constitute a great leap forward.
But the second kind are the ones that scare/annoy me most, because they are the kind that don’t seem to really understand machine learning. Which results in them being surprised when machine learning models are able to do what it has been uncontroversially established for decades that machine learning models could do.
The not-so-caricatured representation of this position is: “Oh no, a 500,000,000-parameter model designed for {X} can outperform a 20KB decision tree when trained for task {Y}, the end is nigh!”
And if you think that this caricature is unkind (well, maybe the “end is nigh” part is), I’d invite you to read the latest blog entry by Scott Alexander, a writer whom I generally consider to be quite intelligent and rational, being amazed that a 1,500,000,000-parameter transformer architecture can be trained to play chess poorly… a problem that is so trivial one could probably power its training using a well-designed potato battery and an array of P2SC’s… simulated in Minecraft.
I’ve seen endless examples of this, usually boiling down to “Oh no, a very complex neural network can do a very simple task with about the same accuracy as an <insert basic sklearn classifier>” or “Oh no, a neural network can learn to compress information from arbitrary unlabeled data”. Which is literally what people have been doing with neural networks since like… forever. That’s the point of neural networks, they are usually inefficient and hard to tune but are highly generalizable.
I think this second viewpoint is potentially dangerous and I think it would be well worthwhile to educate people enough so that they move away from it. It seems to engender an irrational, religion-style fear in people and it shifts focus away from the real problems (e.g. giving models the ability to estimate uncertainty in their own conclusions).
Regarding MIRI/SIAI/Yudkowsky, I think you are considerably overestimating the extent to which the early AI safety movement took any notice of research. Early MIRI obsessed about stuff like AIXI, which AI researchers didn’t care about, and based a lot of their nightmare scenarios on “genie” style reasoning derived from fairy tales.
Having read more AI alarmist literature recently, as someone who strongly disagrees on the subject, I think I’ve come up with a decent classification for them based on the fallacies they commit.
I feel similarly, except I think the flaws are a lack of clarity and jumping to conclusions, at times, rather than fallacies.
But I think that my disagreement with this first class of alarmist is not very fundamental, we can probably agree on a few things such as:
1. In principle, the kind of intelligence needed for AGI is a solved problem, all that we are doing now is trying to optimize for various cases.
2. The increase in computational resources is enough to get us closer and closer to AGI even without any more research effort being allocated to the subject.
This is definitely not something you will find agreement on. Thinking that this is something that alarmists would agree with you on suggests you are using a different definition of AGI than they are, and may have other significant misunderstandings of what they’re saying.
being amazed that a 1,500,000,000-parameter transformer architecture can be trained to play chess poorly… a problem that is so trivial one could probably power its training using a well-designed potato battery and an array of P2SC’s… simulated in Minecraft.
Truly, a new standard for replication. Jokes aside, I wouldn’t have said ‘amazed’, just surprised.* The question around that is, how far can you go with just textual pattern matching?** What can’t be done that way? RTS games? ‘Actually playing an instrument’ rather than writing music for one?
*From the article:
Is any of this meaningful? How impressed should we be that the same AI can write poems, compose music, and play chess, without having been designed for any of those tasks? I still don’t know.
**Though for comparison, it might be useful to see how well other programs, or humans do on those tasks. Ideally this would be a novel task, which requires people who haven’t played chess or heard music, or using an unfamiliar notation.
Well, the “surprised” part is what I don’t understand.
As in, in principle, provided enough model complexity (as in, allowing it to model very complex functions) you can basically learn anything as long as you can format your inputs and outputs in such a way as to fit the model.
1.5B parameters is more than enough complexity to learn chess, given that it’s been done with models having 0.01% of that parameter count.
In general the only issue is that the less fitted your model is to the task, the longer it takes to train. Given that training a Deep Blue equivalent model on an RTX 2080 would take minutes, the fact that you can train something worse than that using the GPT-2 architecture in a few hours or days is… not really impressive, nor is it surprising. It would have been surprising if the training time were lower than (or similar to) that of other bleeding-edge models and the resulting model could out-perform or match them.
As it stands, the idea of a generalizable architecture that takes a very long time to train is not very useful, since we already have quick ways of doing architecture search and hyperparameter search. The idea that a given architecture can learn to solve X, Y and Z efficiently (which in this case it can’t) wouldn’t even be that impressive, unless you couldn’t get a good architecture search algorithm to solve X, Y and Z equally fast.
The idea that a given architecture can learn to solve X, Y and Z efficiently (which in this case it can’t) wouldn’t even be that impressive, unless you couldn’t get a good architecture search algorithm to solve X, Y and Z equally fast.
Most people don’t have something like an architecture search algorithm on hand. (Aside from perhaps their brains, as you mentioned in ‘AGI is here’ post.)
Well, the “surprised” part is what I don’t understand.
In this case, surprise is a result of learning something. Yes, it’s surprising to you that not everyone has learned this already. (Though there are different ways/levels of learning things.) Releasing a good architecture search might help, or writing a post about this: “GPT-2 can (probably) do anything that’s just moving symbols around, very badly. This might include Rubik’s cubes, but not dancing*.”
*I assume. Also guessing that moving in general is hard (for non-custom hardware; things other than brains) and it has a big space that GPT-2 doesn’t have a shot at (like StarCraft/DotA/etc.).
The concern is that ‘GPT-2 is bad at everything, but better than random’, and people wondering, ‘how long until something that is good at everything comes along’? Will it be sudden, or will ‘bad’ have to be replaced by ‘slightly less bad’ a thousand times over the course of the next hundred/thousand years?
Most people don’t have something like an architecture search algorithm on hand
I’m not sure what you mean by this…? Architecture search is fairly trivial to implement from scratch and takes literally 2 lines of code with something like Ax. Well, arguably it’s not trivial per se, but I think most people would have an easier time coming up with, understanding and implementing architecture search than coming up with, understanding and implementing a transformer (e.g. GPT-2) or any other attention-based network.
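To make the “couple of lines” claim concrete, here’s a rough sketch (not something I’ve benchmarked), assuming Ax’s high-level `optimize` loop; `train_and_score` is a hypothetical stand-in for whatever builds a small network from the sampled parameters, trains it briefly and returns a validation score:

```python
# Rough sketch, assuming Ax's Loop API (`ax.optimize`); the search space and the
# scoring function below are made up for illustration.
from ax import optimize

def train_and_score(params):
    # Hypothetical stand-in: in a real run you'd build a network with
    # params["n_layers"] / params["hidden_size"], train it briefly at params["lr"]
    # and return validation accuracy. Here we return a dummy score so the sketch runs.
    return 1.0 / (1.0 + abs(params["n_layers"] - 3) + abs(params["lr"] - 0.01))

best_parameters, best_values, experiment, model = optimize(
    parameters=[
        {"name": "n_layers", "type": "range", "bounds": [1, 6]},
        {"name": "hidden_size", "type": "choice", "values": [64, 128, 256, 512]},
        {"name": "lr", "type": "range", "bounds": [1e-4, 1e-1], "log_scale": True},
    ],
    evaluation_function=train_and_score,
    minimize=False,  # we're maximizing the (dummy) validation score
    total_trials=30,
)
print(best_parameters)
```

The search loop itself really is a handful of lines; all the actual work hides inside `train_and_score`, which is part of why I don’t find “a huge architecture can also learn task X, slowly” very surprising.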
I assume. Also guessing that moving in general is hard (for non-custom hardware; things other than brains) and it has a big space that GPT-2 doesn’t have a shot at (like StarCraft/DotA/etc.).
Again, I’m not sure why GPT-2 wouldn’t have a shot at StarCraft or Dota. The most basic fully connected network you could write, as long as it has enough parameters and the correct training environment, has a shot at StarCraft 2, Dota… etc. It’s just that it will learn more slowly than something built for those specific cases.
The concern is that ‘GPT-2 is bad at everything, but better than random’, and people wondering, ‘how long until something that is good at everything comes along’? Will it be sudden, or will ‘bad’ have to be replaced by ‘slightly less bad’ a thousand times over the course of the next hundred/thousand years?
Again, I’m not sure how “bad” and “good” are defined here. If you are defining them as “quick to train”, then again, something that’s “better at everything” than GPT-2 has been here since the 70s: dynamic architecture search (ok, arguably only widely used in the last 6 years or so).
If you are talking about “able to solve”, then again, any architecture with enough parameters should be able to solve any problem that is solvable given enough time to train; the time required to train it is the issue.
Again, I’m not sure why GPT-2 wouldn’t have a shot at StarCraft or Dota. The most basic fully connected network you could write, as long as it has enough parameters and the correct training environment, has a shot at StarCraft 2, Dota… etc.
Moving has a lot of degrees of freedom, as do those domains. There’s also the issue of quick response time (which is not something it was built for), and it not being an economical solution (which can also be said for OpenAI’s work in those areas).
When things built for starcraft don’t make it to the superhuman level, something that isn’t built for it probably won’t.
It’s just that it will learn more slowly than something built for those specific cases.
The question is how long − 10 years? Solving chess via analyzing the whole tree would take too much time, so no one does it. Would it learn in a remotely feasible amount of time?
The question is how long − 10 years? Solving chess via analyzing the whole tree would take too much time, so no one does it. Would it learn in a remotely feasible amount of time?
Well yeah, that’s my whole point here. We need to talk about accuracy and training time!
If the GPT-2 model was trained in a few hours and loses 99% of games vs a decision-tree-based model (a la Deep Blue) that was trained in a few minutes on the same machine, then it’s worthless. It’s exactly like saying “In theory, given almost infinite RAM and 10 years, we could beat Deep Blue (or alpha chess or whatever the cool kids are doing nowadays) by just analyzing a very large subset of all possible moves + combinations and arranging them hierarchically”.
So you think people should only be afraid/excited about developments in AGI that
1) are more recent than 50 to arguably 6 years ago
2) could do anything/a lot of things well with a reasonable amount of training time?
3) Or that might actually generalize in the sense of general artificial intelligence, that’s remotely close to being on par with humans (w.r.t ability to handle such a variety of domains)?
In regards to 1), I don’t necessarily think that older developments that are re-emerging can’t be interesting (see the whole RL scene nowadays, which to my understanding is very much bringing back the kind of approaches that were popular in the 70s). But I do think the particular ML development that people should focus on is the one with the most potential, which will likely end up being newer. My gripe with GPT-2 is that there’s no comparative proof that it has more potential to generalize than a lot of other things (e.g. quick architecture search methods, custom encoders/heads added to a resnet); actually, I’d say the sheer size of it and the issues one encounters when training it indicate the opposite.
I don’t think 2) is a must, but going back to 1), I think that training time is one of the important criteria for comparing the approaches we are focusing on, since training time on a simple task is arguably the best proxy you can get for training time on a more complex task.
As for 3) and 4)… I’d agree with 3), I think 4) is too vague, but I wasn’t trying to bring either point across in this specific post.
Just an example of a library that can be used to do hyperparameter search quickly.
But again, there are many tools and methodologies and you can mix and match; this is one (methodology/idea of architecture search) that I found kind of interesting, for example: https://arxiv.org/pdf/1802.03268.pdf
Walking into a new country where people speak very little English reminds me of the dangers of over communication.
Going into a restaurant and saying: “Could I get the Turkish coffee and an omelette with a… croissant, oh, and a glass of water, no ice and, I know this is a bit weird, but I like cinnamon in my Turkish coffee, could you add a bit of cinnamon to it? Oh, actually, could you scratch the omelette and do poached eggs instead”
is a recipe for failure. At best the waiter looks at you confused and you can be ashamed of your poor communication skills and start over.
At worst you’re getting an omelette, with a cinnamon bun instead of a croissant, two cups of turkish coffee, with some additional poached eggs and a room-temperature bottle of water.
Maybe a far-fetched example, but the point is: the more instructions you give and the more flourishes you put into your request, the higher the likelihood that the core of the request gets lost.
If you can point at the items on the menu and hold a number of fingers in the air to indicate the quantity, that’s an ideal way to order.
But it’s curious that this sort of over-communication never happens in, say, Japan. In places where people know very little to no English and where they don’t mind telling you that what you just said made no sense (or at least they get very visibly embarrassed, more so than their standard over-the-top anxiety, and the fact that it made no sense is instantly obvious to anyone).
It happens in the countries where people kinda-know English and where they consider it rude to admit to not understanding you.
Japanese and Taiwanese clerks, random pedestrians I ask for directions, and servers know about as much English as I know Japanese or Chinese. But we can communicate just fine via grunts, smiles, pointing, shaking of heads and taking out a phone for Google Translate if the interaction is nearing the 30-second mark with no resolution in sight.
The same archetypes in India and Lebanon speak close to fluent English though; give them 6-12 months in the UK or US plus a penchant for learning and they’d be native speakers (I guess it could be argued that many people in India speak 100% perfect English, just their own dialect, but for the intents and purposes of this post I’m referring to English as UK/US city English).
Yet it’s always in the second kind of country where I find my over-communicative style fails me. Partially because I’m more inclined to use it, partially because people are less inclined to admit I’m not making any sense.
I’m pretty sure this phenomenon is a very good metaphor for, or instantiation of, a principle that applies in many other situations, especially in expert communication. Or rather, in how expert-layman vs expert-expert vs expert-{almost expert} communication works.
90% certainty that this is bs because I’m waiting for a flight and I’m sleep deprived, but:
For most people there’s not a very clear way or incentive to have a meta model of themselves in a certain situation.
By meta model, I mean one that is modeling “high level generators of action”.
So, say that I know Dave:
Likes peanut-butter-jelly on thin crackers
Dislikes peanut-butter-jelly in sandwiches
Likes butter fingers candy
A completely non-meta model of Dave would be:
If I give Dave a butter fingers candy box as a gift, he will enjoy it
Another non-meta model of Dave would be:
If I give Dave a box of Reese’s as a gift, he will enjoy it, since I think they are kind of a combination between peanut-butter-jelly and butter fingers
A meta model of Dave would be:
Based on the 3 items above, I can deduce Dave likes things which are sweet, fatty, smooth with a touch of bitter (let’s assume peanut butter has some bitter to it) and crunchy but he doesn’t like them being too starchy (hence why he dislikes sandwiches).
So, if I give Dave a cup of sweet milk ice cream with bits of crunchy dark chocolate on top as a gift, he will love it.
Now, I’m not saying this meta-model is a good one (and Dave is imaginary, so we’ll never know). But my point is, it seems highly useful for us to have very good meta-models of other people, since that’s how we can predict their actions in extreme situations, surprise them, impress them, make them laugh… etc
On the other hand, we don’t need to construct meta-models of ourselves, because we can just query our “high level generators of action” directly, we can think “Does a cup of milk ice cream with crunchy dark chocolate on top sound tasty ?” and our high level generators of action will strive to give us an estimate which will usually seem “good enough to us”.
So in some way it’s easier for us to build meta-models of other people, out of simple necessity, and we might have better meta-models of other people than we have of ourselves… not because we couldn’t construct a better one, but because there’s no need for it. Or at least, based on the fallacy of knowing your own mind, there’s no need for it.
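To make the distinction a bit more concrete, here’s a toy sketch (the items, features and weights are all invented): the non-meta model is essentially a lookup table over things I’ve already observed Dave react to, while the “meta” model scores arbitrary new items from underlying features:

```python
# Toy illustration of the Dave example; everything here is made up for the sketch.

# Non-meta model: a lookup table of directly observed reactions.
observed_preferences = {
    "pb&j on thin crackers": True,
    "pb&j sandwich": False,
    "butter fingers candy": True,
}

# "Meta" model: score any item from its features, with weights eyeballed
# to fit the three observations above (sweet/fatty/crunchy good, starchy bad).
weights = {"sweet": 1.0, "fatty": 1.0, "crunchy": 1.0, "starchy": -2.0}

def meta_model_likes(features):
    return sum(weights[k] * v for k, v in features.items()) > 0

# The lookup table has nothing to say about an unseen item...
print(observed_preferences.get("milk ice cream with dark chocolate bits"))  # None

# ...while the feature-based model can at least guess.
ice_cream = {"sweet": 1.0, "fatty": 1.0, "crunchy": 0.5, "starchy": 0.0}
print(meta_model_likes(ice_cream))  # True
```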
I’d agree that this is useful to think on, but I tend to use “meta model” to mean “a model of how to build and apply models across distinct people”, and your example of abstracting Dave’s preferences is just another model for him, not all that meta.
I might suggest you call it an “abstract model” or an “explainable model”. In fact, if they make the same predictions, they’re equally powerful, but one is more compressible and easier to transmit (and examine in your head).
Hmh, I actually did not think of that one all-important bit. Yeap, what I described as a “meta model for Dave’s mind” is indeed a “meta model for human minds” or at least a “meta model for American minds” in which I plugged in some Dave-specific observations.
I’ll have to re-work this at some point with this in mind, unless there’s already something much better on the subject out there.
But again, I’ll excuse this with having been so tired when I wrote this that I didn’t even remember I did until your comment reminded me about it.
This shortform is a bit of a question/suggestion for anyone that might happen to read it.
It seems to me that public discussion has an obvious disadvantage: the added “signaling” one does when speaking to an audience.
I wouldn’t accuse the vast majority of LW users I read articles from or interacted with of this; as in, I think most people go to great lengths not to aim their arguments towards social signaling. But on the other hand, social signaling is so ingrained in the brain it’s almost impossible **not** to do it. I have a high prior on the idea that even when thinking to yourself, “yourself” is partially the closest interpretation you have of a section of the outside world you are explaining your actions/ideas to.
However, it seems that there are a lot of things that can reduce your tendency to socially signal; the 4 main ones I’ve observed are:
MDMA
Loads of ethanol
Anonymity
Privacy (i.e. the one between a small group of people)
The problem with options 1 and 2 is that they are poisonous with frequent exposure, plus the fact that ethanol makes me think about sex and politics, and MDMA makes me think about how I could ever enjoy sex and politics more than I currently enjoy the myriad of tactile sensations I feel when gently caressing this patch of dirt. I assume most people have problems along these same lines with any drug-induced states of open communication.
Anonymity works, but it works in that it showcases just how vile human thoughts are when they are inconsequentially shouted over one another into a void (see 4chan, Kiwi Farms, the Reddit front page… etc).
Privacy seems to work best; I’ve had many interesting discussions with friends that I could hardly replicate on the internet. However, I doubt that I’m alone in not having a friend that would be knowledgeable/opinionated/interested enough in any subject I’d fancy discussing.
So I’d argue it might be worthwhile to try something like an internet discussion forum with an entry barrier, where people can get paired up to discuss two different sides of a topic (with certain restrictions, e.g. no discussing politics, so that it doesn’t automatically turn into a cesspool no matter what).
The question would be what the entry barrier should be. I.e. if LW opened such a forum, and the entry barrier were just “you must type the URL into your address bar”, it might work for a bit, but it would have the potential of degenerating pretty fast (see the anonymity issue).
I could see several solutions to this issue, which one could mix and match, each with their own specific downsides:
Use some sort of internet points that denote someone’s positive involvement in the community as the entry barrier (e.g. karma on something like LW or Reddit)
Use a significant but not outrageous amount of money (e.g. $100), held in escrow by a moderator or an algorithm. The money is awarded to the other person if they discuss the topic with you at some length and provide satisfactory arguments, lost into the void (e.g. donated to an EA-picked charity) if this is arguably not the case, or refunded if your counterpart was obviously discussing in bad faith or lacking relevant knowledge.
Use some sort of real-life identification, which is not public to anyone but the database and the people you are discussing with, but is used as verification and as a “threat” that vile conduct could be punished by the moderators making said identity public.
Use some sort of real-life credentials (e.g. PhD, proof of compensation received to work in a certain field, endorsements from members of a field almost everyone would consider respectable, history of work published in relevant journals and/or citation count… etc). This would lend itself well to segmenting the discussion forum into different fields of interest.
Have the two parties meet IRL, or send physical letters, or some other thing which has a high cost of entry because the means of communication is inefficient and expensive.
I’m curious if something similar to this already exists; as in, something where one can find a quality of discussion similar to a place like LW, or a private chat with a research colleague, not something like Reddit CMV.
Alternatively I wonder what the potential templates and downsides for this kind of environment might be and why one doesn’t exist yet.
This line of thinking links up (in my mind) with something slightly different that I’ve thought about before, which is how you create a community where people aren’t afraid to be themselves, risk saying wrong things, and are willing to listen to others. I think there is some convergence with the signaling concern, because much of signaling can come from trying to present a view to others that signals something that might not quite be true or authentic, or, even if it is true, emphasizes certain things differently than the poster naturally would, creating a kind of mask or facade where the focus is on signaling well rather than on being oneself, saying wrong things, etc.
I think the solution is generally what I would call “safety” or “psychological safety”: people often feel unsafe in a wide variety of situations, don’t always realize they have deep, hidden fear powering their actions, and don’t know how to ask for more safety without risking giving up the little bit they are already creating for themselves by signaling, being defensive, and otherwise not being themselves to protect themselves from threats real or merely perceived.
I’ve seen the amazing benefits of creating safety in organizations and the kind of collaboration and happiness it can enable, but I’m less sure about how to do it in a large, online community. I like this kind of exploration of potential mechanisms for, as I think of it, creating enough safety to enable doing the things we really care about (being heard, collaborating, feeling happy to talk to others about our ideas, etc.).
I’m wondering if, in a competitive system with intelligent agents, regression to the mean is to be expected when one accumulates enough power.
Thinking about the business and investment strategies that a lot of rich people advocate, they seem kinda silly to me, in that they match the mental model of the economy that someone who never really bothered studying the market would have. It’s stuff like “just invest in safe index funds”, and other such strategies that will never get you rich (nowadays) if you start out poor. Indeed, you’d find more creativity and have better luck getting rich in the theories of a random shitposter on /r/wallstreetbets.
Or take a zero-sum-ish system, something like dating. I hear the wildest ideas, models and plans, from unusual wardrobe choices to texting strategies, from people that are… less than successful at attracting the other gender. But then when you talk to people that never in their life had an issue getting laid (i.e. pretty & charismatic people), they seem not to have spared a thought about how to be attractive to the other gender or how to “pick up” someone or anything along those lines. They operate on a very “standard” model that’s basically “don’t worry too much about it, you’ll end up finding the right person”.
I think you can find many such examples. To put a post-structuralist spin on it: “People with power in a given system will have a very standard view of said system”.
In a lot of systems the more power you hold, the easier it is to make the system work for you. The easier it is to make the system work for you, the less sophisticated or counter-intuitive your model of the system has to be, since you’re not looking for any “exploits”, you can just let things take their course and you will be well-positioned barring any very unusual events.
Whereas the less power you have, the more complex and unique your model of the system will have to be, since you are actively looking for said exploit to gain power in the system.
But now that I’m writing this out, I’m curious as to whether this observation is too “obvious” to be interesting or has a glaring flaw in it.
regression to the mean is going to happen in any system with a large random (or anti-inductive and unpredictable) component. That doesn’t seem to be what you’re talking about. You seem to be talking about variance and declining marginal utility (or rather, exception cases where marginal utility is actually increasing).
Nobody got rich in retail investing. A lot of people stayed comfortable for longer than they otherwise would have, but to paraphrase the old saying, the best way to make a million in the stock market is to start with 50 million. Likewise for other investment/return decisions: the advice given by the successful applies mostly to the successful—they’re focused on preserving and utilizing the leverage they already have, not on getting it in the first place.
If you’re starting smaller (and still going for large results), you probably have to take more risks. In investing, this means accepting that you’ll lose it all most of the time, but you’ve got a small chance at the million. Nobody will give you that advice, because most people don’t actually have that utility curve, and because advisors can’t make much money on you. For dating, if you’re conventionally attractive (including social standing in your target circle), you shouldn’t risk much or be too outrageous. If you’re trying to attract attention on unusual dimensions, you’ll need to take a lot of risks, and suffer a lot of rejection.
Basically, if you’re happy with an average outcome, look for safe, low variance behaviors, where predictability is more important than return. If you want an outlier, look for high-variance choices, where you might lose badly, but might win a lot as well. Note that the average outcome is probably WORSE with higher-variance activities, but the best outcomes are better.
An example of this is in games like backgammon—it’s pretty close to binary in outcome (there’s a large gap between losing by a little and being gammoned), so you will have a different risk profile for individual moves when you’re ahead than when you’re behind. If you’re behind, you’ll take the risk of bumping your opponent even if it leaves you exposed. If you’re ahead, you’ll be more conservative and safe. You won’t hurt your opponent as much, but you also won’t have as much gap between a normal and a lucky roll by your opponent. This is because when you’re behind, it doesn’t matter how much you lose by, so there’s no harm in the risk. When you’re ahead, you don’t get anything by improving the win gap, and you lose a LOT if your opponent gets lucky and pulls out the win.
I’ve been thinking a lot about replacing statistics with machine learning and how one could go about that. I previously tried arguing that the “roots” of a lot of classical statistical approaches are flawed, i.e. they make too many assumptions about the world and thus lead to faulty conclusions and overly complex models with no real insight.
I kind of abandoned that avenue once I realized people back in the late 60s and early 70s were making that point and proposing what are now considered machine learning techniques as a replacement.
I find it interesting what kind of beliefs one needs to question and in which ways in order to get people angry/upset/touchy.
Or, to put it in more popular terms, what kind of arguments make you seem like a smart-ass when arguing with someone.
For example, reading Eliezer Yudkowsky’s Rationality: From AI to Zombies, I found myself generally speaking liking the writing style, and to a large extent the book was just reinforcing the biases I already had. Other than some of the poorly-thought-out metaphysics on which he bases his ethics arguments… I honestly can’t think of a single thing from that book I disagree with. Same goes for Inadequate Equilibria.
The pattern that seems to most reliably earn you that label: arguing about a specific belief, then going a level down and challenging a pillar of the opponent’s belief that was not being considered as part of the discussion.
E.g.: “Arguing about whether or not climate change is a threat, going one level down and arguing that there’s not enough proof climate change is happening to begin with”
You can make this pattern even more annoying by doing something like:
Arguing about a specific belief
Going a level down and challenging a pillar of the opponent’s belief that was not being considered as part of the discussion.
Not entertaining an opposite argument about one of your own pillars being shaky.
E.g.: After the previous climate change argument, not entertaining the idea that “Maybe acting upon climate change as if it were real and as if it were a threat would actually result in positive consequences even if those two things were untrue”
You can make this pattern even more annoying by doing something like:
Arguing about a specific belief
Going a level down and challenging a pillar of the opponent’s belief that was not being considered as part of the discussion.
Doing so with some evidence that the other party is unaware or cannot understand
E.g.: After the previous climate change argument, back up your point about climate change not being real by citing various studies that would take hours to fact check and might be out of reach knowledge-wise for either of you.
***
I think there are other things that come into play.
For example, there are some specific fields which are considered more sacrosanct than others; trying to argue against a standard position in such a field as part of your argument seems to much more easily put you into the “smart-ass” camp.
For example, arguing against commonly held religious or medical knowledge, seems to be almost impossible, unless you are taking an already-approved side of the debate.
E.g. you can argue ibuprofen against paracetamol as the go-to for the common cold, since there are authoritative claims for each; you can’t argue for a third, less-backed NSAID, or for using corticosteroids or no treatment instead of NSAIDs.
Other fields such as ethics or physics or computer science seem to be fair game and nobody really minds people trying to argue for an unsanctioned viewpoint.
***
There’s obviously the idea of politics being overall bad, and the more politicized a certain subject is the less you can change people’s minds about it.
But to some extent I don’t feel like politics really comes into play.
It seems that people are fairly open to having their minds changed about economic policy but not about identity politics… no matter which side of the spectrum you are on. Which seems counterintuitive, since the issue of “should countries have open borders and free healthcare” seems much more deeply embedded in existing political agendas and of much more import than “What gender should transgender people be counted as when participating in the Olympics”.
***
One interesting thing that I observed: I’ve personally been able to annoy a lot of people when talking with them online. However, IRL, in the last 4 years or so (since I actually began explicitly learning how to communicate), I can’t think of a single person that I’ve offended.
Even though I’m more verbose when I talk. Even though the ideas I talk about over coffee are usually much more niche and questionable in their veracity than the ones I write about online.
I wonder if there’s some sort of “magic oratory skill” I’ve come closer to attaining IRL that either can’t be attained on the internet or is very different… granted, it’s more likely it’s the inherent bias of the people I’m usually discussing with.
I wonder why people don’t protect themselves from memes more. Just to be clear, I mean meme in the broad memetic theory of spreading ideas/thoughts sense.
I think there’s almost an intuitive understanding, or at least there was one in the environment I was brought up in, that some ideas are virulent and useless. I think that from this it’s rather easy to conclude that those ideas are harmful, since you only have space for so many ideas, so holding useless ideas is harmful in the sense that it eats away at a valuable resource (your mind).
I think modern viral ideas also tend more and more towards the toxic side, toxic in the very literal sense of “designed to provoke a rise in cortisol and/or dopamine that makes them more engaging yet is arguably provably harmful to the human body”. Though I think this is a point I don’t trust that much, speculation at best.
It’s rather hard to figure out which memes one should protect themselves from under these conditions; some good heuristics I’ve come up with are:
1. Memes that are new and seem to be embedded in the minds of many people, yet don’t seem to increase their performance on any metric you care about. (e.g. wealth, lifespan, happiness)
2. Memes that are old and seem to be embedded in the minds of many people, yet seem to decrease their performance on any metric you care about.
3. Memes that are being recommended to you in an automated fashion by a capable algorithm you don’t understand fully.
I think if a meme ticks one of these boxes, it should be taken under serious consideration as harmful. Granted, there are memes that tick all 3 (e.g. wearing a warm coat during winter), but those are so “common” and so deeply embedded in our minds that it’s pointless to bring them into the discussion.
A few examples I can think of.
Crypot currency in 2017&2018, passes 2 and 3, passes or fails 1 depending on the people you are looking at, ⇒ Depends
All ads and recommendations on pop websites (e.g. reddit, medium, youtube). Obviously fail at 3, sometimes fail at 1 if the recommendation is “something that went viral”. ⇒ Avoid
Extremist “Western” Religions, passes 1 and 3. Usually fails at 2. ⇒ Avoid
Contemplative practices, passes 2 and 3, fails 1 depending on the people you are looking at in the case of modern practices, doesn’t fail 1 in the case of traditional practices. ⇒ Depends
Intermittent fasting, passes 2 and 3, very likely passes 1 ⇒ Ok
Foucault, passes 3, arguably passes 1⁄2, but it depends on where you draw the “old” line ⇒ Depends
Complex Analysis, passes 3 and 1, very easy to argue it passes 2 ⇒ Ok
Granted, I’m sure there are examples where these rules of thumb fail miserably; my brain is probably subconsciously coming up with ones where they work. Even more so, I think the heuristics here are kind of obvious, but they are also pretty abstract and hard to defend if you were to scrutinize them properly.
Still, I can’t help but wonder if “safety measures” against toxic memes (taken by the individual, not political ones) shouldn’t be a subject that’s discussed more. I feel like it could bring many benefits and it’s such low-hanging fruit.
Then again, protecting ourselves against the memes we consider toxic might be something we all inherently do already and something we do pretty well. So my confusion here is mainly about how some people end up *not* considering certain memes to be toxic, rather than how they are unable to defend themselves from them.
I’m not sure what you mean by extremist Western religions. Mormonism, which might be one of the more extreme Western religions, correlates with longer life-span. In many cases it’s very hard to estimate outcomes on the metrics I care about.
Knowing things is hard.
When it comes to things like wearing a coat it’s very hard to know because the control group is quite small. The counter-examples I have in mind from people I personally know is one Wim Hof guy who shovels snow in a T-shirt. The other example is Julian Assange in his earlier years. There’s no example that comes to my mind of someone who went around in winter without a coat and who seems to be ineffective.
I mean, I’d argue the pro/against global warming meme isn’t worth holding either way, if you already hold the correct “Defer to overwhelming expert consensus in matters where the possible upside seem gigantic and the possible downside irrelevant” (i.e. switching from coal & oil based energy to nuclear, hydro, solar, geothermal and wind… which doesn’t bring severe downsides but has the obvious upside of possibly preventing global warming, having energy sources that are more reliable long-term, don’t pollute their surroundings and have better yield per resources spent… not to mention useable in a more decentralized way and useable in space).
So yeah, I’d argue both the global warming and the against global warming memes are at least pointless, since you are having the wrong f*** debate if you hold them. The debate should center around:
Upsides and Downsides of renewable energy (ignoring the potential effect on global warming)
How to model the function of faith in expert consensus and what parameters should go into it.
#1 and #2 can both be combined into the same prescription: don’t learn new things if their knowledge doesn’t improve your life satisfaction in some way. This is basically a tautology, and if you’re a rationalist it’s restating the habit of making beliefs pay rent in anticipated experiences, since that’s their only utility.
#3 I think is hitting on something, and I think it’s that we should be broadly skeptical of arguments put forth by people or organizations genuinely capable of manipulating us.
I wouldn’t say #1 and #2 state the same thing, since #1 basically says “If a meme is new, look for proof of benefits or lack thereof”, while #2 says “If a meme is old, look for proof of harm or lack thereof”.
I could combine them into “The newer a wide-spread meme is, the more obvious its benefits should be”, but I don’t think your summary does justice to those two statements.
1. Memes that are new and seem to be embedded in the minds of many people, yet don’t seem to increase their performance on any metric you care about. (e.g. wealth, lifespan, happiness)
2. Memes that are old and seem to be embedded in the minds of many people, yet seem to decrease their performance on any metric you care about.
3. Memes that are being recommended to you in an automated fashion by a capable algorithm you don’t understand fully.
Is this crypto currency, or a shorthand for pot crypto currency?
All ads and recommendation on pop [websites] (e.g. reddit, medium, youtube). Obviously fail at 3, sometimes fail at 1 if the recommendation is “something that went viral”. ⇒ Avoid
The issue there seems to be continuity—a one shot probably isn’t bad, though the fixes are mostly the same. (Though that algorithm is probably based around something like engagement, and other circumstances might require more care.)
Contemplative practices
Complex Analysis
What are these? (And should info about them be in spoilers?)
A few examples I can think of.
This section could have made a nice table, though it might have been harder to read that way.
I wonder if Social Justice missionaries will become a thing.
It seems to me that the SJ value-set/religion is growing more powerful and widely accepted, and the more this happens the more you get outliers. Either people trying to pander to it in order to unscrupulously raise their social standing at the cost of others, or “true believers” who take its dictums to the extreme.
To some extent, it must be that the heavily religious Europe of the 13th to 18th century suffered from these same issues. It also seems plausible that outliers might have been swayed to become missionaries, since “spreading the values” can be very profitable for those with few scruples and very appealing to the extreme moralists who want to value-maximize as much as possible.
It seems like the kind of thing that could happen without centralized intervention, it’s almost a natural conclusion that can be reached once you get enough outliers and enough people supporting the religion that to them it seems “obvious” it must be spread.
People want the outliers to leave (but can’t make this an open preference, since they couldn’t defend it within the dogma of the religion) and the incentives needed for it to happen are very easy to produce. So missionaries start being socially accepted and praised.
Granted, I can’t see any ‘good’ examples of this happening yet, so maybe I’m just speculating on an imaginary foundation.
The missionaries will not travel in geography-space, but in subculture-space.
For a mostly online movement, the important distances are not the thousands of miles, but debating on different websites, having different conferences, etc. (Well, the conferences have the geographical aspect, too.)
But what about places that are closed-off from the “global” virtual space (e.g. China, Arabia and possibly various African countries once their dictators get up to speed with technology) ?
Should discomfort be a requirement for important experiences ?
A while ago I was talking with a friend, lamenting the fact that there doesn’t exist some sort of sublingual DMT with an absorption profile similar to smoking DMT, but without the rancid taste.
(Side note, there are some ways to get sublingual DMT: https://www.dmt-nexus.me/forum/default.aspx?g=posts&t=10240 , but you probably won’t find it for sale at your local drug dealer and effects will differ a lot from smoking. In most experiences I’ve read about I’m not even convinced that the people are experiencing sublingual absorption rather than just slowly swallowing DMT with MAOIs and seeing the effects that way)
My point was something along the lines of:
I wish there was a way to get high on DMT without going through the unpleasant experience of smoking it; I’m pretty sure that experience serves to “prime” your mind to some extent and leads to a worse trip.
My friend’s point was:
We are talking about one of the most reality-shattering experiences ever possible to a human brain that doesn’t involve death or permanent damage, surely having a small cost of entry for that in terms of the unpleasant taste is actually a desirable side-effect.
I kind of ended up agreeing with my friend and I think most people would find that viewpoint appealing
But
You could make the same argument for something like knee surgery (or any life-changing surgery, which is most of them).
You are electing to do something that will alter your life forever and will result in you experiencing severe side-effects for years to come… but the step between “decide to do it” and “bear major consequences” has zero discomfort associated with it.
That’s not to say anything against knee surgery; much like with a DMT trip, I have a strong prior that it’s good for people (well, in this case assuming a doctor recommends you do it).
But I do find it a bit strange that this is the case with most surgery, even if it’s life altering, when I think of it in light of the DMT example.
But
If you’ve visited South Korea and seen the progressive nose mutilation going on in their society (I’m pretty sure this has a fancier name… see the terms they use in the study of super-stimuli, the seagulls-sitting-on-gigantic-painted-balls kind of thing), I’m pretty sure the surgery example can become blurrier.
As in, I think it’s pretty easy to argue people are doing a lot of unnecessary plastic surgery, and I’m pretty sure some cost of entry (e.g. you must feel mild discomfort for 3 hours to get this done… equivalent to, say, getting a tattoo on your arm) would reduce that number a lot, and intuitively that seems like a good thing.
It’s not like you could do that though; in practice you can’t really do “anesthesia with controlled pain level”, it’s either zero or operating within a huge error range (see people’s subjective reports of pain after dental anesthesia with similar quantities of lidocaine).
I’m wondering if the idea of investing in “good” companies makes sense from a purely self-centered perspective.
Assuming there’s two types of companies: A and B.
Assume that you think a future in which the vision of “A” comes true is a good future and a future in which the vision of “B” comes true is a bad future.
You can think of A as being whatever makes you happy, some examples might be: longevity, symbolic AI, market healthcare, sustainable energy, cheap housing… things that you are very certain you want in the future and that you are unlikely to stop wanting (again, these are examples, I am NOT saying I think everyone or even a majority of people would agree with this, replace them with whatever company you think is doing what you want to see more of).
You can think of B as being neutral or bad, some examples might be: MIA companies, state-backed rent-seeking companies (e.g. student debt, US health insurance), companies exploiting resources which will become scarce in the long run… etc.
It seems intuitive that if you can find company A1 and company B1 with similar indicators as to their market performance, you would get better yield investing in A1 as opposed to B1. Since in the future scenario where B1 is a good long term investment, the future looks kinda bleak anyway and the money might not matter so much. In the future scenario where A1 is a good long term investment, the future is nice and has <insert whatever you like>, so you have plenty of nice things to do with said money.
Which would seem to give a clear edge to the idea of investing in companies doing things which you consider to be “good”, assuming they are indistinguishable from companies doing things which you consider to be “bad” in terms of relevant financial metrics. Since you’re basically risking the loss of money you could have in a hypothetical future you wouldn’t want to live in anyway, and you are betting said money to maximize your utility in the future you want to live in.
Then again, considering that a lot of “good” companies on many metrics are newer and thus more risky and possibly overpriced I’m not sure how easy this heuristic could be applied with success in the real world.
Since you’re basically risking the loss of money you could have in a hypothetical future you wouldn’t want to live in anyway, and you are betting said money to maximize your utility in the future you want to live in.
Actually, this is backwards; by investing in companies that are worth more in worlds you like and worth less in worlds you don’t, you’re increasing variance, but variance is bad (when investing at scale, you generally pay money to reduce variance and are paid money to accept variance).
Actually, this is backwards; by investing in companies that are worth more in worlds you like and worth less in worlds you don’t, you’re increasing variance
If you treat the “world you dislike” as one where you can still get about the same bang for your buck, yes.
But I think this wouldn’t be the case with a lot of good/bad visions of the future pairs.
Example:
BELIEF: You believe healthcare will advance past treating symptoms and move into epigenetically correcting the mechanisms that induce tissue degeneration.
a) You invest in this vision, it doesn’t come to pass. You die poor~ish and in horrible suffering at 70.
b) You invest in a company that would make money on the downside of this vision (e.g. palliative care focused company). The vision doesn’t come to pass. You die rich but still in less horrible but more prolonged suffering at 76 (since you can afford more vacations, better food and better doctors).
c) You invest in this vision, it does come to pass. You have the money to afford the new treatments as soon as they are out on the market; now at 70 you regain most functionality you had at 20 and can expect another 30-40 years of healthy life, and you hope that future developments will extend this.
d) You invest in a company that would make money on the downside of this vision, it does come to pass. You die poor~ish and in horrible suffering at 80 (because you couldn’t afford the best treatment), with the added spite for the fact that other people get to live for much longer.
---
To put it more simply, money has more utility-buying power in the “good” world than in the “bad” world, assuming the “good” is created by the market (and thus purchasable).
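A toy numerical sketch of this point in Python (every number here is invented purely for illustration; the only assumption doing the work is that a unit of money buys more utility in the “good” world):

```python
# Two equally likely futures; each bet pays off only in the world whose vision
# it backs. Both bets have the same expected monetary value, but utility of
# wealth is assumed to be state-dependent: money buys more of what you care
# about in the "good" world.
P_GOOD = 0.5          # assumed probability the "good" vision (A) comes true
PAYOFF = 100_000      # payoff of a winning bet, identical for A1 and B1
BASE_WEALTH = 50_000  # wealth you have regardless of the bet

def utility(wealth, world):
    # Hypothetical state-dependent utility: in the "bad" world a dollar buys
    # much less of what you actually want, hence the smaller multiplier.
    multiplier = 1.0 if world == "good" else 0.3
    return multiplier * wealth ** 0.5  # diminishing returns in both worlds

def expected_utility(bet_on):
    u_good = utility(BASE_WEALTH + (PAYOFF if bet_on == "A" else 0), "good")
    u_bad = utility(BASE_WEALTH + (PAYOFF if bet_on == "B" else 0), "bad")
    return P_GOOD * u_good + (1 - P_GOOD) * u_bad

print("Bet on A1 (good-world company):", round(expected_utility("A"), 1))  # ~227.2
print("Bet on B1 (bad-world hedge):   ", round(expected_utility("B"), 1))  # ~169.9
```

With these made-up numbers, betting on A1 comes out ahead even though both bets have identical expected monetary value, which is the asymmetry I was gesturing at.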
I’m wondering if wealth-re-distribution in the “best case” scenario would have any positive effects.
So, assume a world in which the top-x wealthy have wealth that resides purely in the form of gold. Not stocks, or bonds, or whatever, so taking it away from them won’t de-stabilize the economy in the same sense that taking away stock from a CEO will break down the incentive chains that keep a company running properly.
Also, assume that everyone likes gold so much that it will keep its inherent value once it’s redistributed.
Would this have any effect besides a sudden~ish price inflation for stuff like food, housing, cars… etc, since everyone can now buy more/better, thus demand goes up at once with no modification in production?
As in, would this wealth redistribution be able to re-direct more resources into relevant industries to keep them more efficient ?
It seems to me that a lot of common-usage products are already heavily efficient in terms of production compared to luxury product, since the market is much broader.
Where improvements can be made, the issue usually seems to be regulatory/ethical/consensus-related (e.g. zoning laws for housing, experimental ethics and drug-trial statistical power regulations for medical research).
So, for example, custom-car production is very expensive not because the materials are expensive, but because a custom car requires loads of specialized artisan work. However, if all those artisans were to go out of business and be forced to get jobs at a mass-production facility… would that bring any benefits in terms of how the cars are designed to make production cheaper? Even assuming 10 years pass and they are re-trained as highly-skilled mass-car producers, would that help? Or are there physical limitations (e.g. cost of materials, time needed to assemble an engine) or regulatory limitations (e.g. safety testing) that keep the price of the cheapest possible car one could make above a certain threshold (seems to be ~4,000 euros)?
Intuition pump: think about this in the context of consumption curves in one’s own life, i.e. is any utility gained by moving consumption forward or backward in time between selves?
for example, custom-cat production is very expensive not because the materials are expensive, but because a custom yacht requires loads of specialized artisan work.
Presumably a typo? Though I bet there is something like designer cats.
That’s a very weak man form of redistribution. If you redistribute into services, such as public health and education, you avoid the inflationary problem, and since both are labour intensive, you can create jobs.
That is actually a good point, I was focused too much on material goods and not thinking of service jobs.
Indeed, even if you take the real form of redistribution, which is closer to evening out social status than redistributing any form of real wealth, it would probably incentivize people to go into arguably useful service jobs more. (e.g. there are probably a lot of people who would be good medical researchers but become traders or “tech entrepreneurs” because in our current world it yields much more social status, even if the difference in actual material goods is not so great; wealth itself allows for signaling high status).
For some reason, despite reading socialist philosophers/activists which make these arguments… I’m just unable to stick them anywhere in my brain in such a way that I can remember them next time I even think about trying to argue a strong-man representation of redistribution.
I find it funny that Jordan Peterson is basically using his social power to now redefine the word postmodernism/postmodernist.
Taking your framing as given: It’s called “lying” or “name calling”.
So when you label someone as Nazi you are trying to create in/out group differentiation & distance by appealing to…
I disagree with the “you” in this sentence. (It may work as a question. )
Self reference in these cases is an opportunity to make new categories, carefully. For comparison: Is killing people wrong? If yes (in all cases), then it would be wrong both for people to try to kill you, and for you to kill them in self defense. ‘In all cases’ may be incorrect—this restriction can make the answer to a question neither yea nor nay.
I disagree with the “you” in this sentence. (It may work as a question. )
As in, with a question mark at the end ? That’s what I originally intended, I believe, but I ended up thinking the phrasing already conveys the “question-ness” of it.
Terminology sometimes used to distinguish between ‘good’ and ‘bad’ stress is “eustress” vs “distress”.
Physical performance is one thing that isn’t really “needed” in any sense of the word for most people.
For most people, the need for physical activity seems to boil down to the fact that you just feel better, live longer and overall get less health related issues if you do it.
But on the whole, I’ve seen very little proof that excelling in physical activity can help you with anything (other than being a professional athlete or trainer, that is). Indeed, it seems that the whole relation to mortality basically breaks down if you look at top performers: going from things like strongman competitions and American football, where life expectancy is lower, to things like running and cycling, where some would argue it is lower but evidence is lacking, to football and tennis, where it’s a bit above average.
If the subject interests you, I’ve personally looked into it a lot, and I think this is the definitive review: https://yorkspace.library.yorku.ca/xmlui/bitstream/handle/10315/32723/Lemez_Srdjan_2016_PhD.pdf
But it’s basically a bloody book, I personally haven’t read all of it, but I often go back to it for references.
Also, there’s the much more obvious problem with pushing yourself to the limits: injury. I think this is hard to quantify and there are few studies looking at it. In my experience I know a surprising number of “active” people that got injured in life-altering ways from things like skating, skiing, snowboarding and even football (not in the paraplegic sense, more in the “I have a bar of titanium going through my spine and I can’t lift more than 15kg safely” sort of way). Conversely, none of my couch-dwelling buddies in average physical shape seem to suffer from any chronic pain.
To some extent, this annoys me, though I wonder if poor studies and anecdotal evidence is enough to warrant that annoyance.
For example, I frequent a climbing gym. Now, if you look at climbing, it’s relatively safe; there are two things people complain about most: sciatica and “climber’s back” (basically a very weird-looking but not that harmful form of kyphosis).
I honestly found the idea rather weird… since one of the main reasons I climb (besides the fact that it’s fun) is that it helps and helped me correct my kyphosis and basically got rid of any back/neck discomfort I felt from sitting too much at a computer.
I think this boils down to how people climb, especially how they do bouldering.
A reference for what the extreme kind of bouldering looks like: https://www.youtube.com/watch?v=7brSdnHWBko
The two issues I see here are:
Hurling limbs at tremendous speeds to try and grab onto something tiny.
Falling on the mat, often and from large heights. Climbing goes two ways, up and down; most people doing bouldering only care about up.
Indeed, a typical bouldering run might look something like: “Climb carefully and skillfully as much as possible, hurl yourself with the last bit of effort you have hoping you reach the top, fall on the mat, rinse and repeat”.
This is probably one of the stupidest things I’ve seen from a health perspective. You’re essentially praying for joint damage, dislocating a shoulder/knee, tearing a muscle (doesn’t look pretty, I assume doesn’t feel nice, recovery times are long and sometimes fully recovering is a matter of years) and spine damage (orthopedists don’t agree on much, but I think all would agree the worst thing you can do for your spine is fall from a considerable height… repeatedly, like, dozens of times every day).
But the thing is, you can pretty much do bouldering without this, as in, you can be “decent” at it without doing any of this. Personally I approach bouldering as slowly and steadily climbing… to the top, with enough energy to also climb down, plus climbing down whenever I feel that I’m too exhausted to continue. Somehow, this approach to the sport is the one that gets you strange looks. The people pushing themselves above the limits, risking injury and getting persistent spine damage from falling… are the standard.
Another thing I enjoy is weight lifting, I especially enjoy weighted squats. Weighted squats are fun, they wake you up in the morning, they are a lazy person’s exercise when you’ve got nothing else planned during the day.
I’ve heard people claim you can get lower back pain and injury from weighted squats, again, this seems confusing to me. I actually used to have minor lower back pain on occasions (again, from sitting), the one exercise that seemed to have permanently fixed that is a squat. A squat is what I do when I feel that my back is a bit stiff and I need some help.
But I think, again, this is because I am “getting squats wrong”, my approach to a squat is “Let me load a 5kg ergonomic bar with 25kg, do a squat like 8 times, check my posture on the last 2, if I’m able to hold it and don’t feel tired, do 5-10 more, if I still feel nice and energetic after a 1 minute break, rinse and repeat”.
But the correct squat, I believe, looks something like this: https://www.youtube.com/watch?v=nLVJTBZtiuw
Loading a bar with a few hundred kg, at least 2.5x your body weight, putting on a belt so that your intestines don’t fall out and lowering it “ONCE”, because fuck me you’re not going to be able to do that twice in a day. You should at least get some nosebleed every 2 or 3 tries if you’re doing this stuff correctly.
I’ve seen this in gyms, I’ve seen this in what people recommend, if I google “how much weight should I squat”, the first thing I get is: https://www.livestrong.com/article/286849-normal-squat-weight/
To say this seems insane is an understatement; basically the advice around the internet seems to be “If you’ve never done this before, aim for 40-60kg, if you’ve been to the gym a few times, go for 100+”.
Again, it’s hard to find data on this, but as someone that’s pretty bloody tall and has been using weights to train for years, the idea of starting with 50kg for a squat as an average person seems insane. I do 45kg from time to time to change things up, I’d never squat anything over 70kg even if you paid me… I can feel my body during the move, I can feel the tentative pressure on my lower back if my posture slips for a bit… that’s fine if you’re lifting 30kg, that seems dangerous as heck if you’re lifting more than your body weight, it even feels dangerous at 60kg.
But again, I’m not doing squats correctly, I am in the wrong here as far as people doing weight training are concerned.
I’m also wrong when it comes to every sport. I’m a bad runner because I give up once my lungs have been burning for 5 minutes straight. I’m a horrible swimmer because I alternate styles and stick with low-speed ones that are overall better for toning all muscles and have less risk of injury… etc.
Granted, I don’t think that people are too pushy about going to extremes. The few times people tell me some version of “try harder”, phrased as friendly encouragement, I finish what I’m doing, say thanks and lie to them that I have a slight injury and I’d rather not push it.
But deep inside I have a very strong suspicion that I’m not wrong on this thing. That somehow we’ve got ourselves into a very unhealthy memetic loop around sports, where pushing yourself is seen as the natural thing to do, as the thing you should be doing every day.
A very dangerous memetic loop, dangerous to some extent in that it causes injury, but much more dangerous because it might be discouraging people from sports. Both in that they try once, get an injury and quit. Or in that they see it, they think it’s too hard (and, I think it is, the way most people do it) and they never really bother.
I’m honestly not sure why it might have started…
The obvious reason is that it physically feels good to do it; lifting a lot or running more than your body tells you that you should is “nice”. But it’s nice in the same way that smoking a tiny bit of heroin before going about your day is nice (as in, quite literally, it seems to me the feelings are related and I think there’s some pharmacological evidence to back that up). It’s nice to do it once to see how it is, maybe I’ll do it every few months if I get the occasion and I feel I need a mental boost… but I wouldn’t necessarily advise it or structure my life around it.
The other obvious reason is that it’s a status thing, the whole “I can do this thing better than you thus my rank in the hierarchy is higher”. But then… why is it so common with both genders? I’d see some reason for men to do this, because historically we’ve been doing it, but women competing in sports is a recent thing, hardly “built into our nature”, and most of the ones I know that practice things like climbing are among the most chilled-out dudes I’ve ever met.
The last reason might be that it’s about breaking a psychological barrier, the “Oh, I totally thought I couldn’t do that, but apparently I can”. But it seems to me like a very, very bad way of doing that. I can think of many other, safer ways, from solving a hard calculus problem to learning a foreign language in a month to forcing yourself to write an article every day… you know, things that have zero risk of paralysis and long-term damage involved.
But I think at this point imitation alone is enough to keep it going.
The “real” reason if I take the outside view is probably that that’s how sports are supposed to be done and I just got stuck with a weird perspective because “I play things safe”.
Yeah but elite athletes are at the tails, and The Tails Come Apart. I’d expect pro athletes to be sacrificing all sorts of things to get extreme performance in a particular sport, but that the average person who is working on general athletic performance won’t have that issue.
Handicap principle. Publicly burning off excess health is a better use of it from your genes perspective than just sitting on what is ultimately a depreciating asset.
This just boils down to “showing off” though. But this makes little sense considering:
a) both genders engage in bad practices. As in, I’d expect to see a lot of men doing CrossFit, but it doesn’t make sense when you consider there’s a pretty even gender split. “Showing off health” in a way that’s harmful to health is not evolutionarily adaptive for women (for whom it arguably pays off to live for a long time, evolutionarily speaking). This is backed up by other high-risk behaviors being mainly a men’s thing.
b) sports are a very bad way to show off, especially the sports that come with a high risk of injury and permanent degradation when practiced in their current extreme (e.g. weight lifting, climbing, gymnastics, rugby, hockey). The highest pay-off sports I can think of (in terms of social signaling) are football, American football, basketball and baseball… since they are popular and thus the competition is both intense and achieving high rank is rewarding. Other than American football they are all pretty physically safe as far as sports go… when there are risks, they come from other players (e.g. getting a ball to the head) not from over-training or over-performing.
So basically, if it’s genetic misfiring then I’d expect to see it misfire almost only in men, and this is untrue.
If it’s “rational” behavior (as in, rational from the perspective of our primate ancestor) then I’d expect to see the more dangerous forms of showing off bring the most social gains rather than vice-versa.
Granted, I do think handicap principle can be partially to blame for “starting” the thing, but I think it continues because of higher level memes that have little to do with social signaling or genetics.
Note 1: Not a relevant analogy unless you use the StackExchange Network.
I think the Stack Overflow reputation system is a good analogue for the issues one encounters with a long-running monetary system.
I like imaginary awards; when I was younger I specifically liked the imaginary awards from Stack Overflow (reputation) because I thought they’d help me get some recognition as a developer (silly, but in my defense, I was a teenager).
However, it proved to be very difficult to find questions that nobody else had answered which I could answer and were popular enough to get more than one or two upvotes for said answer (upvotes generate reputation).
I got to like 500 reputation and I slowly started being less active on SO (now the only questions I answer are basically my own, in case nobody provides an answer but I end up finding a solution).
I recently checked my reputation on SO and noticed I was close to 2000 points, despite not being active on the website in almost 4 years o.o Because reputation from “old questions” accumulates. I thought “oh, how much would young me have loved to see this now-valueless currency reach such an arbitrarily high level”.
I think this is in many ways analogous to the issues with the monetary system. Money seems to lose its appeal as you get older, since it can buy less and less and you need less and less. All your social signaling and permanent possession needs are gone by the time you hit 60. All your “big dreams” now require too much energy, even if you theoretically have the capital to put them in practice.
At the same time Stack Exchange reputation gives you the power to judge others, you can gift reputation for a good answer, you can edit people’s answers and questions without approval, you can review questions and decide they are duplicates or don’t fit the community and reject them.
Again, something I’d be very good at when I was 19, and deeply passionate about software development. Something that I’m probably less good at now, since I haven’t the energy to care and probably have lost some of my “general” knowledge since I’ve specialized more.
Same thing applies to money: as you get old and accumulate, you get the ability to invest in other people. Think an idea is wrong/right ? Now you have the capital to propel or damage it. But generally speaking, old people are probably in a worse position to understand the world and to understand which ideas would help/hinder our society and in which way. Young people might be equally clueless on a societal level, but at least they have some understanding on a personal level, they are involved, they have skin in the game.
Note 2: Obligatory disclaimer (since I assume I’m on a US-style leftist part of the internet) that I don’t mean this to be a communist manifesto based on poor empirical evidence on a vaguely related system. It’s just an interesting observation I felt like writing down.
I recently found it fun to think about the idea of whether or not there are separate consciousnesses in a single brain.
There’s the famous example of a corpus callosotomy producing split-brain people, where seemingly two rational but poorly-communicating entities exist within the brain. I think many people may get the intuition that it’s likely that both entities in this case (both hemispheres) are conscious in some way.
People also get the intuition that animals with brain processes far different from ours (rats, cats, cows… etc) may experience/produce something like consciousness.
Even more so, when coma patients wake up and tell stories of being conscious during the coma, just unable to act, we usually think that this is also a form of experience similar to what most of us call consciousness (if not exactly the same).
Yet there doesn’t seem to be a commonly-shared intuition that our own brain might harbor multiple conscious entities, despite the fact that there’s nothing to indicate the contrary.
Indeed, I would say that if our intuitions go something like:
1. Larger than {x} CNS ⇒ consciousness
2. Splitting up a CNS of size 2*{x} into two tightly linked bits ⇒ 2 consciousnesses
3. Consciousness does not require a definable pattern to exist, or at least whatever pattern is required doesn’t seem to be a consistent opinion between people
I can see no reason why those intuitions couldn’t be strained to say that it is plausible and possibly even intuitive for there to be 2, or 3 or n conscious experiences going on within a brain at the same time.
Indeed, I would say it might even be more likely for my brain to have, say, 5 conscious experiences that don’t intersect going on at the same time, than for a rat with a brain much less developed than mine to have a single conscious experience.
Granted, I think functional MRI data might have a few things to opine about the former being less likely than the latter, but there’s still nothing fundamentally different about the two hypotheses. We are no more aware if a rat is conscious than we would be aware if our own brain had something in it that was conscious.
*Note: for the sake of argument I’m assuming we all share a definition of consciousness along the lines of “consciousness is what it is like to be and experience”. I’m also assuming a non-solipsistic viewpoint. I’m also assuming that, based on the fact that we are conscious, we can deduce other people are too, rather than being philosophical zombies.*
Having read more AI alarmist literature recently, as someone who strongly disagrees with it, I think I’ve come up with a decent classification of alarmists based on the fallacies they commit.
There’s the kind of alarmist that understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence is very valuable. The caricature of this position is something along the lines of “PAC learning basically proves that with enough computational resources AGI will take over the universe”.
<I actually wrote an article trying to argue against this position, the LW crosspost of which gave me the honor of having the most down-voted featured article in this forum’s history>
But I think that my disagreement with this first class of alarmist is not very fundamental, we can probably agree on a few things such as:
1. In principle, the kind of intelligence needed for AGI is a solved problem, all that we are doing now is trying to optimize for various cases.
2. The increase in computational resources is enough to get us closer and closer to AGI even without any more research effort being allocated to the subject.
These types of alarmists would probably agree with me that, if we found a way to magically multiply two arbitrary tensors 100x faster than we do now, for the same electricity consumption, that would constitute a great leap forward.
But the second kind are the ones that scare/annoy me most, because they are the kind that don’t seem to really understand machine learning. Which results in them being surprised by the fact that machine learning models are able to do what it has been uncontroversially established machine learning models could do for decades.
The not-so-caricatured representation of this position is: “Oh no, a 500,000,000-parameter model designed for {X} can outperform a 20KB decision tree when trained for task {Y}, the end is nigh !”
And if you think that this caricature is unkind (well, maybe the “end is nigh” part is), I’d invite you to read the latest blog entry by Scott Alexander, a writer whom I generally consider to be quite intelligent and rational, being amazed that a 1,500,000,000-parameter transformer architecture can be trained to play chess poorly… a problem that is so trivial one could probably power its training using a well-designed potato battery and an array of P2SC’s… simulated in minecraft.
I’ve seen endless examples of this, usually boiling down to “Oh no, a very complex neural network can do a very simple task with about the same accuracy as an <insert basic sklearn classifier>” or “Oh no, a neural network can learn to compress information from arbitrary unlabeled data”. Which is literally what people have been doing with neural networks since like… forever. That’s the point of neural networks, they are usually inefficient and hard to tune but are highly generalizable.
I think this second viewpoint is potentially dangerous and I think it would be well worthwhile to educate people enough so that they switch away from it. Since it seems to engender an irrational, religion-style fear in people and it shifts focus away from the real problems (e.g. giving models the ability to estimate uncertainty in their own conclusions).
Regarding MIRI/SIAI/Yudkowsky, I think you are considerably overestimating the extent to which the early AI safety movement took any notice of research. Early MIRI obsessed about stuff like AIXI, that AI researchers didn’t care about, and based a lot of their nightmare scenarios on “genie” style reasoning derived from fairy tales.
And the thing I said that isn’t factually correct is...
(This is arguably testable.)
The only thing factually incorrect is your implied assumption that voting has anything to do with truth assessment here ;)
I feel similarly, except I think the flaws are a lack of clarity and jumping to conclusions, at times, rather than fallacies.
This is definitely not something you will find agreement on. Thinking that this is something that alarmists would agree with you on suggests you are using a different definition of AGI than they are, and may have other significant misunderstandings of what they’re saying.
Would you care to go into more details ?
If there’s different definitions of AGI then that’s quite a barrier to understanding generally. Never mind my confusions as a curious newbie.
This feels a really good time to jump in and ask for a working definition of AGI. (simple words, no links to essays please.)
Truly, a new standard for replication. Jokes aside, I wouldn’t have said ‘amazed’, just surprised.* The question around that is, how far can you go with just textual pattern matching?** What can’t be done that way? RTS games? ‘Actually playing an instrument’ rather than writing music for one?
*From the article:
**Though for comparison, it might be useful to see how well other programs, or humans do on those tasks. Ideally this would be a novel task, which requires people who haven’t played chess or heard music, or using an unfamiliar notation.
Well, the “surprised” part is what I don’t understand.
As in, in principle, provided enough model complexity (as in, allowing it to model very complex functions) you can basically learn anything as long as you can format your inputs and outputs in such a way as to fit the model.
1.5B parameters is more than enough complexity to learn chess, given that it’s been done with models using 0.01% the number of parameters.
In general the only issue is that the less fitted your model is for the task, the longer it takes to train it. Given that training a Deep Blue-equivalent model on an RTX 2080 would be counted in minutes, the fact that you can train something worse than that using the GPT-2 architecture in a few hours or days is… not really impressive, nor is it surprising. It would have been surprising if the training time was lower than, or similar to, other bleeding-edge models and the resulting model could outperform or match them.
As it stands, the idea of a generalizable architecture that takes a very long time to train is not very useful, since we already have quick ways of doing architecture search and hyperparameter search. The idea that a given architecture can learn to solve X, Y and Z efficiently (which in this case it doesn’t) wouldn’t even be that impressive, unless you couldn’t get a good architecture search algorithm to solve X, Y and Z equally fast.
Most people don’t have something like an architecture search algorithm on hand. (Aside from perhaps their brains, as you mentioned in ‘AGI is here’ post.)
In this case, surprise is a result of learning something. Yes, it’s surprising to you that not everyone has learned this already. (Though there are different ways/levels of learning things.) Releasing a good architecture search might help, or writing a post about this: “GPT-2 can do (probably) anything very badly that’s just moving symbols around. This might include Rubik’s cubes, but not dancing*).”
*I assume. Also guessing that moving in general is hard (for non-custom hardware; things other than brains) and it has a big space that GPT-2 doesn’t have a shot at (like StarCraft/DotA/etc.).
The concern is that ‘GPT-2 is bad at everything, but better than random’, and people wondering, ‘how long until something that is good at everything comes along’? Will it be sudden, or will ‘bad’ have to be replaced by ‘slightly less bad’ a thousand times over the course of the next hundred/thousand years?
I’m not sure what you mean by this… ? Architecture search is fairly trivial to implement from scratch and takes literally 2 lines of code with something like Ax. Well, it’s arguable whether it’s trivial per se, but I think most people would have an easier time coming up with, understanding and implementing architecture search than coming up with, understanding and implementing a transformer (e.g. GPT-2) or any other attention-based network.
Again, I’m not sure why GPT-2 wouldn’t have a shot at StarCraft or Dota. The most basic fully connected network you could write, as long as it has enough parameters and the correct training environment, has a shot at StarCraft 2, Dota… etc. It’s just that it will learn slower than something built for those specific cases.
Again, I’m not sure how “bad” and “good” are defined here. If you are defining them as “quick to train”, then again, something that’s “better at everything” than GPT-2 has been here since the 70s: dynamic architecture search (ok, arguably only widely used in the last 6 years or so).
If you are talking about “able to solve”, then again, any architecture with enough parameters should be able to solve any problem that is solvable given enough time to train; the time required to train it is the issue.
Moving has a lot of degrees of freedom, as do those domains. There’s also the issue of quick response time (which is not something it was built for), and it not being an economical solution (which can also be said for OpenAI’s work in those areas).
When things built for starcraft don’t make it to the superhuman level, something that isn’t built for it probably won’t.
The question is how long − 10 years? Solving chess via analyzing the whole tree would take too much time, so no one does it. Would it learn in a remotely feasible amount of time?
Well yeah, that’s my whole point here. We need to talk about accuracy and training time !
If the GPT-2 model was trained in a few hours, and loses 99% of games vs a decision-tree-based model (à la Deep Blue) that was trained in a few minutes on the same machine, then it’s worthless. It’s exactly like saying “In theory, given almost infinite RAM and 10 years we could beat Deep Blue (or alpha chess or whatever the cool kids are doing nowadays) by just analyzing a very large subset of all possible moves + combinations and arranging them hierarchically”.
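A minimal sketch of the kind of side-by-side I mean, in Python with sklearn (the digits dataset and the two models here are just stand-ins for “a simple task”, “a small purpose-built model” and “an oversized generic model”):

```python
# Report accuracy *together with* training cost, rather than accuracy alone.
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    ("small decision tree", DecisionTreeClassifier(random_state=0)),
    ("oversized generic MLP", MLPClassifier(hidden_layer_sizes=(512, 512),
                                            max_iter=500, random_state=0)),
]
for name, model in models:
    start = time.time()
    model.fit(X_train, y_train)
    elapsed = time.time() - start
    print(f"{name}: accuracy={model.score(X_test, y_test):.3f}, "
          f"train_time={elapsed:.2f}s")
```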
So you think people should only be afraid/excited about developments in AGI that
1) are more recent than 50 to arguably 6 years ago
2) could do anything/a lot of things well with a reasonable amount of training time?
3) Or that might actually generalize in the sense of general artificial intelligence, that’s remotely close to being on par with humans (w.r.t ability to handle such a variety of domains)?
4) Seem actually agent-like.
In regards to 1), I don’t necessarily think that older developments that are re-emerging can’t be interesting (see the whole RL scene nowadays, which to my understanding is very much bringing back the kind of approaches that were popular in the 70s). But I do think the particular ML development that people should focus on is the one with the most potential, which will likely end up being newer. My gripe with GPT-2 is that there’s no comparative proof that it has potential to generalize compared to a lot of other things (e.g. quick architecture search methods, custom encoders/heads added to a resnet); actually I’d say the sheer size of it and the issues one encounters when training it indicate the opposite.
I don’t think 2) is a must, but going back to 1), I think that training time is one of the important criteria for comparing the approaches we are focusing on. Since training time on a simple task is arguably the best estimate you can get of training time for a more complex task.
As for 3) and 4)… I’d agree with 3), I think 4) is too vague, but I wasn’t trying to bring either point across in this specific post.
?
https://github.com/facebook/Ax
Just an example of a library that can be used to do hyperparameter search quickly.
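For reference, a minimal sketch of what using it looks like (the exact API may differ between versions, and the objective here is a made-up toy function standing in for “train a model and return a validation metric”):

```python
# Toy hyperparameter search with Ax; in practice evaluation_function would
# train a model with the given parameters and return a validation score.
from ax import optimize

best_parameters, best_values, experiment, model = optimize(
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-5, 1e-1], "log_scale": True},
        {"name": "hidden_size", "type": "range", "bounds": [32, 512]},
    ],
    evaluation_function=lambda p: (p["lr"] - 0.01) ** 2
                                  + ((p["hidden_size"] - 128) ** 2) / 1e4,
    minimize=True,
    total_trials=20,
)
print(best_parameters)
```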
But again, there are many tools and methodologies and you can mix and match; this is one (methodology/idea of architecture search) that I found kind of interesting, for example: https://arxiv.org/pdf/1802.03268.pdf
Errata:
two?
fixed those mistakes
Walking into a new country where people speak very little English reminds me of the dangers of over communication.
Going into a restaurant and saying: “Could I get the Turkish coffee and an omelette with a… croissant, oh, and a glass of water, no ice and, I know this is a bit weird, but I like cinnamon in my Turkish coffee, could you add a bit of cinnamon to it ? Oh, actually, could you scratch the omelette and do poached eggs instead”
Is a recipe for failure, at best the waiter looks at you confused and you can be ashamed of your poor communication skills and start over.
At worst you’re getting an omelette, with a cinnamon bun instead of a croissant, two cups of Turkish coffee, with some additional poached eggs and a room-temperature bottle of water.
Maybe a far-fetched example, but the point is: the more instructions you give and the more flourishes you put into your request, the higher the likelihood that the core of the request gets lost.
If you can point at the items on the menu and hold a number of fingers in the air to indicate the quantity, that’s an ideal way to order.
But it’s curious that this sort of over-communication never happens in, say, Japan. In places where people know very little to no English and where they don’t mind telling you that what you just said made no sense (or at least they get very visibly embarrassed, more so than their standard over-the-top anxiety, and the fact that it made no sense is instantly obvious to anyone).
It happens in the countries where people kinda-know English and where they consider it rude to admit to not understanding you.
Japanese and Taiwanese clerks, random pedestrians I ask for directions, and servers know about as much English as I know Japanese or Chinese. But we can communicate just fine via grunts, smiles, pointing, shaking of heads and taking out a phone to google translate if the interaction is nearing the 30-second mark with no resolution in sight.
The same archetypes in India and Lebanon speak close to fluent English though; give them 6-12 months in the UK or US plus a penchant for learning and they’d be native speakers (I guess it could be argued that many people in India speak 100% perfect English, just their own dialect, but for the intents and purposes of this post I’m referring to English as UK/US city English).
Yet it’s always in the second kind of country where I find my over-communicative style fails me. Partially because I’m more inclined to use it, partially because people are less inclined to admit I’m not making any sense.
I’m pretty sure this phenomenon is a very good metaphor for, or instantiation of, a principle that applies in many other situations, especially in expert communication. Or rather, in how expert-layman vs expert-expert vs expert-{almost expert} communication works.
90% certainty that this is bs because I’m waiting for a flight and I’m sleep-deprived, but:
For most people there’s not a very clear way or incentive to have a meta model of themselves in a certain situation.
By meta model, I mean one that is modeling “high level generators of action”.
So, say that I know Dave:
Likes peanut-butter-jelly on thin crackers
Dislikes peanut-butter-jelly in sandwiches
Likes butter fingers candy
A completely non-meta model of Dave would be:
If I give Dave a butter fingers candy box as a gift, he will enjoy it
Another non-meta model of Dave would be:
If I give Dave a box of Reese’s as a gift, he will enjoy it, since I think they are kind of a combination between peanut-butter-jelly and Butterfingers
A meta model of Dave would be:
Based on the 3 items above, I can deduce Dave likes things which are sweet, fatty, smooth with a touch of bitter (let’s assume peanut butter has some bitter to it) and crunchy but he doesn’t like them being too starchy (hence why he dislikes sandwiches).
So, if I give Dave a cup of sweet milk ice cream with bits of crunchy dark chocolate on top as a gift, he will love it.
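A toy sketch of the two kinds of models in Python (all traits and weights here are invented for illustration):

```python
# "Non-meta" model: a plain lookup table of directly observed preferences.
dave_observed = {
    "pbj on thin crackers": True,
    "pbj sandwich": False,
    "butterfinger candy": True,
}

# Abstract model: score a new food by the traits Dave seems to care about,
# inferred from the observations above (weights are made up).
trait_weights = {"sweet": 1.0, "fatty": 1.0, "smooth": 0.5, "bitter": 0.3,
                 "crunchy": 1.0, "starchy": -1.5}

def dave_likes(food_traits):
    # Predict "likes" if the weighted trait score clears a threshold.
    return sum(trait_weights.get(t, 0.0) for t in food_traits) > 1.0

# Predicting a food never observed before:
ice_cream_with_dark_chocolate = {"sweet", "fatty", "smooth", "crunchy", "bitter"}
print(dave_likes(ice_cream_with_dark_chocolate))  # True, under these made-up weights
```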
Now, I’m not saying this meta-model is a good one (and Dave is imaginary, so we’ll never know). But my point is, it seems highly useful for us to have very good meta-models of other people, since that’s how we can predict their actions in extreme situations, surprise them, impress them, make them laugh… etc
On the other hand, we don’t need to construct meta-models of ourselves, because we can just query our “high level generators of action” directly, we can think “Does a cup of milk ice cream with crunchy dark chocolate on top sound tasty ?” and our high level generators of action will strive to give us an estimate which will usually seem “good enough to us”.
So in some way, it’s easier for us to get meta models of other people, out of simple necessity and we might have better meta models of other people than we have of our own self… not because we couldn’t construct a better one, but because there’s no need for it. Or at least, based on the fallacy of knowing your own mind, there’s no need for it.
I’d agree that this is useful to think on, but I tend to use “meta model” to mean “a model of how to build and apply models across distinct people”, and your example of abstracting Dave’s preferences is just another model for him, not all that meta.
I might suggest you call it an “abstract model” or an “explainable model”. In fact, if they make the same predictions, they’re equally powerful, but one is more compressible and easier to transmit (and examine in your head).
Hmh, I actually did not think of that one all-important bit. Yeap, what I described as a “meta model for Dave’s mind” is indeed a “meta model for human minds” or at least a “meta model for American minds” in which I plugged in some Dave-specific observations.
I’ll have to re-work this at some point with this in mind, unless there’s already something much better on the subject out there.
But again, I'll excuse this with having been so tired when I wrote this that I didn't even remember I did until your comment reminded me about it.
This shortform is a bit of a question/suggestion for anyone that might happen to read it.
It seems to me that public discussion has the obvious disadvantage of the added "signaling" one is doing when speaking to an audience.
I wouldn't accuse the vast majority of LW users whose articles I've read or interacted with of this; as in, I think most people go to great lengths not to aim their arguments towards social signaling. But on the other hand, social signaling is so ingrained in the brain that it's almost impossible **not** to do it. I have a high prior on the idea that even when thinking to yourself, "yourself" is partially the closest interpretation you have of a section of the outside world you are explaining your actions/ideas to.
However, it seems that there’s a lot of things that can reduce your tendency to socially signal, the 4 main ones I’ve observed are:
MDMA
Loads of ethanol
Anonymity
Privacy (i.e. the one between a small group of people)
The problem with options 1 and 2 is that they are poisonous with frequent exposure, plus the fact that ethanol makes me think about sex and politics, and MDMA makes me think about how I could ever enjoy sex and politics more than I currently enjoy the myriad of tactile sensations I feel when gently caressing this patch of dirt. I assume most people have problems along these same lines with any drug-induced state of open communication.
Anonymity works, but it works in that it showcases just how vile human thoughts are when they are inconsequentially shouting over one another into a void (see 4chan, kiwifarm, reddit front page...etc).
Privacy seems to work best; I've had many interesting discussions with friends that I could hardly have replicated on the internet. However, I doubt that I'm alone in not having a friend who is knowledgeable/opinionated/interested enough in any subject I'd fancy discussing.
So I'd argue it might be worthwhile to try something like an internet discussion-topic forum with an entry barrier, where people can get paired up to discuss two different sides of a topic (with certain restrictions, e.g. no discussing politics, so that it doesn't automatically turn into a cesspool no matter what).
The question would be what the entry barrier should be. I.e. if LW opened such a forum, and the entry barrier were just "you must type the url into your search bar", it might work for a bit, but it would have the potential of degenerating pretty fast (see anonymity issue).
I could see several solutions to this issue, which one could mix and match, each with their own specific downsides:
Use some sort of internet-points that denote someone's positive involvement in the community as the entry barrier (e.g. karma on something like LW or reddit)
Use a significant but not outrageous amount of money (e.g. $100), which is held in escrow by a moderator or an algorithm. The money is awarded to the other person if they discuss the topic with you at some length and provide satisfactory arguments, lost into the void (e.g. donated to an EA-picked charity) if this is arguably not the case, or refunded if your counterpart was obviously discussing in bad faith or lacking relevant knowledge (a rough sketch of this escrow flow follows after this list).
Use some sort of real-life identification, which is not public to anyone but the database and the people you are discussing with, but is used as verification and as a "threat" that vile conduct could be punished by the moderators making said identity public.
Use some sort of real-life credentials (e.g. PhD, proof of compensation received to work in a certain field, endorsements from members of a field almost everyone would consider respectable, history of work published in relevant journals and/or citation count… etc). This would lend itself well to segmenting the discussion-search forum into different fields of interest.
Have the two parties meet IRL, or send physical letters, or some other thing which has a high cost of entry because the means of communication is inefficient and expensive.
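To make the escrow option concrete, here is a rough sketch (Python; the outcome categories, names and amounts are invented for illustration, not an existing API):

```python
from enum import Enum, auto

class Outcome(Enum):
    SATISFACTORY = auto()    # counterpart argued at length, in good faith
    UNSATISFACTORY = auto()  # you failed to hold up your end of the discussion
    BAD_FAITH = auto()       # counterpart trolled or lacked relevant knowledge

def settle_escrow(deposit: float, outcome: Outcome) -> dict:
    """Decide where the deposited entry fee goes after the discussion.

    Returns a mapping of recipient -> amount; a moderator (or an algorithm)
    is assumed to have judged the outcome.
    """
    if outcome is Outcome.SATISFACTORY:
        return {"counterpart": deposit}       # reward the other person
    if outcome is Outcome.UNSATISFACTORY:
        return {"ea_charity": deposit}        # money "lost into the void"
    return {"depositor_refund": deposit}      # bad-faith counterpart: refund

# Example: a $100 deposit after a discussion judged satisfactory.
print(settle_escrow(100.0, Outcome.SATISFACTORY))  # {'counterpart': 100.0}
```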
I'm curious if something similar to this already exists; as in, something where one can find a quality of discussion similar to a place like LW, or a private chat with a research colleague, not something like reddit CMV.
Alternatively I wonder what the potential templates and downsides for this kind of environment might be and why one doesn’t exist yet.
This line of thinking links up (in my mind) with something slightly different that I've thought about before, which is how to create a community where people aren't afraid to be themselves, risk saying wrong things, and are willing to listen to others. I think there is some convergence with the signaling concern, because much of signaling can come from trying to present a view to others that signals something that might not quite be true or authentic, or that, even if true, emphasizes certain things differently than the poster naturally would. That creates a kind of mask or facade where the focus is on signaling well rather than being oneself, risking saying wrong things, etc.
I think the solution is generally what I would call “safety” or “psychological safety”: people often feel unsafe in a wide variety of situations, don’t always realize they have deep, hidden fear powering their actions, and don’t know how to ask for more safety without risking giving up the little bit they are already creating for themselves by signaling, being defensive, and otherwise not being themselves to protect themselves from threats real or merely perceived.
I've seen the amazing benefits of creating safety in organizations and the kind of collaboration and happiness it can enable, but I'm less sure about how to do it in a large, online community. I like this kind of exploration of potential mechanisms for, as I think of it, creating enough safety to enable doing the things we really care about (being heard, collaborating, feeling happy to talk to others about our ideas, etc.).
I’m wondering if, in a competitive system with intelligent agents, regression to the mean is to be expected when one accumulates enough power.
Thinking about the business and investment strategies that a lot of rich people advocate, they seem kinda silly to me, in that they match the mental model of the economy that someone who never really bothered studying the market would have. It's stuff like "just invest in safe index funds", and other such strategies that will never get you rich (nowadays) if you start out poor. Indeed, you'd find more creativity, and have better luck getting rich, in the theories of a random shitposter on /r/wallstreetbets.
Or take a zero-sum~ish system, something like dating. I hear the wildest of ideas, models and plans, from unusual wardrobe choices to texting strategies, from people who are… less than successful at attracting the other gender. But when you talk to people who never in their life had an issue getting laid (i.e. pretty & charismatic people), they seem not to have spared a thought about how to be attractive to the other gender or how to "pick up" someone or anything along those lines. They operate on a very "standard" model that's basically "don't worry too much about it, you'll end up finding the right person".
I think you can find many such examples; to put a post-structuralist spin on it: "People with power in a given system will have a very standard view of said system".
In a lot of systems the more power you hold, the easier it is to make the system work for you. The easier it is to make the system work for you, the less sophisticated or counter-intuitive your model of the system has to be, since you’re not looking for any “exploits”, you can just let things take their course and you will be well-positioned barring any very unusual events.
Whereas the less power you have, the more complex and unique your model of the system will have to be, since you are actively looking for said exploit to gain power in the system.
But now that I'm writing this out, I'm curious as to whether or not this observation is too "obvious" to be interesting or has a glaring flaw in it.
regression to the mean is going to happen in any system with a large random (or anti-inductive and unpredictable) component. That doesn’t seem to be what you’re talking about. You seem to be talking about variance and declining marginal utility (or rather, exception cases where marginal utility is actually increasing).
Nobody got rich in retail investing. A lot of people stayed comfortable for longer than they otherwise would have, but to paraphrase the old saying, the best way to make a million in the stock market is to start with 50 million. Likewise for other investment/return decisions: the advice given by the successful applies mostly to the successful—they’re focused on preserving and utilizing the leverage they already have, not on getting it in the first place.
If you’re starting smaller (and still going for large results), you probably have to take more risks. In investing, this means accepting that you’ll lose it all most of the time, but you’ve got a small chance at the million. Nobody will give you that advice, because most people don’t actually have that utility curve, and because advisors can’t make much money on you. For dating, if you’re conventionally attractive (including social standing in your target circle), you shouldn’t risk much or be too outrageous. If you’re trying to attract attention on unusual dimensions, you’ll need to take a lot of risks, and suffer a lot of rejection.
Basically, if you’re happy with an average outcome, look for safe, low variance behaviors, where predictability is more important than return. If you want an outlier, look for high-variance choices, where you might lose badly, but might win a lot as well. Note that the average outcome is probably WORSE with higher-variance activities, but the best outcomes are better.
An example of this is in games like backgammon—it’s pretty close to binary in outcome (there’s a large gap between losing by a little and being gammoned), so you will have a different risk profile for individual moves when you’re ahead than when you’re behind. If you’re behind, you’ll take the risk of bumping your opponent even if it leaves you exposed. If you’re ahead, you’ll be more conservative and safe. You won’t hurt your opponent as much, but you also won’t have as much gap between a normal and a lucky roll by your opponent. This is because when you’re behind, it doesn’t matter how much you lose by, so there’s no harm in the risk. When you’re ahead, you don’t get anything by improving the win gap, and you lose a LOT if your opponent gets lucky and pulls out the win.
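A toy simulation (Python; the payoff numbers are made up purely to illustrate the "average outcome worse, best outcome better" point above, not calibrated to any real market or game):

```python
import random

random.seed(0)

def low_variance() -> float:
    """Safe strategy: small, predictable gains."""
    return random.gauss(mu=1.0, sigma=0.2)

def high_variance() -> float:
    """Risky strategy: usually lose a little, occasionally win big."""
    return 20.0 if random.random() < 0.03 else -0.5

N = 100_000
low = [low_variance() for _ in range(N)]
high = [high_variance() for _ in range(N)]

print(f"low  variance: mean={sum(low)/N:.2f}, best={max(low):.2f}")
print(f"high variance: mean={sum(high)/N:.2f}, best={max(high):.2f}")
# Typical output: the high-variance mean is worse (~0.12 vs ~1.00),
# but its best single outcome (20.0) dwarfs the low-variance best (~1.9).
```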
I’ve been thinking a lot about replacing statistics with machine learning and how one could go about that. I previously tried arguing that the “roots” of a lot of classical statistical approaches are flawed, i.e. they make too many assumptions about the world and thus lead to faulty conclusions and overly complex models with no real insight.
I kind of abandoned that avenue once I realized people back in the late 60s and early 70s were making that point and proposing what are now considered machine learning techniques as a replacement.
So instead I've decided to focus any further anger at bad research, and at people using nonsensical constructs like p-values, on trying to popularize better approaches based on predictive modeling.
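As a minimal sketch of what "predictive modeling instead of p-values" can mean in practice (Python with scipy/scikit-learn; the synthetic data and the specific model are my own illustration, not a claim about the exact approach intended above):

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic data: a feature with a tiny but real effect on a binary outcome.
n = 20_000
x = rng.normal(size=n)
y = (rng.random(n) < 0.5 + 0.03 * np.tanh(x)).astype(int)

# Classical route: test whether x differs between the two outcome groups.
t_stat, p_value = stats.ttest_ind(x[y == 1], x[y == 0])
print(f"p-value: {p_value:.2e}")  # typically "highly significant" at this n

# Predictive route: how well does x actually predict y on held-out data?
acc = cross_val_score(LogisticRegression(), x.reshape(-1, 1), y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")  # barely above chance (~0.51)
```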
I find it interesting what kind of beliefs one needs to question and in which ways in order to get people angry/upset/touchy.
Or, to put it in more popular terms, what kind of arguments make you seem like a smart-ass when arguing with someone.
For example, reading Eliezer Yudkowsky's Rationality: From AI to Zombies, I found myself generally speaking liking the writing style, and to a large extent the book was just reinforcing the biases I already had. Other than some of the poorly thought out metaphysics on which he bases his ethics arguments… I honestly can't think of a single thing from that book I disagree with. Same goes for Inadequate Equilibria.
Yet, I can remember a certain feeling popping up in my head fairly often when reading it, one that can be best described in an image: https://i.kym-cdn.com/entries/icons/facebook/000/021/665/DpQ9YJl.jpg
***
One seeming pattern for this is something like:
Arguing about a specific belief
Going a level down and challenging a pillar of the opponent’s belief that was not being considered as part of the discussion.
E.g.: "Arguing about whether or not climate change is a threat, going one level down and arguing that there's not enough proof climate change is happening to begin with"
You can make this pattern even more annoying by doing something like:
Arguing about a specific belief
Going a level down and challenging a pillar of the opponent’s belief that was not being considered as part of the discussion.
Not entertaining an opposite argument about one of your own pillars being shaky.
E.g.: After the previous climate change argument, not entertaining the idea that "Maybe acting upon climate change as if it were real and as if it were a threat would actually result in positive consequences even if those two things were untrue"
You can make this pattern even more annoying by doing something like:
Arguing about a specific belief
Going a level down and challenging a pillar of the opponent’s belief that was not being considered as part of the discussion.
Doing so with some evidence that the other party is unaware of or cannot understand
E.g.: After the previous climate change argument, back up your point about climate change not being real by citing various studies that would take hours to fact check and might be out of reach knowledge-wise for either of you.
***
I think there’s other things that come into account.
For example, there are some specific fields which are considered more sacrosanct than others; trying to argue against a standard position in such a field as part of your argument seems to put you into the "smartass" camp much more easily.
For example, arguing against commonly held religious or medical knowledge, seems to be almost impossible, unless you are taking an already-approved side of the debate.
E.g. you can argue ibuprofen against paracetamol as the go-to for the common cold, since there are authoritative claims for each; you can't argue for a third, lesser-backed NSAID, or for using corticosteroids or no treatment instead of NSAIDs.
Other fields such as ethics or physics or computer science seem to be fair game and nobody really minds people trying to argue for an unsanctioned viewpoint.
***
There’s obviously the idea of politics being overall bad, and the more politicized a certain subject is the less you can change people’s minds about it.
But to some extent I don’t feel like politics really comes into play.
It seems that people are fairly open to having their minds changed about economic policy but not about identity policy… no matter which side of the spectrum you are on. Which seems counterintuitive, since the issue of "should countries have open borders and free healthcare" seems like one much more deeply embedded in existing political agendas and of much more import than "what gender should transgender people be counted in when participating in the Olympics".
***
One interesting thing that I observed: I've personally been able to annoy a lot of people when talking with them online. However, IRL, in the last 4 years or so (since I actually began explicitly learning how to communicate), I can't think of a single person that I've offended.
Even though I'm more verbose when I talk. Even though the ideas I talk about over coffee are usually much more niche and questionable in their veracity than the ones I write about online.
I wonder if there’s some sort of “magic oratory skill” I’ve come closer to attaining IRL that either can’t be attained on the internet or is very different… granted, it’s more likely it’s the inherent bias of the people I’m usually discussing with.
I wonder why people don’t protect themselves from memes more. Just to be clear, I mean meme in the broad memetic theory of spreading ideas/thoughts sense.
I think there's almost an intuitive understanding, or at least one existed in the environment I was brought up in, that some ideas are virulent and useless. I think that from this it's rather easy to conclude that those ideas are harmful, since you only have space for so many ideas, so holding useless ideas is harmful in the sense that it eats away at a valuable resource (your mind).
I think modern viral ideas also tend more and more towards the toxic side, toxic in the very literal sense of "designed to invoke a rise in cortisol and/or dopamine that makes them more engaging yet is arguably provably harmful to the human body". Though I think this is a point I don't trust that much, speculation at best.
It's rather hard to figure out which memes one should protect oneself from under these conditions; some good heuristics I've come up with are:
1. Memes that are new and seem to be embedded in the minds of many people, yet don’t seem to increase their performance on any metric you care about. (e.g. wealth, lifespan, happiness)
2. Memes that are old and seem to be embedded in the minds of many people, yet seem to decrease their performance on any metric you care about.
3. Memes that are being recommended to you in an automated fashion by a capable algorithm you don’t understand fully.
I think if a meme ticks one of these boxes, it should be taken under serious consideration as harmful. Granted, there are memes that tick all 3 (e.g. wearing a warm coat during winter), but those are so "common" and already so deeply embedded in our minds that it's pointless to bring them into the discussion.
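As a rough sketch (Python; encoding each heuristic's verdict as pass/fail/depends is my own simplification of the rules above):

```python
from typing import List, Optional

def verdict(passes: List[Optional[bool]]) -> str:
    """passes[i] is True if the meme clears heuristic i+1, False if it
    triggers it, and None if it depends on which people you look at."""
    if any(p is False for p in passes):
        return "Avoid"
    if any(p is None for p in passes):
        return "Depends"
    return "Ok"

# Intermittent fasting: clears all three heuristics.
print(verdict([True, True, True]))    # "Ok"
# Crypto in 2017/2018: heuristic 1 depends on who you look at.
print(verdict([None, True, True]))    # "Depends"
# Instagram: arguably fails 1, fails 3.
print(verdict([False, True, False]))  # "Avoid"
```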
A few examples I can think of.
Crypot currency in 2017&2018, passes 2 and 3, passes or fails 1 depending on the people you are looking at, ⇒ Depends
All ads and recommendations on pop websites (e.g. reddit, medium, youtube). Obviously fail at 3, sometimes fail at 1 if the recommendation is "something that went viral". ⇒ Avoid
Extremist “Western” Religions, passes 1 and 3. Usually fails at 2. ⇒ Avoid
Contemplative practices, passes 2 and 3, fails 1 depending on the people you are looking at in the case of modern practices, doesn’t fail 1 in the case of traditional practices. ⇒ Depends
Intermittent fasting, passes 2 and 3, very likely passes 1 ⇒ Ok
Foucault, passes 3, arguably passes 1/2, but it depends on where you draw the "old" line ⇒ Depends
Instagram, passes 2, fails 3 and arguably fails 1 ⇒ Avoid
New yet popular indie movies and games, pass 2 and 3, arguably fails at 1 ⇒ Avoid (pretty bad conclusion I’d say)
Celebrity worshiping, passes 2, kinda fails 3, certainly fails 1 ⇒ Avoid
Complex Analysis, passes 3 and 1, very easy to argue it passes 2 ⇒ Ok
Granted, I'm sure there are examples where these rules of thumb fail miserably; my brain is probably subconsciously coming up with ones where they work. Even more so, I think the heuristics here are kind of obvious, but they are also pretty abstract and hard to defend if you were to scrutinize them properly.
Still, I can't help but wonder whether "safety measures" (taken by the individual, not political) against toxic memes shouldn't be a subject that's discussed more. I feel like it could bring many benefits and it's such a low hanging fruit.
Then again, protecting ourselves against the memes we consider toxic might be something we all inherently do already and something we do pretty well. So my confusion here is mainly about how some people end up *not* considering certain memes to be toxic, rather than how they are unable to defend themselves from them.
I'm not sure what you mean by extremist Western religions. Mormonism, which might be one of the more extreme Western religions, correlates with longer life-span. In many cases it's very hard to estimate outcomes on the metrics I care about.
Knowing things is hard.
When it comes to things like wearing a coat it’s very hard to know because the control group is quite small. The counter-examples I have in mind from people I personally know is one Wim Hof guy who shovels snow in a T-shirt. The other example is Julian Assange in his earlier years. There’s no example that comes to my mind of someone who went around in winter without a coat and who seems to be ineffective.
The global warming meme isn't spreading far enough… or is the denialism meme spreading too much?
I mean, I'd argue the pro/against global warming meme isn't worth holding either way, if you already hold the correct "defer to overwhelming expert consensus in matters where the possible upside seems gigantic and the possible downside irrelevant" (i.e. switching from coal & oil based energy to nuclear, hydro, solar, geothermal and wind… which doesn't bring severe downsides but has the obvious upside of possibly preventing global warming, plus energy sources that are more reliable long-term, don't pollute their surroundings and have better yield per resources spent… not to mention being usable in a more decentralized way and usable in space).
So yeah, I’d argue both the global warming and the against global warming memes are at least pointless, since you are having the wrong f*** debate if you hold them. The debate should center around:
Upsides and Downsides of renewable energy (ignoring the potential effect on global warming)
How to model the function of faith in expert consensus and what parameters should go into it.
#1 and #2 can both be combined into the same prescription: don’t learn new things if their knowledge doesn’t improve your life satisfaction in some way. This is basically a tautology, and if you’re a rationalist it’s restating the habit of making beliefs pay rent in anticipated experiences, since that’s their only utility.
#3 I think is hitting on something, and I think it’s that we should be broadly skeptical of arguments put forth by people or organizations genuinely capable of manipulating us.
I wouldn't say #1 and #2 state the same thing, since #1 basically says "If a meme is new, look for proof of benefits or lack thereof", while #2 says "If a meme is old, look for proof of harm or lack thereof".
I could combine them into "The newer a widespread meme is, the more obvious its benefits should be", but I don't think your summary does justice to those two statements.
Fad, parasite, contagion. (Cancer, mosquito, evil plan.)
Is this crypto currency, or a shorthand for pot crypto currency?
The issue there seems to be continuity—a one shot probably isn’t bad, though the fixes are mostly the same. (Though that algorithm is probably based around something like engagement, and other circumstances might require more care.)
What are these? (And should info about them be in spoilers?)
This section could have made a nice table, though it might have been harder to read that way.
I wonder if Social Justice missionaries will become a thing.
It seems to me that the SJ value-set/religion is growing more powerful and widely accepted, and the more this happens the more you get outliers: either people trying to pander to it in order to unscrupulously raise their social standing at the cost of others, or "true believers" who take its dictums to the extreme.
To some extent, it must be that the heavily religious Europe of the 13th to 18th centuries suffered from these same issues. It also seems plausible that outliers might have been swayed to become missionaries, since "spreading the values" can be very profitable for those with few scruples and very appealing to the extreme moralists who want to value-maximize as much as possible.
It seems like the kind of thing that could happen without centralized intervention; it's almost a natural conclusion that can be reached once you get enough outliers and enough people supporting the religion that to them it seems "obvious" it must be spread.
People want the outliers to leave (but can’t make this an open preference, since they couldn’t defend it within the dogma of the religion) and the incentives needed for it to happen are very easy to produce. So missionaries start being socially accepted and praised.
Granted, I can’t see any ‘good’ examples of this happening yet, so maybe I’m just speculating on an imaginary foundation.
The missionaries will not travel in geography-space, but in subculture-space.
For a mostly online movement, the important distances are not the thousands of miles, but debating on different websites, having different conferences, etc. (Well, the conferences have the geographical aspect, too.)
That’s actually a good point.
But what about places that are closed-off from the “global” virtual space (e.g. China, Arabia and possibly various African countries once their dictators get up to speed with technology) ?
Should discomfort be a requirement for important experiences ?
A while ago I was discussing with a friend, lamenting the fact that there doesn't exist some sort of sublingual DMT, with an absorption profile similar to smoking DMT, but without the rancid taste.
(Side note, there are some ways to get sublingual DMT: https://www.dmt-nexus.me/forum/default.aspx?g=posts&t=10240 , but you probably won’t find it for sale at your local drug dealer and effects will differ a lot from smoking. In most experiences I’ve read about I’m not even convinced that the people are experiencing sublingual absorption rather than just slowly swallowing DMT with MAOIs and seeing the effects that way)
My points were something along the lines of:
I wish there was a way to get high on DMT without going through the unpleasant experience of smoking it; I'm pretty sure that experience serves to "prime" your mind to some extent and leads to a worse trip.
My friend’s point was:
We are talking about one of the most reality-shattering experiences ever possible to a human brain that doesn’t involve death or permanent damage, surely having a small cost of entry for that in terms of the unpleasant taste is actually a desirable side-effect.
I kind of ended up agreeing with my friend, and I think most people would find that viewpoint appealing.
But
You could make the same argument for something like knee surgery (or any life-changing surgery, which is most of them).
You are electing to do something that will alter your life forever and will result in you experiencing severe side-effects for years to come… but the step between "decide to do it" and "bear major consequences" has zero discomfort associated with it.
That's not to say knee surgery is bad; much like a DMT trip, I have a strong prior that it's good for people (well, in this case assuming a doctor recommends you do it).
But I do find it a bit strange that this is the case with most surgery, even if it’s life altering, when I think of it in light of the DMT example.
But
If you've visited South Korea and seen the progressive nose mutilation going on in their society (I'm pretty sure this has a fancier name… something like the terms used in the study of super-stimuli, seagulls sitting on gigantic painted balls kinda thing), I'm pretty sure the surgery example can become blurrier.
As in, I think it's pretty easy to argue that people are doing a lot of unnecessary plastic surgery, and I'm pretty sure some cost of entry (e.g. you must feel mild discomfort for 3 hours to get this done… equivalent to, say, getting a tattoo on your arm) would reduce that number a lot, and intuitively that seems like a good thing.
It's not like you could do that though; as in, in practice you can't really do "anesthesia with a controlled pain level", it's either zero or operating within a huge error range (see people's subjective reports of pain after dental anesthesia with similar quantities of lidocaine).
I'm wondering if the idea of investing in "good" companies makes sense from a purely self-centered perspective.
Assume there are two types of companies: A and B.
Assume that you think a future in which the vision of "A" comes true is a good future, and a future in which the vision of "B" comes true is a bad future.
You can think of A as being whatever makes you happy; some examples might be: longevity, symbolic AI, market healthcare, sustainable energy, cheap housing… things that you are very certain you want in the future and that you are unlikely to stop wanting (again, these are examples, I am NOT saying I think everyone or even a majority of people would agree with them; replace them with whatever company you think is doing what you want to see more of).
You can think of B as being neutral or bad, some examples might be: MIA companies, state-backed rent-seeking companies (e.g. student debt, US health insurance), companies exploiting resources which will become scarce in the long run… etc.
It seems intuitive that if you can find company A1 and company B1 with similar indicators as to their market performance, you would get better yield investing in A1 as opposed to B1. Since in the future scenario where B1 is a good long term investment, the future looks kinda bleak anyway and the money might not matter so much. In the future scenario where A1 is a good long term investment, the future is nice and has <insert whatever you like>, so you have plenty of nice things to do with said money.
Which would seem to give a clear edge to the idea of investing in companies doing things which you consider to be “good”, assuming they are indistinguishable from companies doing things which you consider to be “bad” in terms of relevant financial metrics. Since you’re basically risking the loss of money you could have in a hypothetical future you wouldn’t want to live in anyway, and you are betting said money to maximize your utility in the future you want to live in.
Then again, considering that a lot of “good” companies on many metrics are newer and thus more risky and possibly overpriced I’m not sure how easy this heuristic could be applied with success in the real world.
Actually, this is backwards; by investing in companies that are worth more in worlds you like and worth less in worlds you don’t, you’re increasing variance, but variance is bad (when investing at scale, you generally pay money to reduce variance and are paid money to accept variance).
If you treat the "world you dislike" as one where you can still get about the same bang for your buck, yes.
But I think this wouldn’t be the case with a lot of good/bad visions of the future pairs.
Example:
BELIEF: You believe healthcare will advance past treating symptoms and move into epigenetically correcting the mechanisms that induce tissue degeneration.
a) You invest in this vision, it doesn’t come to pass. You die poor~ish and in horrible suffering at 70.
b) You invest in a company that would make money on the downside of this vision (e.g. palliative care focused company). The vision doesn’t come to pass. You die rich but still in less horrible but more prolonged suffering at 76 (since you can afford more vacations, better food and better doctors).
c) You invest in this vision, and it does come to pass. You have the money to afford the new treatments as soon as they are out on the market; now at 70 you regain most of the functionality you had at 20 and can expect another 30-40 years of healthy life, and you hope that future developments will extend this.
d) You invest in a company that would make money on the downside of this vision, it does come to pass. You die poor~ish and in horrible suffering at 80 (because you couldn’t afford the best treatment), with the added spite for the fact that other people get to live for much longer.
---
To put it more simply, money has more utility-buying power in the "good" world than in the "bad" world, assuming the "good" is created by the market (and thus purchasable).
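A toy expected-utility calculation (Python; the probabilities, payoffs and utility-per-dollar figures are invented just to illustrate the state-dependent-utility point above):

```python
# Two equally likely worlds, and the utility each dollar buys you in them.
p_good = 0.5
utility_per_dollar = {"good": 1.0, "bad": 0.2}  # money buys less in a bleak world

# Hypothetical payoffs of each bet, per world (same expected dollars: 50).
payoff = {
    "invest_in_A1": {"good": 100, "bad": 0},  # pays off only if the good world arrives
    "invest_in_B1": {"good": 0, "bad": 100},  # pays off only in the bad world
}

for bet, pay in payoff.items():
    expected_utility = (p_good * pay["good"] * utility_per_dollar["good"]
                        + (1 - p_good) * pay["bad"] * utility_per_dollar["bad"])
    print(f"{bet}: expected dollars = 50, expected utility = {expected_utility}")
# invest_in_A1: expected utility = 50.0
# invest_in_B1: expected utility = 10.0
```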
I’m wondering if wealth-re-distribution in the “best case” scenario would have any positive effects.
So, assume a world in which the top-x wealthy have wealth that resides purely in the form of gold. Not stocks, or bonds, or whatever, so taking it away from them won't de-stabilize the economy in the same sense that taking away stock from a CEO will break down the incentive chains that keep a company running properly.
Also, assume that everyone likes gold so much that it will keep its inherent value once it's re-distributed.
Would this have any effect besides a sudden~ish price inflation for stuff like food, housing, cars… etc, since everyone can now buy more/better, thus demand goes up at once with no modification in production?
As in, would this wealth redistribution be able to re-direct more resources into relevant industries to keep them more efficient ?
It seems to me that a lot of common-usage products are already heavily optimized in terms of production compared to luxury products, since the market is much broader.
Where improvements can be made, the issue usually seems to be regulatory/ethical/consensus-related (e.g. zoning laws for housing, experimental ethics and drug-trial statistical power regulations for medical research).
So, for example, custom-cat production is very expensive not because the materials are expensive, but because a custom yacht requires loads of specialized artisan work. However, if all those artisans were to go out of business and be forced to get jobs at a mass-production facility… would that bring any benefits in terms of how the cars are designed, to make production cheaper? Even assuming 10 years pass and they are re-trained as highly-skilled mass-car producers, would that help? Or are there physical limitations (e.g. cost of materials, time needed to assemble an engine) or regulatory limitations (e.g. safety testing) that keep the price of the cheapest possible car one could make above a certain threshold (seems to be ~€4,000)?
Same question goes for all mass-produced items.
Intuition pump: think about this in the context of consumption curves in one's own life, i.e. is any utility gained by moving consumption forward or backward in time between selves?
Presumably a typo? Though I bet there is something like designer cats.
Yes, I meant to say yacht, but honestly I think the typo might be better.
I’m informed there is such a thing as designer cats, so the example still holds in principle.
I’m not sure the price of food, etc. would inflate—the price of gold might drop instead.
That’s a very weak man form of redistribution. If you redistribute into services, such as public health and education, you avoid the inflationary problem, and since both are labour intensive, you can create jobs.
That is actually a good point, I was focused too much on material goods and not thinking of service jobs.
Indeed, even if you take the real form of redistribution, which is closer to evening out social status than to redistributing any form of real wealth, it would probably incentivize people to go into arguably useful service jobs more. (E.g. there are probably a lot of people who would be good medical researchers but become traders or "tech entrepreneurs" because in our current world it yields much more social status, even if the difference in actual material goods is not so great; wealth itself allows for signaling high status.)
For some reason, despite reading socialist philosophers/activists which make these arguments… I’m just unable to stick them anywhere in my brain in such a way that I can remember them next time I even think about trying to argue a strong-man representation of redistribution.
Thanks for pointing this out.
Retracted, as it made allusions to subjects with too much emotional charge behind them.
Taking your framing as given: It’s called “lying” or “name calling”.
I disagree with the “you” in this sentence. (It may work as a question. )
Self reference in these cases is an opportunity to make new categories, carefully. For comparison: Is killing people wrong? If yes (in all cases), then it would be wrong both for people to try to kill you, and for you to kill them in self defense. ‘In all cases’ may be incorrect—this restriction can make the answer to a question neither yea nor nay.
As in, with a question mark at the end? That's what I originally intended, I believe, but I ended up thinking the phrasing already conveys the "question-ness" of it.
Retracted, as I can’t delete it and the content was *really* bad.