norswap
Truly inspiring! Aren’t you afraid of falling off the bandwagon by implementing so many changes in your life simultaneously? I’m not doubting your ability or plan, just speaking from personal experience: trying to change too many things at once has been too ambitious for my time budget in the past.
Might be, but you ought to provide an explanation for the mechanics of it instead of a bare admonition.
The number of validators is irrelevant (well, you want it to be large enough that a few players can’t collude to control a majority of the validating power). What’s important is their scale, i.e. how much one needs to stake in order to acquire a majority of the validating power and take over the network.
It is technically possible. “Materially”, Bitcoin is nothing but a record of transactions and the wallet balances it implies. Anybody can come along and fork Bitcoin into a new chain which preserves the transactions but changes the implementation and/or the features (as happened with Bitcoin Cash).
The real question is what people believe to be valuable. In this case, what they perceive to be the “real Bitcoin” (currency is a social contract, yadda yadda).
A transition from PoW to PoS must thus find broad community consensus, in particular leadership consensus (but broader consensus is also needed so as to prevent the value of Bitcoin from collapsing).
So who controls the decentralized “uncensorable” currency? Mostly, the miners. The Bitcoin Foundation also has power, but as I understand it, the two are (or at least were) fairly enmeshed.
Of course, miners don’t want to transition to PoS, as they would lose their golden goose.
Therefore, it’s likely that a transition away from PoW would only happen if it were necessary to fend off an existential threat to Bitcoin (which would make mining equally worthless). That could be regulation on PoW, or simply the Ethereum narrative taking over. The point is that the writing has to be on the wall that if Bitcoin doesn’t transition, then it is headed to zero, even if on a longer timeframe.

Being much more speculative, I’m going to venture and say that a long decline of Bitcoin is unlikely. At best, there will be an ambiguous uncertainty period where the Bitcoin faith machine will churn faster than ever, but some proposals for a PoS transition will start to emerge and be taken seriously (currently, they’re quite unpopular). The ambiguous period will either end with PoS adoption, or with a rapid collapse.
A transition to PoS would probably entail a long phase-out of mining, probably by means of decreased rewards, so that established players can amortize their hardware investments.

It’s also possible that a PoS Bitcoin fork is created (just like Bitcoin Cash) and ends up overtaking the original as the “real Bitcoin” in people’s minds. I wouldn’t be surprised if such a fork already exists, but timing & support are crucial. An 8-year-old fork in 5 years is not going to cut it — it needs to arrive on the scene with as much publicity as possible.
I’m strongly endorsing this, having done the same thing you did (spent two evenings looking at this stuff) and having come up with pretty much exactly the same picture, and the same set of questions/uncertainties.
Something I found very interesting is the fact that Ethereum is poised to move from proof-of-work (miners who solve a cryptographically hard problem to verify transactions, minting new coins in the process) to proof-of-stake (where one “stakes” coins for a chance to verify transactions, earning interest in the process — I’m not entirely comfortable with the ideas yet, but here’s an article on the topic by Ethereum’s creator: https://vitalik.ca/general/2021/04/07/sharding.html).
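For intuition, here’s a toy Python sketch of the “cryptographically hard problem” part of proof-of-work (purely illustrative: real mining hashes actual block headers against a numeric difficulty target, not a zero-prefix count):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(block_data + nonce)
    starts with `difficulty` hex zeros. Finding the nonce is expensive;
    checking it takes a single hash."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("some batch of transactions")
print(nonce)  # anyone can verify the work by recomputing one hash
```

Proof-of-stake does away with this brute-force search entirely: validators are instead selected (roughly) in proportion to the coins they stake, which is where the energy savings come from.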
If Ethereum successfully transitions to proof-of-stake, this should theoretically greatly lower the transaction costs and make the whole ecosystem more viable.
I can’t judge because I didn’t follow the course, but I’d like to share my a priori reaction:
One should not have to modify a working program. One should be able to add to it to implement new functionality or to adjust old functions for new requirements. We call this additive programming.
That does sound like a terrible idea. It’s often used to justify horrendous abstractions and over-architecting, for instance. Now, this can make sense when third parties depend on the code. But if you can change the code, it’s often better to do so.
The other school of thought I’m aware of on this is to use programming paradigms where you can forcibly add to existing code (for instance, aspect-oriented programming). This hasn’t generally been very successful, and it’s not incredibly difficult to understand why: the existing code makes some assumptions, which it does not document (and which are liable to change) - it’s easy for the “injected” code to break these assumptions.
I think I’ve heard of “additive programming” in another context with a slightly different definition. If you can isolate in advance the class of things that can be added (say you have a role-playing game, and you know you might add new character classes and new dungeons, for instance), then making it so that these things can be added by writing code in a single location is “additive”, whereas having to modify things all over the place is not. I think this is an excellent idea, but it is by its very nature restricted to the kinds of changes you can predict.
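To make that second sense concrete, here’s a minimal Python sketch of the RPG example (names and stats made up): a new character class is added by writing code in one place, without touching the existing code that consumes it.

```python
from typing import Callable, Dict

# Registry: the single location where new character classes get added.
CHARACTER_CLASSES: Dict[str, Callable[[], dict]] = {}

def character_class(name: str):
    """Decorator registering a new character class under `name`."""
    def register(factory: Callable[[], dict]):
        CHARACTER_CLASSES[name] = factory
        return factory
    return register

@character_class("warrior")
def make_warrior() -> dict:
    return {"hp": 120, "attack": 15}

# Adding a new class later is purely additive: one new definition,
# no edits to the existing consumers of the registry.
@character_class("mage")
def make_mage() -> dict:
    return {"hp": 70, "attack": 25, "mana": 100}

def spawn(name: str) -> dict:
    # This consumer never needs to change when classes are added.
    return CHARACTER_CLASSES[name]()

print(spawn("mage"))
```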
Anyhow, I’m curious to see how the course holds up to the promise (who knows, maybe it does hold the grail). If you have been following the course and you want to share your perspective, I’ll be grateful.
This explanation misses one major piece of the whole affair: it was not only a short squeeze (mostly, it was at first), but also a gamma squeeze (or gamma trap). It has to do with the hedging of option sales.
Here is a short explanation I wrote for a colleague:

gamma trap: most options are sold by market makers (e.g. investment banks), and they hedge the options they sell by purchasing (or selling) stock in order to be “delta neutral”
so if they sell one call at the money (strike price = current price), the delta is 0.5 (if the stock price increases by 1$, the call price will increase by 0.5$), and they will buy 50 shares to hedge (now if the price increases by 1$, they are up 50$ on the stock, down 50$ on the call, and they still profit by pocketing the premium)
as an option gets in the money (strike price < current stock price), its delta increases, meaning the market maker must increase the number of shares it purchases to continue being delta-neutral
here what happened is that redditors purchased a lot of cheap out-of-the-money call options (low delta), and as the stock price rose and rose, these ended up far in the money, meaning the market makers had to purchase 100 shares for each of these calls, driving the stock price higher, and pushing all calls farther in the money

ah, and gamma = the rate of change of delta with respect to the stock price (so in particular here, the fact that delta increases when an option gets in the money)
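To put rough numbers on this, here is a small Python sketch using the textbook Black-Scholes call delta (all parameters made up, rates and dividends ignored), showing how the hedge grows as the stock moves up through the strike:

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot: float, strike: float, vol: float, t_years: float) -> float:
    """Black-Scholes call delta, with zero rates/dividends for simplicity."""
    d1 = (log(spot / strike) + 0.5 * vol**2 * t_years) / (vol * sqrt(t_years))
    return norm_cdf(d1)

strike, vol, t = 100.0, 0.8, 30 / 365  # a call 30 days out, high volatility
for spot in (80, 100, 120, 160):
    delta = call_delta(spot, strike, vol, t)
    # One contract covers 100 shares, so the hedge is ~delta * 100 shares.
    print(f"spot={spot}: delta={delta:.2f}, hedge ~ {round(delta * 100)} shares per contract")
```

With these made-up numbers the hedge per contract grows from a few dozen shares to nearly 100 as the stock rallies through the strike, and it is precisely those hedge-rebalancing purchases that push the price further up.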
I think a productive way to look at it is to look for absence of evidence, which is evidence of absence.
Much has been said about “the western diet” that is killing tons of people, but in reality, what we really know is that being obese is bad for you, as is having severe nutrient deficiencies. Beyond that, not a whole lot is certain.
Let’s take an example. Studies on meat consumption barely find a significant effect on all-cause mortality. But most often they fail to control for things as basic as pre-existing obesity or caloric intake. And if you step back one second, it’s rather obvious: people who voluntarily reduce their meat consumption (or abstain completely) tend to be quite health-focused. All the people with a junk diet are on the other side. That ought to tip the scales, but even then, the finding is minuscule.
This absence of conclusive evidence really does tell you something: diet is much less impactful (to your health) than people give it credit for. Consider, for contrast, that being partnered adds, on average, years to your life (I suspect this finding might also have a control problem, but the magnitude of the finding is in another league).
Something where diet really has an impact is your day-to-day well-being. I don’t have a solid proof, but it seems to me that if your diet makes you feel like crap, it might not be that great in the long run, and vice-versa (beware deficiencies though, which take a long time to show up).
Unsolicited diet advice:
- Control your calories (track! I guarantee you will be surprised and learn something)
- Eat enough veggies / supplement to avoid deficiencies. You don’t need that much, and you don’t need to hit the RDAs necessarily—they’re incredibly hard to hit with food only, so I supplement to be on the safe side. Also, animal products do actually have a ton of nutrients, compared to what popular wisdom sometimes seems to imply.
- Eat enough protein. It’s really hard to eat too much protein, it’s incredibly good for a ton of things, and it’s generally filling besides.
- Eat enough fiber (the better argument for eating more veggies). Personally, my digestive system is weird and I actually supplement this as well (as psyllium), which makes a huge difference, but I expect this is quite personal.
- Don’t sweat the rest, and enjoy your food!
Imho, these are the 80-20 (or really 98-2) rules of nutrition. I’ve never seen any evidence that any of the intricate fluff makes any difference.
… or at least at the population level. It might be worth experimenting with your own potential intolerances/needs (e.g. my need for a ton of extra fiber). But that’s not something you’ll get as general advice. I do think there must exist books or resources on the subject, however.
I agree, but I think the converse point is also true: employers will attempt to pay you less (below industry standards) if the job comes with any kind of side benefit you might be proud of, or is in a glamorous industry.
I think this is a more important point. The “it’s not just about the money, but also about X, Y, Z” line (freedom, cool working conditions, social impact, …) is almost a platitude. I’ve had multiple employers use it on me, and it really wasn’t warranted at all (the jobs were in niche sectors, but they weren’t glamorous or impactful to society, nor did they have extremely desirable working conditions).
The argument you make has been completely co-opted by HR departments.
The truth is that the job market is a market. It is a function of supply and demand. Jobs that are glamorous (in which I include impactful jobs) face more labor supply compared to other jobs, and so wages can be lower.

These jobs will naturally be picked by people whose utility function values the glamour highly compared to money. So your point remains true (they might be more motivated).
Will your high-glamour, low-wage job seeker be more productive than your low-glamour, high-wage job seeker? Maybe, but I’m not so convinced. I think a key difference is that glamour-preference is relatively inelastic, and there are fewer avenues to gratify it. Whereas it’s easy to jump ship when money is your only object: just find a company that pays more (not that this is a great strategy for money maximization, but it’s easy). Another fact to consider is that there aren’t that many truly high-paying jobs (or at least, high-paying enough that the utility of the high-wage seeker would equal the utility of the glamour-seeker).
I also think a key driver of this “drive” in glamour-seekers is that, beyond low wages, employers tend to press their advantage by wringing more out of the employees. The video game industry is a prime example of this. I’ve also heard that many non-profits have notoriously bad working conditions. Mostly anecdotal evidence, but it adds up.

Finally, I doubt public defenders are more skilled than other lawyers — but they did clearly pick a different trade-off.
Nominating because the idea that rationalists should win (which we can loosely define as “be better at achieving their goals than non-rationalists”) has been under fire in the community (see for instance Scott’s comment on this post).
I think this discusses the concern nicely, and shows what rational self-improvement may look like in practice, re-framing expectations.
While far from the only one, this was an important influence on my own self-improvement journey. It’s certainly something that comes to mind whenever I think of my own self-improvement philosophy, and when it comes to trying to convince others to do the same.
It worked! Also now that my interpretation has been confirmed, I can bask in the warm afterglow of rightness. What a day.
I assumed this was some kind of pastiche of the judgy-overachiever trope, and I was quite entertained under that reading. But now I’ve come to the comments and everyone seems to interpret the post earnestly. I’m confused.
The question pops up regularly. Jacob (Jacobian on here) wrote an answer here: https://putanumonit.com/2019/12/08/rationalist-self-improvement/
One issue I see is the narrow definition of winning used here. I think that people reflective enough to embrace rationality would also be more likely to reconsider the winning criteria, so that they aren’t just “become filthy rich and/or famous”. Consider that maybe the prize is not worth the price. I’d be more interested in people who have become wealthy/established/successful in their fields (without becoming rock stars, I mean: just plain old successful, enough to be free of worries and pursue one’s own direction).
This pays lip service to the Sequences, but I don’t really see a condensed version of what they teach in the proposed materials. Not that I have a proposal for that either, but maybe someone has?
He is not overstating.
To summarize the two main points, which other people already made:
Any wealth tax is on top of inflation. One cannot ban inflation without disastrous economic consequences (or so I’m led to believe).
Invested capital tends to appreciate along with inflation, which makes sense if you think about it: otherwise it would be losing value. Non-inflation-adjusted returns on the stock market are much higher when inflation is high. Also, there is no reason not to stash all your money in the safest possible asset to avoid inflation.
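To put toy numbers on the first point (both rates made up, purely illustrative): for an asset that does not keep up with inflation, the tax and the inflation compound.

```python
# Illustrative only: a wealth tax applies to nominal wealth, so for an asset
# that doesn't keep up with inflation, the two effects stack year after year.
inflation = 0.03   # assumed annual inflation
wealth_tax = 0.02  # assumed annual wealth tax rate

real_value = 1.0
for year in range(10):
    real_value *= (1 - wealth_tax)  # tax levied on nominal wealth
    real_value /= (1 + inflation)   # purchasing-power erosion
print(f"real value kept after 10 years: {real_value:.2f}")  # ~0.61
```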
It very much is a non-quantitative argument—since it’s a matter of principle. The principle being not to let outside perceptions dictate the topic of conversations.
I can think of situations where the principle could be broken, or would be unproductive. If upholding it would make it impossible to have these discussions in the first place (because engaging would mean you get stoned, or something) and hiding is not an option (or still too risky), then it would make sense to move conversations towards the Overton window.
Said otherwise, the quantity I care about is the “ability to have quote rational unquote conversations”, and no amount of outside woke prevalence can change that *as long as they don’t drive enough community members away*. It will be a sad day for freedom and for all of us if that ends up one day being the case.
I assume you are not trying to date homeless women. I also assume that women who try to find a date usually don’t go to homeless shelters.
I’m not entirely certain about that assumption, to be honest.
I have an extremely negative emotional reaction to this.
More seriously. While LW can be construed as “trying to promote something” (i.e. rational thinking), in my opinion it is mostly a place to have rational discussions, using much stronger discursive standards than elsewhere on the internet.
If people decide to judge us on cherry-picked examples, that is sad, but it is much better than having them control what topics are or are not allowed. I am with Ben on this one.
About your friend in particular: if she is turned off of the community because of some posts and the fact that we engage with ideas at the object level instead of yucking out socially awkward ideas, then she might not yet be ready to receive rationality in her heart.
Totally. But it’s cool to want to teach things, and kids actually like to learn when it’s fun. So offer to teach, don’t impose your teaching. Be ready to jettison your plans and go with whatever your daughter finds interesting. This is what seems to work best in practice (from remembered anecdotal evidence).
I would highly suggest that anyone interested in sleep check out the first few episodes of the Huberman Lab podcast, which are focused on this very issue: https://www.youtube.com/watch?v=nm1TxQj9IsQ&list=PLPNW_gerXa4Pc8S2qoUQc5e8Ir97RLuVW&index=28
(Confusingly, the playlist is in reverse order.)
The take-aways are likely to be different for different people (a lot of mechanisms and techniques are covered), but for me they were:
1. cold showers in the morning—those really wake me up and flush away the grogginess that normally persists for a long time
2. step outside and get sunlight in your eyes in the morning, which helps maintain your waking time and/or shift it earlier (I’m naturally prone to go to bed later and later)
3. it’s better to have a consistent sleep duration than to make up for short sleep by sleeping long
The third point of advice in particular was something I’d never heard of and that ran counter to the gist of what I’d heard before. But empirically, it seems to be true—I’m always more tired when I suddenly start sleeping more than I did in the previous days.
Implementing mostly these three points, I got to the point where I could get by on 6 hours of sleep per night with what feels like normal productivity, something that would have been absolutely unattainable previously (I do think 6 hours is too short, and in the long run I want to aim for 7:30 of sleep, i.e. about 8 hours in bed).
I also benefitted a lot from using earplugs and a face mask, something I started doing a month or two before applying the advice from the podcast, and which might have contributed to the good results.