You can add the Black Death to the list. A popular theory is that the disease killed so many people (around 1/3 of Europe’s population) that the few remaining workers could negotiate higher salaries, which made labor-saving innovations more desirable and planted the seeds of industrial development.
This is a very underrated newsletter, thank you for writing it. The events at KrioRus are kind of crazy. I cannot imagine a business where it is more essential to convince customers of long-run robustness than cryonics, and yet...ouch.
Also, Russia deployed the Peresvet lasers, which can blind the American satellites used to observe nuclear missiles.
I thought Peresvet was more of a tactical weapon?
https://en.wikipedia.org/wiki/Peresvet_(laser_weapon)

Are there any updates on the nuclear-powered missile, Burevestnik?
Even worse, that kind of move would just convince the competitors that AGI is far more feasible, and incentivize them to speed up their efforts while sacrificing safety.
If blocking Huawei failed a couple of years ago under an unusually pugnacious American presidency, I doubt this kind of move would work in a future where the Chinese technological base will probably be stronger.
In a funny way, even if someone working on language models is stuck in a Goodhart trap, it is probably better to Goodhart performance on Winograd schemas than to just add parameters.
I am not an expert in ML, but based on some conversations I was following, I heard WuDao’s LAMBADA score (an important performance measure for language models) is significantly lower than GPT-3’s. I guess the number of parameters isn’t everything.
Strong upvote for a healthy dose of bro humor which isn’t that common on LW. We need more “people I want to have a beer with” represented in our community :D.
That’s interesting. Can you elaborate?
None: None of the above; TAI is created probably in the USA and what Asia thinks isn’t directly relevant. I say there’s a 40% chance of this.
I would say it might still be relevant in this case. For example, under some game-theoretic interpretations, China might conclude that a nuclear first strike is a rational move if the US creates the first TAI and China suspects it will give its adversary an unbeatable advantage. An Asian AI risk hub might successfully convince the Chinese leadership not to do that, if it has information that the US TAI is built in a way that prevents its use solely in the interest of its country of origin.
Not sure about the anti-gay laws in Singapore, but from what I gathered from recent trends, the LGBT situation is starting to improve there and in East Asia in general.

OTOH, anti-drug attitudes are still very strong (for example, you can still get the death penalty for dealing harder drugs), so I presume that is an even bigger deal-breaker, given the number of people experimenting with drugs in the broader rationalist community.

Not to mention some pretty brutal anti-drug laws.
What would be the consequence for Russia’s nuclear strategy of Belarus joining the Western military alliance? Let’s say that in the near future Belarus joins NATO and gives the US a free hand to install any offensive or defensive (ABM) nuclear weapon systems on Belarusian territory. Would this dramatically increase the Russian fear of a successful nuclear first strike by the US?
Excellent question! I was thinking about it myself lately, especially after the GPT-3 release. IMHO, it is really hard to say, as it is not clear which commercial entity will bring us over the finish line, or whether there will be an investment opportunity at the right moment. It is also quite possible that even the first company to get there will bungle its advantage, making it a bad investment (this seems to be a common pattern in the history of technology).

My idea is to play it safe and save as much money as possible until there is a clear sign we have arrived at the AGI level (for example, when AI completely surpasses humans on Winograd schemas), and then, if there is no FOOM, try to find the companies most focused on the practical applications where you get the biggest bang for the buck.
But honestly, at the point where AGI is widely available, it’s quite possible that the biggest opportunity is just learning to utilize it properly. If you have access to AGI, you can ask it yourself: “How do I benefit from AGI given my current circumstances?” and it will probably give you the best answer.
We haven’t managed to eliminate romantic travails
Ah! Then it isn’t a utopia by my definition :-) .
Love it. It is almost like an anti-Black Mirror episode where humans are actually non-stupid.
Amazing post!
It would be useful to mention contemporary ideas that could be analogues of heliocentrism in its time. I would suggest String Theory as one possible candidate. The part where the Geocentrist challenges the Heliocentrist to provide proof while the Heliocentrist desperately tries to explain away the lack of experimental evidence reminds me of debates between string theorists and their sceptics. (This doesn’t mean String Theory is true; there just seems to be a similar state of uncertainty.)
This is great. Thanks for posting it. I will try to use this example and see if I can find some people who would be willing to do the same. Do you know of any new remote group that is recruiting members?
This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically allergic to authoritarian structures, despite strong historical evidence that these kinds of organisations can be quite effective.

I would consider joining myself, but given my location that isn’t an option.

I do think that in building a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; the rules play a smaller part.

As long as the project is based on voluntary participation, I don’t see why anyone should find it controversial. Wish you all the best.
fixed.
We would first have to agree on what “cutting the enemy” would actually mean. I think the liberal response would be keeping our society inclusive, secular, and multicultural at all costs. If that is the case, then avoiding certain failure modes, like becoming an intolerant militaristic society or starting unnecessary wars, could be considered successful cuts against potentially worse world-states.

Now, that is the liberal perspective; there are alternatives, of course.
I don’t think we should worry about this specific scenario. Any society advanced enough to develop mind-uploading technology would have an excellent understanding of the brain, consciousness, and the structure of thought. In those circumstances, retributive punishment would seem totally useless, as they could just change the properties of the perpetrator’s brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing, though, as America seems quite obsessed with retribution. I absolutely refuse to believe any advanced society with mind-uploading technology would be so petty as to use it in such a horrible way. At that point, I expect they would treat bad behaviour as a software bug.
I think there is a steelman case for why this post is LW-relevant (or at least why possible variants of it are). If this Canadian precedent becomes widely adopted in the West, everyone should probably do some practical preparation to ensure the security of their finances.
P.S. I live in Sweden, which is an almost completely cashless society, so a similar type of government action would be disastrous here.