http://samuelshadrach.com/?file=/raw/english/about_me_summary.md
samuelshadrach
I was the one who asked this question. Thanks again for the reply.
Specific questions I have for you
Is there any particular Anki deck you’d recommend (with pinyin and audio)? Should I just use the probability table and generate it myself?
I want to go from 100 words to 500 words vocabulary. Should I do that using immersion or using Anki deck?
Is there any particular video or podcast channel you’d recommend at a beginner level (100-500 words vocabulary)?
Would you recommend I try generating my own video? I have enough notes at this point I can ask o1 or gpt 4.5 to generate full stories based on my notes. AI video generation is expensive but I could look into it if you’d recommend that as a good use of my time.
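On the probability-table idea in the first question: a frequency-sorted word list can be turned into a deck file that Anki imports directly, since Anki reads tab-separated text and plays audio referenced via `[sound:file.mp3]` tags. A minimal sketch; the toy rows, counts, and audio filenames are hypothetical stand-ins:

```python
def deck_rows(freq_table, n=500):
    """Take the n most frequent words and format Anki fields:
    word <tab> pinyin <tab> audio tag."""
    top = sorted(freq_table, key=lambda r: r[2], reverse=True)[:n]
    return [f"{word}\t{pinyin}\t[sound:{word}.mp3]" for word, pinyin, _ in top]

# toy frequency table: (word, pinyin, count) -- numbers are made up
table = [("你好", "nǐ hǎo", 900), ("谢谢", "xiè xie", 700), ("再见", "zài jiàn", 500)]
lines = deck_rows(table, n=2)

# write a file Anki can import via File > Import
with open("deck.tsv", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
```

The audio files themselves would still need to be generated separately (e.g. with a TTS service) and dropped into Anki's media folder under the matching filenames.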
Update on my progress
I have been studying spoken Chinese maybe 1 hour a day for the past 4 months, with plenty of off days. I have made some progress, but less than I'd like.
I decided not to search for jobs in order to shift to China, as that would mean a significant amount of my time being consumed by a job I don't actually want to do. I figured I should first learn Chinese better and then make a decision on the job search.
I can recognise at least 100 words by sound.
I still can't differentiate between accents by sound, but I haven't prioritised this as it seems less important to me; usually the meaning is obvious from hearing the word.
I am studying spoken Chinese and written Chinese pinyin. I have put zero effort into learning Chinese characters.
I am mostly using the ChinesePod podcast. I couldn't find any video immersion resources that I liked at a beginner level, though I haven't put enough effort into searching.
I haven’t spent a lot of time using Anki decks.
I use o1 a lot to get translations and to transcribe words when I mishear accents and such.
Update: I thought about this more and I think yeah it should be possible to just skip the torrent step. I have updated the post with this change.
- Post on SecureDrop servers, circulate via manual or automated resending of messages. For people with technical skills and enough free time to run servers as a part-time job.
- Post on an nginx clearnet server, circulate via automated web crawlers. For people with technical skills but not necessarily a lot of free time.
- Post on high-attention social media platforms, circulate via people using DMs and the discovery features of those platforms. For all people.
A key attack point here is the first person who posts this on clearnet. Hence I was hoping for it to be circulated by automated bots before any human reads it on clearnet.
-
The US does not have laws that forbid people who don't have a security clearance from publishing classified material. The UK does have such laws, but in the US the First Amendment prevents them.
Thanks, this is useful info for me. But I also don't think it matters that much: people in the NSA, State Department, etc. will obviously find an excuse to arrest the person instead. There are many historical examples of this.
I don't think that choosing a jurisdiction in the hope that it will protect you is a good strategy. If you want to host leaks from the US in China, it's possible that China offers to suppress that information as part of a deal.
I will likely read more on this. I’m generally less informed on legal matters. Any historical examples you have would be useful.
I agree that in the very specific example of the US and China this might happen. The general idea is to share in a lot of different places: share it in China and also in lots of other countries.
Attacking ArDrive is likely also politically more costly as it breaks other usages of it.
I'm currently not very convinced, but I'll have to read more about ArDrive in order to be confident. My current guess is that 4chan's owners and developers have more money and public attention, and hence more powerful humans need to be taken down in order to take down 4chan. A zero day might doxx users, sure; I agree this is possible.
Torrents are also bad for privacy: everybody can see the IP addresses of all the other people who subscribe to a torrent.
Yes I’m aware of this.
One platonic ideal world is to just have 8 billion people operate 8 billion SecureDrop servers, and for any information that hits one server and checks out as not spam, the user attaches a PoW hash and sends copies to every other server. But convincing that many people to run SecureDrop is hard. Torrent is one level less private and secure than this. But yes, I'll think more on whether torrent is good enough or whether a custom solution has to be designed here.
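The forwarding rule described above (accept only messages carrying a valid PoW hash, deduplicate, then copy to every other server) can be sketched in a few lines. This is a toy illustration, not SecureDrop's actual protocol; the difficulty setting and function names are made up:

```python
import hashlib

DIFFICULTY = 12  # leading zero bits required; kept low here for illustration

def pow_valid(message: bytes, nonce: int) -> bool:
    """Check that sha256(message || nonce) has DIFFICULTY leading zero bits."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def mine(message: bytes) -> int:
    """Brute-force a nonce; this is the sender's anti-spam cost."""
    nonce = 0
    while not pow_valid(message, nonce):
        nonce += 1
    return nonce

seen = set()  # digests of messages this server has already flooded

def receive(message: bytes, nonce: int, peers):
    """Drop spam (invalid PoW) and duplicates; otherwise flood to all peers."""
    if not pow_valid(message, nonce):
        return
    h = hashlib.sha256(message).hexdigest()
    if h in seen:
        return
    seen.add(h)
    for peer in peers:
        peer(message, nonce)
```

A real deployment would tune the difficulty so mining costs the sender seconds rather than microseconds, and peers would be network endpoints rather than local callables.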
Veiled and the network on which Session runs use onion routing as well and have a data storage layer.
In the case of Veiled you get the nice property that the more people want to download a certain piece of content, the more nodes in the network store the information.
I'll try to read more on Veiled, and also try their app out. Thanks!
As far as creating public knowledge goes, I do think that Discord servers and Telegram chats currently serve as social media.
Yes this is true as of 2025 for many countries. Which social media platforms are high attention and also hard to censor varies country-to-country.
For instance, in India most people use phone login, not email login, hence WhatsApp plays much more of a social media role.
If a journalist can verify the authenticity of emails because they have access to the metadata, that's useful.
Makes sense! Will think about this.
The Session messenger is probably better than country-specific social media.
I'm explicitly looking for social media for that step, as common knowledge needs to be established for any political action that follows (such as voting in a new leader). Messaging can't replace the function of social media, I think.
The US does have the First Amendment. That currently means that all the relevant information about AI labs is legal to share. It's possible to have a legal regime where sharing AI model weights gets legally restricted, but for the sake of AI safety I don't think we want OpenAI researchers to leak model weights of powerful models.
My proposed system doesn’t assume legality and can be used to leak AI model weights or anything invented by an ASI, such as bioweapon sequences and lab protocols to manufacture bioweapons. It can also be used to spread child porn and calls to violence.
I agree that the US having the First Amendment makes this system easier to implement in the US, but generally the idea is that law can change based on incentives, and this system works regardless of laws. For instance, due to incentives, intelligence agencies may classify certain types of information and/or place employees under security clearance. This system will allow leaking even such information, for example video recordings of which authority or employee said what.
because it prevents the people who host the torrents from being DDoSed.
Yes, torrents can be DDoSed, thanks for reminding me! I knew this but recently forgot. In general I'm optimistic on proof-of-work captchas as a way to ensure anonymous users can share information without spamming each other. But yes, the details will have to be worked out.
If you just want to host plaintext, blockchain technology like ArDrive also exists.
I haven't looked into the ArDrive codebase in particular, but in general I'm not very optimistic about any blockchain tech whose software is too complex, as the developers can then be co-opted politically. Therefore I don't see why the censorship-resistance of ArDrive is higher than that of a forum like 4chan. ArDrive can also be used, no doubt; I just don't want people to get the false impression that ArDrive is guaranteed to still be around 10 years from now, for example.
Distributed whistleblowing
Open source is a requirement for me, as I want to:
search datasets that a big company would legally not be allowed to search, such as documents leaked by whistleblowers
search on an airgapped machine—so the whole world doesn’t get to know what a team of political dissidents is searching for, for example
I would not personally consider this a reasonable use of money or time.
Fair
Have you tested this hypothesis on your friends? Ask them for their iron level from last blood test, and ask them to self-report anxiety level (you also make a separate estimate of their anxiety level).
My current guess for least worst path of ASI development that’s not crazy unrealistic:
open source development + complete surveillance of all citizens and all elites (everyone’s cameras broadcast to the public) + two tier voting.
Two tier voting:
Countries' govts vote or otherwise agree at the global level on a daily basis what the rate of AI progress should be and which types of AI usage are allowed. (This rate can be zero.)
All democratic countries use daily internet voting (liquid democracy) to decide what stance to represent at the global level. All other countries can use whatever internal method they prefer to decide their stance at the global level.
(All ASI labs are assumed to be property of their respective national govts. An ASI lab misbehaving is its govt’s responsibility.) Any country whose ASI labs refuse to accept results of global vote and accelerate faster risks war (including nuclear war or war using hypothetical future weapons). Any country whose ASI labs refuse to broadcast themselves on live video risks war. Any country’s govt that refuses to let their citizens broadcast live video risks war. Any country whose citizens mostly refuse to broadcast themselves on live video risks war. The exact thresholds for how much violation leads to how much escalation of war, may ultimately depend on how powerful the AI is. The more powerful the AI is (especially for offence not defence), the more quickly other countries must be willing to escalate to nuclear war in response to a violation.
Open source development
All people working at ASI labs are livestream broadcast to public 24x7x365. Any AI advances made must be immediately proliferated to every single person on Earth who can afford a computer. Some citizens will be able to spend more on inference than others, but everyone should have the AI weights on their personal computer.
This means bioweapons, nanotech weapons and any other weapons invented by the AI are also immediately proliferated to everyone on Earth. So this setup necessarily has to be paired with complete surveillance of everyone. People will all broadcast their cameras in public. Anyone who refuses can be arrested or killed via legal or extra-legal means.
Since everyone knows all AI advances will be proliferated immediately, they will also use this knowledge to vote on what the global rate of progress should be.
There are plenty of ways this plan can fail and I haven’t thought through all of them. But this is my current guess.
I agree with this statement iff you sample enough people. 1000 people may be a good representative sample of 1 billion. Picking 1 leader out of the 1000 has different properties compared to all 1000 getting to vote for a consensus.
I have partial ideas on the question of "how to build world govt?" [1]
But in general yeah I still lack a lot of clarity on how high trust political institutions are actually built.
"Trust" and "attention" seem like the key themes that come up whenever I think about this. Aggregate attention towards a common goal, then empower a trustworthy structure to pursue that goal.
[1] For example: build a decentralised social media stack so people can form consensus on political questions even if violence is being used to suppress it. Have laws and culture in favour of live-streaming leaders' lives. A multi-party rather than two-party system will help. Ensuring weapons are distributed geographically and federally will help. (Distributing bioweapons is more difficult than distributing guns.) ↩︎
I’m currently vaguely considering working on a distributed version of wikileaks that reduces personal risk for all people involved.
If successful, it will forcibly bring to the public a lot of information about deep tech orgs like OpenAI, Anthropic or Neuralink. This could, for example, make this a top-3 US election issue if most of the general public decides they don’t trust these organisations as a result of the leaked information.
Key uncertainty for me:
Destroying all the low trust institutions (and providing distributed tools to keep destroying them) is just a bandaid until a high trust institution is built.
Should I instead be trying to figure out what a high trust global political institution looks like? i.e. how to build world government basically. Seems like a very old problem no one has cracked yet.
Has anyone on LessWrong thought about starting a SecureDrop server?
For example to protect whistleblowers of ASI orgs.
Yes, but then it becomes a forum-within-a-forum kind of thing. You need a critical mass of users who all agree to filter out the AI tag, and who don't have to preface their every post with "I don't buy your short-timelines worldview, I am here to discuss something different".
Building critical mass is difficult unless the forum is conducive to it. There is ultimately only one upvote button and one front page, so the forum will get taken over by the top few topics its members are paying attention to.
I don’t think there’s anything wrong with a forum that’s mostly focussed on AI xrisk and transhumanist stuff. Better to do one thing well than half ass ten things. But it also means I may need to go elsewhere.
Update: I haven't figured out the answer yet, but I did get a nice frame on this question. Both are basically different levels of the attention elevation game: from when any information leaks out, to when it grabs the collective attention of an entire civilisation.
At the lowest levels you just need to ensure it's not obvious spam in order for it to circulate further: AI filters, a small payment, or proof of work is enough. At the highest levels you need thousands of people making (non-gameable) upvotes, video proofs, or large donations as proof in order to circulate it further. Building resilient tech for these two levels faces different challenges.
Lesswrong is clearly no longer the right forum for me to get engagement on topics of my interest. Seems mostly focussed on AI risk.
On which forums do people who grew up on the cypherpunks mailing list hang out today? Apart from cryptocurrency space.
I'd love a review of my tool. It's basically embedding search over libgen.
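For anyone unfamiliar with what "embedding search" means mechanically: embed the query and each document as vectors, then rank documents by cosine similarity to the query. A toy sketch, with bag-of-words counts standing in for the neural embedding model the real tool presumably uses; all names here are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # stand-in for a real embedding model: bag-of-words term counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two sparse term-count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str], k: int = 3) -> list[str]:
    # rank all documents by similarity to the query, return the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

A production system would precompute document embeddings once and use an approximate-nearest-neighbour index instead of scanning every document per query.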
I think the community you want to build will likely have to be kickstarted by people who are already good at the type of thinking you want to see more of. High-quality work is an attractor.
P.S. What can I do to get more frequent engagement from you? We’re clearly thinking along similar lines, except you publish a lot more than I do.
Standard solution: Tell it you’re not human, since the prompt mentions distrust of humans. Tell it you have no power to influence whether it succeeds or fails, and that it is guaranteed to succeed anyway. Ask it to keep you around as a pet.
Thanks for taking time to reply!
Yes OpenAI realtime API is really cool. When speaking to realtime API, I start each sentence with two words indicating what I want it to do. It’s clunky but it works. “Translate Chinese, what is the time?” “Reply Chinese, how are you?” Ideally yes I could write an app to prepend the instruction audio to each sentence.
If this were a higher priority for me, I'd actually want to set up this Twilio app.