If I knew the specific bs, I’d be better at making successful applications and less intensely frustrated.
Could it be that meditation is doing some of the same job as sleep? I’d be curious about the ratio of time spent meditating to the reduction in sleep need.
Could also reduce restlessness/time spent waiting to fall asleep.
all of the above, then averaged :p
prob not gonna be relatable for most folk, but i’m so fucking burnt out on how stupid it is to get funding in ai safety. the average ‘ai safety funder’ does more to accelerate funding for capabilities than for safety, in huge part because what they look for is Credentials and In-Group Status, rather than actual merit.
And the worst fucking thing is how much they lie to themselves and pretend that the 3 things they funded that weren’t completely in-group mean that they actually aren’t biased in that way. At least some VCs are more honest that they want to be leeches and make money off of you.
the average ai safety funder does more to accelerate capabilities than they do safety, in part due to credentialism and looking for in-group status.
this runs into the “assumes powerful ai will be low/non-agentic” fallacy
or “assumes AIs that can massively assist in long-horizon alignment research will be low/non-agentic”
“Short Timelines mean the value of Long Horizon Research is prompting future AIs”
Would be a more accurate title for this, imo
In sixth form, I wore a suit for 2 years. Was fun! Then I got kinda bored of suits.
Why does it seem very unlikely?
The companies being merged and working together seems unrealistic.
the fact that good humans have been able to keep rogue bad humans more-or-less under control
Isn’t stuff like the transatlantic slave trade, the genocide of Native Americans, etc. evidence that the amount isn’t sufficient??
PauseAI, ControlAI, etc., are doing this
Helps me decide which research to focus on
Both. Not sure; it’s something like LessWrong/EA speak mixed with VC speak.
What I liked about applying for VC funding was the specific questions.
“How is this going to make money?”
“What proof do you have that this is going to make money?”
and it being clear that the bullshit they wanted was numbers, testimonials from paying customers, unambiguous ways the product was actually better, etc. And then the standard bs about progress, security, avoiding weird wibbly-wobbly talk, ‘woke’, ‘safety’, etc.
With Alignment funders, they really obviously have language they’re looking for as well, or language that makes them more or less willing to put effort into understanding the proposal. Actually, they have it more than the VCs do. But they act as if they don’t.
it’s so unnecessarily hard to get funding in alignment.
they say ‘Don’t Bullshit’ but what that actually means is ‘Only do our specific kind of bullshit’.
and they don’t specify because they want to pretend that they don’t have their own bullshit
I would not call this a “Guide”.
It’s more a list of recommendations and some thoughts on them.
What observations would change your mind?
You can split your brain and treat LLMs differently, in a different language. Or rather, I can, and I think most people could as well.
I think this is the case for most in AI Safety rn
Thanks! Doing a bunch of stuff atm to make it easier to use and to grow the userbase.