Take into account exponential compute scaling and exponential efficiency scaling.
NP-hard problems have no known efficient algorithms, and strong evidence suggests none exist.
Can you back up this claim? I find the evidence I have seen pretty weak in a mathematical sense.
Yes, but it still makes sense to me. RL and ‘LLM thinking’ are pretty basic heuristics at this point, and I don’t expect them to circumvent exponential running-time requirements for the many problems they implicitly solve at the same time.
Ok, but why do you think that AIs learn skills at a constant rate? Might it be that higher-level skills take more time to learn? Compute scales exponentially with time, but data for higher-level skills is exponentially scarcer, and each example needs context that grows linearly with task length; that is, the total data processed scales superexponentially with skill level.
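A rough way to formalize that worry, with illustrative notation of my own (none of these symbols come from the thread; this is a sketch, not a derivation):

```latex
% Illustrative only. Compute available at time t:
C(t) \propto e^{\beta t}
% Usable examples at skill level \ell are exponentially rarer, so
% covering level \ell means processing on the order of e^{\alpha \ell}
% examples, each with context linear in task length, i.e. \propto \ell:
D(\ell) \propto \ell \, e^{\alpha \ell}
% D(\ell) outgrows the pure exponential e^{\alpha \ell}, which is the
% sense in which total data processed is superexponential in \ell.
```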
I think this is relevant: https://slatestarcodex.com/2017/07/24/targeting-meritocracy/ . ‘Prevalence and tendency imply morality’: I don’t think that’s an argument that the people gestured at here try to make.
Orange peel is a standard ingredient in Chinese cooking. Just be careful with pesticides.
Why wouldn’t AI agents, or corporations led by AIs, develop their own needs and wants, the way corporations already do? An economy has no need for humans; it only needs needs and scarce potential to meet those needs.
Agree, but not sure what you are implying. Is it that Sam is not as concerned about risks because the expected capabilities are lower than he publicly lets on and timelines are longer than indicated, and hence we should be less concerned as well?
On the one hand this is consistent with Sam’s family planning. On the other hand, other OpenAI employees who are less publicly involved, and perhaps have less marginal utility from hype messaging, tell consistent stories (e.g. roon, https://nitter.poast.org/McaleerStephen/status/1875380842157178994#m).
the only way to appropriately address [long term risk from AI systems of incredible capabilities] is to ship product and learn.
True, but nitpicking about the memorability: the long-term value may not lie in the short-term value of the conversation itself. It may lie in an introduction made by someone you briefly got to know in a conversation that was itself low-value, in an email about a job being forwarded to you, etc. You wouldn’t necessarily say the conversation was memorable, but the value likely wouldn’t have been realized without it.
It doesn’t need to be a singular high-value conversation. I’d say the long-term value of conversations is heavy-tailed, so it may pay to have lots of conversations of low expected value. https://www.sciencedirect.com/science/article/abs/pii/B9780124424500500250
The rationalist term is ‘front-running steelman’; for German, Claude suggests Replikationsmangeleinsichtsanerkennung (‘acknowledgement of the insight despite lack of replication’):
Tess: There should be a German word that means “I see where you’re going with this, and while I agree with the point you will eventually get to, the scientific study you are about to cite doesn’t replicate.”
I would interpret Replikationsmangeleinsichtsanerkennung as ‘positive recognition that somebody changed their mind and accepts that something doesn’t replicate’. Perhaps replikationsmangelunbeschadete Zustimmung (‘agreement unharmed by the lack of replication’) would be better. Yes, adjectives do compound with compound nouns.
Both require an appropriate Overton Window.
Two complications, perhaps: Earth’s surface curvature plus hills, and does it work at night?
Relatedly, I schedule all my todos, and my todo list contains only the ones scheduled within the last week. If a todo is at risk of lapsing from the list, I either have to actively reschedule it, in which case it is probably important to me, or it drops into a list of stale todos. Occasionally I remember a stale todo and can reschedule it, or, when I have enough time, I browse the stale list to see if there is anything still interesting.
Edit: This method helps because I get overwhelmed and anxious from overly long todo lists.
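A minimal sketch of this lapsing scheme (the data shape and the one-week cutoff are just my reading of the description above, not a real app):

```python
# Illustrative sketch of the scheduled-todo scheme described above.
from datetime import date, timedelta

def partition_todos(todos, today):
    """Todos scheduled within the last week (or later) stay active;
    anything scheduled earlier lapses into the stale list."""
    cutoff = today - timedelta(days=7)
    active = [t for t in todos if t["scheduled"] >= cutoff]
    stale = [t for t in todos if t["scheduled"] < cutoff]
    return active, stale
```

Rescheduling a stale todo then just means setting its `scheduled` date to today, which moves it back onto the active list on the next pass.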
This gave me an idea: you could have a website where bidders upload a problem description, public and private data, an optimization goal (in the form of a solution-evaluation algorithm), and a bid like ‘for a solution at least x good I pay y’. Takers submit algorithms that produce solutions. These get run against the data under the time and memory limits specified by the bidder, and if they meet the required solution quality the taker gets paid.
Is there something like this around?
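A minimal sketch of the judging step under stated assumptions: the command-based submission interface, the `judge` name, and the bidder-supplied `evaluate` function are all my own illustration, and the resource limits are Unix-only.

```python
# Hypothetical sketch of the pay-if-good-enough judging step.
# Unix-only (uses the resource module); all names are illustrative.
import resource
import subprocess

def judge(submission_cmd, private_data, evaluate, threshold,
          time_limit_s=60, mem_limit_mb=512):
    """Run a taker's algorithm under the bidder's limits; decide payout."""
    def set_limits():
        # Enforced in the child process just before exec.
        resource.setrlimit(resource.RLIMIT_CPU, (time_limit_s, time_limit_s))
        mem_bytes = mem_limit_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    try:
        result = subprocess.run(
            submission_cmd, input=private_data, capture_output=True,
            timeout=time_limit_s, preexec_fn=set_limits, check=True)
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False  # exceeded limits or crashed: no payout
    # Bidder-supplied scoring of the produced solution.
    quality = evaluate(result.stdout)
    return quality >= threshold  # 'for a solution at least x good I pay y'
```

In a real version the private data would stay server-side and the submitted algorithm would run in a proper sandbox; rlimits alone are not a security boundary.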
Thanks, I appreciate it. It must be very difficult. Please don’t lose your patience.
So will the maximally curious AI be curious about what would happen if you genetically modified all humans to become unicorns?
Have we just discovered one of the actual reasons the peer-review process sticks around? It forces CoTs to be legible, unlocking secondary value?