I agree, it should be (mp)^2. But you will just need 1+s>mp for the delay to be an advantage, which in steady state means we just need s to be positive.
Sune
I would not expect the next reviews to mention bees, when bees are not part of the name. Instead, I would assume the author of the first review had been unlucky and seen a few bees (or maybe even misidentified wasps) and was exaggerating. Alternatively, the problem could have been solved (or appeared) between the visit of reviewer 1 and the visits of the other reviewers.
Based on the title, I was expecting a post about forecasting in your personal life. I'm not sure how to formulate a more accurate title.
Trustpilot is another site where you can leave a bad review. I'm not sure whether it is popular in the US.
For millennia, cats have made humans worship and pamper them. If this idea takes it one step further and leads to humans accidentally building an AI that fills the universe with happy cats, I have to say: well played, cats!
Spread the meme that for each second the AI catastrophe is delayed, the 7.75 billion people of the world experience a combined total of 245 extra years. For each day it's delayed, we get 21 million years.
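A quick sanity check of those figures (a sketch, assuming a 365.25-day year):

```python
# Quick check of the figures above (7.75 billion people, 365.25-day year).
population = 7.75e9
seconds_per_year = 365.25 * 24 * 3600

print(population / seconds_per_year)           # ~245.6 person-years gained per second of delay
print(population * 86400 / seconds_per_year)   # ~21.2 million person-years gained per day of delay
```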
The meme probably won't save the world, but it might make some AI researchers hesitate/procrastinate/work less and hence give us all a few more days to live.
Would you say a self-driving car is a tool AI or agentic AI? I can see how the self-driving car is a bit more agentic, but as long as it only drives when you tell it to, I would consider it a tool. But I can also see that the border is a bit blurry.
If self-driving cars are not considered agentic, do you have examples of people attempting to make agent AIs?
I identified several other strong candidates for ways to apply software engineering skills to making the world better. If you’re thinking of making a similar move, let me know and I’d be happy to give you an overview of what I found and potentially give introductions.
Is there any reason for you not to publish these candidates?
Same, it’s what I tried to ask here, just elaborated better than I could have done myself.
tl;dw: corporations are as slow as or slower than humans; AIs can be much faster.
Can you be more specific about what you mean by “intellectual landmines”?
+1 for the term "agentic AI". I think that is what I was looking for.
However, I don't believe that gradient descent alone can turn an AI agentic. No matter how long you train a language model, it is not going to suddenly want to acquire resources to get better at predicting human language (unless you specifically ask it questions about how to do that and then implement the suggestions. Even then, you are likely only to do what humans would have suggested, although maybe you can make it do research similar to what humans would have done, just faster).
Can we control the blind spots of the agent? For example, I could imagine that we could make a very strong agent that is able to explain acausal trade but unable to (deliberately) participate in any acausal trades, because of the way it understands counterfactuals. Could it be possible to create AI with similar minor weaknesses?
I am surprised that I need to write this, but if killing the humans will decrease P(shutdown) by more than 1e-4, then continuing to refrain from killing the humans is going to worry and weigh on the AI more than a 1e-4 possibility it is in a simulation. (For simplicity, assume that the possibility of shutdown is currently the dominant danger faced by the AI.)
This is assuming that the AI only cares about being alive. For any utility function, we could make a non-linear transformation of it to make it risk averse. E.g. we can transform it such that it can never take a value above 100, and such that the default world (without the AI) has a value of 99.999. If we also give the case where an outside observer disapproves of the agent a value of 0, the AI would rather be shut down by humans than do something it knows the outside observer would disapprove of.
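A minimal sketch of one transform with these properties; the exponential squash, the parameter names, and the anchor values are my own illustrative choices, not a specific proposal:

```python
import math

def risk_averse_utility(u, u_default=0.0, temperature=1.0, observer_disapproves=False):
    """Toy bounded transform of a base utility u (illustrative only).

    Properties matching the idea above:
    - never exceeds 100 (it asymptotes to 100 from below),
    - the default world (u == u_default) gets exactly 99.999,
    - any outcome an outside observer disapproves of is pinned to 0.
    """
    if observer_disapproves:
        return 0.0
    # Monotone in u, so preferences over approved outcomes are preserved,
    # but the upside is capped: no gain is worth risking observer disapproval.
    return 100.0 - 0.001 * math.exp(-(u - u_default) / temperature)
```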
A language model is in some sense trying to generate the “optimal” prediction for how a text is going to continue. Yet, it is not really trying: it is just a fixed algorithm. If it wanted to find optimal predictions, it would try to take over computational resources and improve its algorithm.
Is there an existing word/language for describing the difference between these two types of optimisation? In general, why can't we just build AGIs that do the first type of optimisation and not the second?
Why do we assume that any AGI can meaningfully be described as a utility maximizer?
Humans are some of the most intelligent structures that exist, and we don't seem to fit that model very well. In fact, it seems the entire point of Rationalism is to improve our ability to do this, and that has only been achieved with mixed success.
Organisations of humans (e.g. USA, FDA, UN) have even more computational power and don’t seem to be doing much better.
Perhaps intelligences (artificial or natural) cannot necessarily, or even typically, be described as optimisers? Instead, we may only be able to model them as algorithms or as collections of tools/behaviours executed in some pattern.
A few thoughts about the old guard problem:
In a population with no changes in lifespan, the median birth year increases by one year for every year that passes. For example, say for simplicity that life expectancy is 80 years and no one dies before 40; then the median birth year in 2020 is 1980, in 2030 it is 1990, and so on. If life expectancy increases, this drift is slower. I don't know the exact numbers, but I would guess the median birth year increases by 10-11 months per year in the developed world today. If people stopped dying altogether and new people were born at a constant rate, the median birth year would still increase by around 6 months per year. So in a democracy, I would expect moral development to continue, though at a somewhat slower rate.
Of course, if we look at the birth year of the oldest 10%, that would only increase by around 1/10 of a year per year, so an old elite might develop. Still, people born before, say, 2100 would become a smaller and smaller minority, and the same can be said for any other year.
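A toy sanity check of the drift rates above; the 1900 start year and the exact-80-year lifespan are my own illustrative assumptions:

```python
import statistics

def median_birth_year(current_year, lifespan=None, first_birth_year=1900):
    """Median birth year of everyone alive in current_year.

    One cohort is born per year; lifespan=None means nobody ever dies.
    """
    alive = [
        y for y in range(first_birth_year, current_year + 1)
        if lifespan is None or current_year - y < lifespan
    ]
    return statistics.median(alive)

# Mortal population: median birth year advances ~1 year per year.
print(median_birth_year(2020, lifespan=80), median_birth_year(2030, lifespan=80))
# Immortal population: median birth year advances ~0.5 years per year.
print(median_birth_year(2100), median_birth_year(2102))
```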
“By 1967, 80% of the population in every country was vaccination.” vaccination → vaccinated
When you say you sleep 7-7.5 hours on weekdays, do you mean as measured by a wearable such as Oura or Apple Watch, or do you really mean time in bed? There can easily be one hour difference, even if you think you don’t have problems sleeping.
Downvoting for the click-baity title and for not getting to the point. Much of the internet is optimised for catching my attention and wasting my time, and I come to LessWrong to avoid that.