You mean in a positive or negative way? Harmful? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5615097/, and/or useless? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1447210/
Some tough love: the only reason a post about seed oil could garner so much interest in a forum dedicated to rational thinking is that many of you are addicted to unhealthy, heavily processed crap food and want to find a rationale to keep on eating it.
If this were the '50s, a post titled “How many dry martinis are optimal to drink before lunch?” would probably have elicited the same type of speculative wishful thinking in the comment section as this post. You all know what the answer to the dry martini question is today: “Zero. If you feel the need to drink alcohol on a daily basis, seek help.”
The solution is very simple. Stop eating things you are not supposed to eat, instead of hoping for the miracle that your Snickers bar will turn out to be a silver bullet for longevity. If you cannot stop eating things you are not supposed to eat, seek professional help to kick your addiction(s).
Glad to hear you are doing better!
Ok, that is an interesting route to go. Let “us” know how it goes if you feel like sharing your journey.
Hey Sable, I am sorry about your situation. Perhaps I am pointing out the obvious, but you just achieved something. You wrote a post and people are reading it. Keep ’em coming!
Good that you mention it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an “AI-washing” attack, which is going on here on LessWrong too. But it's like asking a star NFL quarterback if he thinks they should ban football because of the risk of serious brain injuries: of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is “aligned” to their vision and that they will try to remove any and all obstacles when it comes to public opinion. Repeat after me:
“AI will not make humans redundant.”
“AI is not an existential risk.”
...
I am not so sure that Xi would like to get to AGI any time soon. At least not something that could be used outside of a top-secret military research facility. Sudden disruptions in the labor market in China could quickly spell the end of his rule. Xi's rule is based on the promise of stability and increased prosperity, so I think the export ban on advanced GPUs is a boon to him for the time being.
The Paper Clip
Scene: The earth
Characters: A, an anti-humanist
B, a pro-humanist
A: “We need to reduce the population by 90-95% to not deplete all resources and destroy the ecosystem”
B: “We need a larger population so we get more smart people, more geniuses, more productive people”
(Enter ASI)
ASI: “Solved. What else can I help you with today?”
Imagine having a context window that fits something like PubMed or even The Pile (but that's a bit into the future...). What would you be able to find in there that no one could see using traditional literature review methods? I guess that today a company like Google could scale up this tech and build a special-purpose supercomputer that could handle a 100-1,000 million token context window if they wanted to, or perhaps they already have one for internal research? It's “just” 10x+ of what they said they have experimented with, with no mention of any special purpose-built tech.
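As a rough back-of-the-envelope of why that would need purpose-built hardware, here is a minimal sketch of the KV-cache memory alone, assuming a made-up but plausible transformer shape (the layer count, head count, and head dimension below are my own illustrative numbers, not any real model's specs):

```python
# Back-of-envelope: KV-cache memory for very long context windows.
# All architecture numbers here are illustrative assumptions, not real specs.

layers = 80       # assumed transformer depth
kv_heads = 8      # assumed key/value heads (grouped-query attention)
head_dim = 128    # assumed dimension per head
bytes_per = 2     # bf16/fp16 precision

def kv_cache_bytes(tokens: int) -> int:
    # Keys and values are each stored once per layer, hence the factor of 2.
    return tokens * layers * kv_heads * head_dim * 2 * bytes_per

for tokens in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{tokens:>13,} tokens -> ~{kv_cache_bytes(tokens) / 1e12:,.1f} TB of KV cache")

#    10,000,000 tokens -> ~3.3 TB of KV cache
#   100,000,000 tokens -> ~32.8 TB of KV cache
# 1,000,000,000 tokens -> ~327.7 TB of KV cache
```

Under these assumptions, a billion-token window means hundreds of terabytes of cache before you even touch attention compute, which is firmly special-purpose-supercomputer territory.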
Dagon, thank you for following up on my comment,
yes, they are in some ways oranges and apples, but both of them put a limit on your possibility to create things. One can argue that intellectual property rights have been beneficial for humanity as a whole, but at the same time they criminalize one of our most natural instincts, which is to mimic and copy what other humans do to increase our chance of survival. Which leads to the next question: would people stop innovating and creating if they could not protect their work?
Dagon, yes, that seems like a reasonable setup. It's pretty amazing that world- and life-altering inventions get protection for a maximum of 20 years from the filing date, whereas someone who doodles something on a paper gets protection that lasts the life of the author plus 70 years. But… maybe the culture war is more important to win than the technology war?
Anyway, with the content explosion on the internet I would assume that pretty much every permutation of everything you can think of is now effectively copyrighted well into the foreseeable future. Will that minefield prove to be the reason to reform copyright law so that it fits a digital mass-creation age?
Thank you, Gerald Monroe, for explaining your thoughts further,
And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own life and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT ok. People can't dabble with anthrax or Ebola at home for obvious reasons: they can't control it! But with AI anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.
Logan Zoellner, thank you for your question,
In my view we need more research, not people who draw inferences on extremely complex matters from what random people without that knowledge bet on a given day. It is maybe fun entertainment, but it does not say anything about anything. I do not assign any probabilities. To me, the whole game of assigning probabilities to x-risk and AI safety in general is just silly. How can anyone say, for instance, that there is a 10% risk of human extinction? What does that mean? Is that a 1 in 10 chance at a given moment, during a 23.7678-year period, forever, or what? And most importantly, how do you come up with the figure 10%? Based on what exactly?
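To make the ambiguity concrete, here is a minimal sketch of the arithmetic, assuming a constant annual hazard (an assumption of mine; no forecaster actually specifies this): the same headline “10%” implies very different per-year risks depending on the unstated time horizon.

```python
# Toy illustration: a headline "10% extinction risk" is ambiguous
# without a time horizon. Assuming a constant annual hazard,
# the implied per-year risk varies widely with the horizon.

def annual_risk(total_risk: float, years: float) -> float:
    """Constant annual hazard p such that 1 - (1 - p)**years == total_risk."""
    return 1 - (1 - total_risk) ** (1 / years)

for horizon in (10, 25, 100):
    print(f"10% over {horizon:>3} years -> {annual_risk(0.10, horizon):.4%} per year")

# 10% over  10 years -> 1.0481% per year
# 10% over  25 years -> 0.4206% per year
# 10% over 100 years -> 0.1053% per year
```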
Thank you, Gerald Monroe, for answering my question,
I agree that staying on top of the weapons development game has had some perks, but it's not completely one-sided. Wars have, to my understanding, been mostly about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with about the same military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one entity with a massive military power advantage over its counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be the best from a survival point of view, but you will for sure be in danger of losing your freedom.

So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of priors to predict what is going to happen next, since to me a technology as powerful as AI might turn out to be, combined with its inherent “black-boxiness”, has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
Gerald Monroe, thank you again for clarifying your thoughts,
When you say that you know with pretty high confidence that X, Y, or Z will happen, I think this encapsulates the whole debate around AI safety, i.e. that some people seem to know unknowable things for certain, which is what frightens me. How can you know, since there is nothing remotely close to the arrival of a superintelligent being in the recorded history of humans? How do you extrapolate from data that says NOTHING about encountering a superintelligent being? I am curious how you managed to get so confident about the future.
Logan Zoellner, thank you for highlighting one of your previous points,
You asked me to agree to your speculation that GPT5 will not destroy the world. I will not agree with your speculation because I have no idea if GPT5 will do that or not. This does not mean that I agree with the statement that GPT5 WILL destroy earth. It just means that I do not know.
I would not use Manifold as any data point in assessing the potential danger of future AI.
Gerald Monroe, thank you for expanding on your previous comments.
You propose building these sub-human machines in order to protect humanity from everything from nuclear war to street violence. But it also sounds like there are two separate humanities: one that starts wars and spreads disease, and another one, to which “we” apparently belong, that needs protection and should inherit the earth. How come those with the resources to start nuclear wars and engineer pandemics will not be in control of the best AIs that will do their bidding? In its present form, the reason to build the sub-human machines sounds to me like an attempt to save us from the “elites”.
But I think my concern that we have no idea what capabilities certain levels of intelligence bring is brushed off too easily, since you seem to assume that a GPT8 (an AI 8-12 years from now) should not pose any direct problems to humans, except perhaps a meaning crisis due to mass layoffs, and that we should just build it. Where does this confidence come from?
Thank you, Gerald Monroe, for your comments,
My interpretation of your writing is that we should relentlessly pursue the goal of AGI because it might give us some kind of protection against a future alien invasion, of which we have no idea what we would be dealing with or whether it will even happen. Yes, the “aliens” could be swapped for AGI, but that makes the case even stranger to me: that we should develop A(G)I to protect us from AGI.
We could speculate that AGI gives a 10x improvement there and 100x here and so on. But we really do not have any idea. What if AGI is like turning on a light switch, and from one model to the next you get a trillionfold increase in capability? How will the AI safety bots deal with that? We have no idea how to classify intelligence in terms of levels. How much smarter is a human compared to a dog? Or a snake? Or a chimpanzee? Assume for the sake of argument that a human is twice as “smart” as a chimpanzee on some crude brain-measure scale thingy. Are humans then twice as capable as chimpanzees? We are probably close to infinitely more capable, even though the raw brain power is NOT millions or billions or trillions of times that of a chimpanzee.
We just do not have any idea what a thing even “slightly smarter” than us is capable of doing; it could be just a tiny bit better than us, or it could be close to infinitely better than us.
Logan Zoellner, thank you for further expanding on your thoughts,
No, I will not agree that GPT5 will not destroy the world, because I have no idea what it will be capable of.
I do not understand your assertion that we would be better at fending off aliens if we have access to GPT5 than if we do not. What exactly do you think GPT5 could do in that scenario?
Why do you think that having access to powerful AIs would make AGI less likely to destroy us?
If anything, I believe that the Amish scenario is less dangerous than the slow takeoff scenario you described. In the slow takeoff scenario there will be billions of interconnected semi-smart entities that a full-blown AGI could take control over. In the Amish scenario there would be just one large computer somewhere that is really, really smart, but that does not have the possibility to hijack billions of devices, robots, and other computers to wreak havoc.

My point is this: we do not know. Nobody knows. We might create AGI and survive, or we might not survive. There are no priors, and everything going forward from now on is just guesswork.
Logan Zoellner, thank you for clarifying the concept.
However, semantics aside, since no one knows when AGI will happen as you increase compute and/or deploy new models, all takeoffs are equally dangerous. I think a fair stance for all AI researchers and companies trying to get to AGI is to admit that they have zero clue when AGI will be achieved, how that AI will behave, and what safety measures are needed to keep it under control.
Can anyone say with certainty that, for instance, a 100x in compute and model complexity over today's state of the art does not constitute an AGI? A 100x could be achieved within 2-3 years if someone poured a lot of money into it, i.e. if someone went fishing for trillions in venture capital...
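For a sense of what that pace would require, here is a minimal sketch of the arithmetic, assuming effective training compute simply doubles at a fixed interval (a simplification of mine, not a claim about any real roadmap):

```python
# Toy arithmetic behind the "100x in 2-3 years" claim:
# how often would effective training compute have to double?

import math

doublings = math.log2(100)   # ~6.64 doublings for a 100x increase

for years in (2, 3):
    months_per_doubling = years * 12 / doublings
    print(f"100x in {years} years -> one doubling every ~{months_per_doubling:.1f} months")

# 100x in 2 years -> one doubling every ~3.6 months
# 100x in 3 years -> one doubling every ~5.4 months
```

Whether funding and hardware supply could actually sustain a doubling every few months is exactly the trillion-dollar question.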
We are on a path for takeoff. Brace for impact.
Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and intelligence agencies to produce results, so as not to let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots who do not understand what kind of fire they might be playing with, or that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing OpenAI's behavior.