Physics student and former philosophy student interested in AI safety
Rennes, France
Would it be a good idea to make stickers about AI alignment and stick them in the streets, to spread the idea?
I’m thinking about something simple: a nice logo with “Align AI” under it, maybe with a site like safe.ai in small letters underneath, or even a QR code linking to a relevant site, for example the Statement on AI Risk (https://www.safe.ai/work/statement-on-ai-risk). The sticker could even be a copy of the statement itself, as it is quite simple yet striking. Maybe something else would be better.
The thing is that most people don’t even know the problem of alignment is a thing, and have never heard of it. Most stickers refer to things people aren’t necessarily interested in but already know something about (political parties/groups, events, even political causes like global warming). This would be different, as most people wouldn’t even know what it refers to.
This could eventually, maybe after they encounter the term a couple of times through various means, make some people think: “but what is this alignment thing?” and google it. The AI alignment idea could speak to anyone, as it doesn’t seem tied to any political side or particular ideology.
I’m thinking such a means of spreading the idea may be simple yet effective. It could be beneficial, as the lack of consideration in politics partly comes from people not knowing or talking about it.
Also, a sticker wouldn’t really degrade street furniture or risk giving a bad image to the idea.
Let me know what you think about this. I live in a city in France and I’d be up for taking my bike and sticking stuff around if you think it’d be a good idea!
Interesting, I like the way you progressed through the different hypotheses in an experimental way. I won’t be able to give interesting feedback in more depth, though.
Thanks for your comment. §1: Okay, you mean something like this, right? I think you’re right; maybe the Game of Life wasn’t the best example then.
§2: I think I agree, but I can’t really see why one would need to know which configuration gave rise to our universe.
§3: I’m not sure if I’m answering adequately, but I meant many chaotic phenomena, which probably include stars turning into supernovae. In that case we arguably can’t precisely predict the time of the transformation without fully computing the “low-level phenomena”. But I still can’t see why we would need to “distinguish our world from others”.
For now I’m not sure I see where you’re going after that, sorry! Maybe I’ll think about it again and get it later.
Oh, what is this cold fusion flap thing? I didn’t know about it, and can’t really find information on it.
I agree; however, what I had in mind was that they probably don’t just want to make simulations in which individuals can’t tell whether they are simulated or not. They may want to run these simulations in an accurate and precise way that gives them predictions. For example, they might wonder what would happen if they applied such and such initial conditions to the system, and want to know how it will behave. In that case, they don’t want bugs either, even though individuals in the simulation won’t be able to tell the difference.
As I’ve said in previous comments, I’m inclined to think that simulations need that amount of detail to be usable for accurately predicting the future. If so, maybe that means I had in mind a different version of the argument, since the original simulation argument does not assume that the capacity to accurately predict the future of certain systems is an important incentive for creating simulations.
About the quantum part… Indeed, it may be impossible to determine the state of the system if it involves quantum randomness. As I wrote:
“let’s note that we did not take quantum randomness into account [...]. However, we may be justified in not taking this into account, as systems with such great complexity and interactions make these effects negligible due to quantum decoherence”.
I am quite unsure about that part though.
However, I’m not sure I understood the second quote well. In “Simulating the entire universe down to the quantum level is obviously infeasible”, I’m not sure whether he is talking about what I just wrote, or about physical/technological limitations on achieving that much computing power and detail.
Familiar indeed, and I agree with that, but in order to predict the future state of this universe, they will arguably need to compute it beyond what we could see ourselves. I mean, to reliably determine the state of the universe in the future, they would need to take into account parts of the system that nobody is aware of, because those parts are still part of the causal system that will eventually lead to the future state we want predictions about, right?
Thanks for the comment. I mostly agree, but I think I use “reducible” in a stronger sense than you do (maybe I should have specified that): by reducible I mean something that would not entail any loss of information (like a lossless compression). In the examples you give, some information—considered negligible—is deleted. The thing is that I think these details could still make a difference in the state of the system at time t+Δt, since sensitivity to initial conditions in “real” systems can end up making big differences. Thus, we have to somehow take into account a high level of detail, if not all of it. (I guess whether a perfect amount of detail is even achievable is another problem.) If we don’t, the simulation would arguably not be accurate.
So, yes, I agree that “the extremely vast majority of physical phenomena in our own universe” are reducible, but only in a weak sense, one that leaves the simulation unable to reliably predict the future, so non-sims won’t be incentivized to build them.
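As an illustrative sketch (not from the original discussion), the sensitivity-to-initial-conditions point can be seen in something as simple as the logistic map with r = 4, a standard chaotic toy system. Two trajectories whose starting points differ by 10⁻¹⁰—a detail a lossy simulation might happily discard—decorrelate completely within a few dozen steps, so the "compressed" run loses all predictive power about the exact run:

```python
# Sketch: sensitivity to initial conditions in the chaotic logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4 (fully chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map `steps` times from x0, returning all states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by a "negligible" 1e-10.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# Early on, the two runs are indistinguishable; the gap then grows
# roughly exponentially until it saturates at order 1.
print("gap at step 5: ", abs(a[5] - b[5]))
print("max gap, steps 40-60:", max(abs(a[i] - b[i]) for i in range(40, 61)))
```

The point of the sketch: dropping information below any fixed precision threshold bounds how far ahead such a system can be predicted, which is the "weak reducibility" being argued above.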
Exactly, that’s what I was thinking about too, and I’d say the impact would most likely be positive as well.