Lower melatonin production during the night makes it easier to get up?
One interesting observation: If I have two variants of future life – go to live in Miami or in SF – both will be me from my point of view now. But from the view of Miami-me, the one who is in SF will not be me.
There is a similar idea with the opposite conclusion – that more “complex” agents are more probable – here: https://arxiv.org/abs/1705.03078
One way of not being “suicided” is to not live alone. Stay with 4 friends.
I will lower the killers’ possible incentive by publishing everything I know, and do it in such a legal form that it can be used in court even if I am dead (an affidavit?).
Whistle-blowers commit suicide rather often. First, they are already stressed by the situation. Second, they may want to attract attention to their problem if they feel ignored, or even to punish the accused organization.
OpenAI whistleblower found dead in San Francisco apartment.
Suchir Balaji, 26, claimed the company broke copyright law.
Thanks.
The text looks like it was AI-generated, without this being mentioned.
The main idea is similar to the one discussed in “Permutation City” by Egan, but this isn’t mentioned.
It also fails to mention Almond’s idea of a universal resurrection machine based on a quantum randomness generator.
I observed an effect of “chatification”: my LLM-powered mind model tells me stories about my life, and now I am not sure what my original style of telling such stories is.
If it is not AGI, it will fail without enough humans. If it is AGI, it is just an example of misalignment.
When green and red qualia are exchanged, all functions that point to red now point to green, so no additional update is needed. I say “green” when I see RED and I say “I like green” when I see RED (here capital letters are used to denote qualia).
If we use a system of equations as an example, when I replace Y with Z in all equations while X remains X, it will still be functionally the same system of equations.
If you treat the sentience cut-off as a problem, it boils down to the Doomsday argument: why are we so early in the history of humanity? Maybe our civilization becomes non-sentient after the 21st century, either because of extinction or a non-sentient AI takeover.
If we agree with the Doomsday argument here, we should agree that most AI-civilizations are non-sentient. And as most Grabby Aliens are AI-civilizations, they are non-sentient.

TLDR: If we apply anthropics to the location of sentience in time, we should assume that Grabby Aliens are non-sentient, and thus the Grabby Aliens argument is not affected by the earliness of our sentience.
This needs to be proved to be an x-risk. For example, if the population falls below 100 people, then regulation fails first.
It is still a functional property of qualia—whether they will be more beautiful or not.
In my view, qualia are internal variables which do not affect the output. For example, x² + x + 1 = 0 and y² + y + 1 = 0 are the same equation but with different variables. Moreover, these variables originate not from the mathematical world, but from the Greek alphabet. So, by studying the types of variables used, we can learn something about the Greeks.
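To make the analogy concrete, here is a minimal sketch (assuming Python with sympy installed) showing that the two equations have exactly the same solution set; the letter is only an internal label:

```python
# Minimal sketch: renaming the variable does not change what the equation "computes".
from sympy import symbols, solveset, S

x, y = symbols('x y')

roots_x = solveset(x**2 + x + 1, x, domain=S.Complexes)
roots_y = solveset(y**2 + y + 1, y, domain=S.Complexes)

print(roots_x)             # {-1/2 - sqrt(3)*I/2, -1/2 + sqrt(3)*I/2}
print(roots_y)             # same two roots
print(roots_x == roots_y)  # True: the solution sets are identical
```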
You still use the functional aspect of qualia here – whether red will be more beautiful than blue.
In my view, qualia are internal variables which do not affect the result of computations. For example, the equation x^2+x+1=0 is the same as y^2+y+1=0; they just use x or y as internal variables.
The problem of “humans hostile to humans” has two heavy tails: nuclear war and biological terrorism, which could kill all humans. A similar problem is the main AI risk: AI killing everyone for paperclips.
The central (and not often discussed) claim of AI safety is that the second situation is much more likely: it is more probable that AI will kill all humans than that humans will kill all humans. For example, by advocating for pausing AI development, we assume that the risks of nuclear war causing extinction are less than AI extinction risks.
If AI is used to kill humans as just one more weapon, it doesn’t change anything stated above until AI evolves into an existential weapon (like a billion-drone swarm).
We can suggest a Weak Zombie Argument: It is logically possible to have a universe where the qualia of red and green are inverted in the minds of its inhabitants, while all physical things remain the same. This argument supports epiphenomenalism just as the original zombie argument does, but it cannot be as easily disproved.
This is because it breaks down the idea of qualia into two parts: the functional aspect and the qualitative aspect. Functionally, all types of “red” are the same and are used to represent red color in the mind.
Zombies are not possible because something is needed to represent red in their minds. However, the most interesting qualitative aspect of that “something” is still arbitrary and doesn’t have any causal effects.
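As a toy illustration of this split (all names below are invented for the sketch, not a claim about real minds): two agents that use swapped internal tokens for RED and GREEN are indistinguishable by their input-output behaviour.

```python
# Toy sketch: the internal token chosen to stand for RED or GREEN has no
# effect on behaviour; only the mapping structure matters.

def make_mind(token_for_red, token_for_green):
    # encode: stimulus -> internal token (a stand-in for the "quale")
    encode = {"long_wavelength": token_for_red, "medium_wavelength": token_for_green}
    # report: internal token -> word the agent utters
    report = {token_for_red: "red", token_for_green: "green"}
    return lambda stimulus: report[encode[stimulus]]

normal_mind   = make_mind(token_for_red="QUALE_A", token_for_green="QUALE_B")
inverted_mind = make_mind(token_for_red="QUALE_B", token_for_green="QUALE_A")

for stimulus in ["long_wavelength", "medium_wavelength"]:
    assert normal_mind(stimulus) == inverted_mind(stimulus)
print("Same outputs for every input; the swap is functionally invisible.")
```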
Returning here after 3 years, after reading about the ghost drone flap.
One thing that occurred to me is that evolutionary dynamics in vast unbounded spaces are different from those in confined spaces. When toads were released in Australia, they were selected for longer legs and quicker jumping ability, and these long-legged toads reached the farther parts of Australia. Another direction of evolution of ‘space animals’ involves stability over very long times and mimicry in the case of a “dark forest”.
Does it mean that the optimal size of the model will be around 4.17Tb?