Well, for one, did you ever notice how people act differently in different situations (for example among family, friends, at work, with acquaintances at the gym, or online)? If you limit yourself to a single situation, there is no person on earth that you could ‘reconstruct’ sufficiently well.
Bernhard
The 50% annual revenue growth that they’ve averaged over the last 9 years shows no signs of stopping
What makes you think that?
I am of the completely opposite opinion, and would be amazed if they are able to repeat that even for a single year longer.
All the “creative” bookkeeping only works for so long, and right now seems to be the moment to pop bubbles, no?
A sufficiently detailed record of a person’s behavior
What you have in mind is “A sufficiently detailed record of a person’s behavior when interacting with the computer/phone”
How is that sufficient to any reasonable degree?
Why?
Because of perverse, counterproductive and wrong monetary incentives.
There are a few complexities:
There is only really one, and that is not accounted for.
You want to generate electricity that you actually use.
I’m no expert on your part of the world, but in Central Europe electricity prices sometimes turn negative, because electricity is generated that nobody needs. So large producers have to pay money to get rid of it. Taking a pickaxe to your solar panel would be net positive in that situation.
Why? because everybody maximizes electricity produced and not electricity that can actually be used.
Typically the theoretical solution is simple: turn your solar panel somewhat towards the setting sun. Generate less solar energy at noon, when no one needs it, and generate more during the evening, when everybody is at home, cooking and watching TV, and consumption is spiking.
Sadly, no one does this, because of the wrong incentives.
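A toy comparison makes the incentive gap concrete (all curves, totals and hours here are invented for illustration): a noon-peaked panel versus a west-tilted one that produces less in total but more in the evening, measured against a demand curve that spikes around dinner time.

```python
import numpy as np

hours = np.arange(24)

# Bell-shaped toy production curves (arbitrary units, invented shapes)
def production(peak_hour, width=3.0, total=10.0):
    p = np.exp(-((hours - peak_hour) ** 2) / (2 * width ** 2))
    return p / p.sum() * total

south = production(peak_hour=12)            # classic noon-peaked panel
west = production(peak_hour=15, total=9.0)  # tilted west: less total, later peak

# Toy household demand: small baseline plus an evening spike around 19:00
demand = 0.2 + 0.8 * np.exp(-((hours - 19) ** 2) / (2 * 2.0 ** 2))

# Energy you can actually use on the spot, hour by hour
usable_south = np.minimum(south, demand).sum()
usable_west = np.minimum(west, demand).sum()
print(usable_south, usable_west)
```

With these made-up numbers, the west-tilted panel delivers noticeably more *usable* energy despite generating less in total, which is the whole point about maximizing the wrong quantity.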
Are you familiar with ergodicity economics?
https://twitter.com/ole_b_peters/status/1591447953381756935?cxt=HHwWjsC8vere-pUsAAAA
I recommend Ole Peters’ papers on the topic. That way you won’t have to construct your epicycles upon the epicycles commonly known as utility calculus.
We are taught to always maximize the arithmetic mean
By whom?
The probable answer is: By economists.
Quite simply: they are wrong. Why?
That’s what ergodicity economics tries to explain.
In brief, economics typically wrongly assumes that the average over time can be substituted with the average over an ensemble.
Ergodicity economics shows that there are some +EV bets, that do not pay off for the individual
For example, you playing a bet of the above type 100 times is assumed to be the same as 100 people each betting once.
This is simply wrong in the general case. For a trivial example, if there is a minimum bet, then you can simply go bankrupt before finishing the 100 games.
Interestingly however, if 100 people each bet once and afterwards redistribute their wealth, then the group as a whole is better off than before. Which is why insurance works.
And importantly, which is exactly why cooperation among humans exists. Cooperation that, according to economists, is irrational, and shouldn’t even exist.
Anyway I’m butchering it. I can only recommend Ole Peters’ papers
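The standard example from Peters’ papers can be simulated in a few lines (the multipliers 1.5 and 0.6 are the ones usually quoted; the rest of the setup is my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Peters' coin-flip game: heads multiplies your wealth by 1.5, tails by 0.6.
# Expected value per flip is 0.5*1.5 + 0.5*0.6 = 1.05 (+EV), yet the
# time-average growth factor is sqrt(1.5 * 0.6) ≈ 0.95 < 1.
n_people, n_flips = 10_000, 100
factors = rng.choice([1.5, 0.6], size=(n_people, n_flips))
wealth = factors.prod(axis=1)      # each row: one person playing 100 flips

ensemble_mean = wealth.mean()      # exact expectation is 1.05**100 ≈ 131,
                                   # carried almost entirely by a few lucky runs
median_wealth = np.median(wealth)  # the typical person ends up nearly broke
print(ensemble_mean, median_wealth)
```

The ensemble average (what "maximize expected value" looks at) is large, while the median individual loses almost everything: the same +EV bet, two very different answers depending on whether you average over people or over time.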
To me this is a good example of an overly theoretical discussion, and as the saying goes: in theory, there is no difference between theory and practice. (But in practice there is.)
My counterargument is a different one, and I kind of already have to interrupt you right at the start:
If there is no death, [...]
Putting “immortal animals” into any search engine gives lots of examples of things that get pretty close. So we can talk about reality; there is no need to limit ourselves to thought experiments. So the first question cannot be: “Why is the counterargument wrong?”
Instead it should be: “Why are there no immortal living beings that dominate. Why are all of them more or less unimportant in the grand scheme of things?”
And the answer is pretty obvious I think. Because it’s simply unfavorable being immortal. It may be due to evolutionary bottlenecks, or due to energetic ones (inefficiency of repair vs reproduction for example), or a myriad of other ones. I don’t think you get to simply state that obviously being immortal is better, when all of the observable evidence (as opposed to theoretical arguments) points in the opposite direction.
So what is clear is, that if you want to be immortal, you have to pay some kind of tax, some extra cost. And if you cannot, you will be of marginal importance, just like all other (near-)immortal beings.
Incidentally, I think this is why only rich people ever talk about immortality (In my experience). To them, it’s clear that they will always be able to pay for this overhead, and simply don’t worry about it.
I would actually be interested if I am mistaken on that last point. Please speak up, if you are a person that is strongly interested in immortality, and you are not rich (for example when you went to school, you knew you were obviously different because your parents couldn’t afford X). I would really be interested to learn what you see differently.
The most powerful one is probably The Financial System. Composed of stock exchanges, market makers, large and small investors, (federal reserve) banks, etc...
I mean that in the sense that an anthill might be considered intelligent, while a single ant will not.
Most of the trading is done algorithmically, and the parts that are not might as well be random noise for the most part. The effects of the financial system on the world at large are mostly unpredictable and often very bad.
The financial system is like “hope” according to one interpretation of the myth of Pandora’s box. Hope escaped (together with all other evil forces) as the box was opened, and released upon the world. But humanity mistook hope for something good, and clings to it, while in fact it is the most evil force of them all.
Okay I may have overdone it a little bit now, but I hope you get the point
Very good idea
I did not do it. My argument would be that the impetus is not my own; it is external: your written word. What stops you from making increasingly outlandish claims (“Your passphrase is actually this (e.g. illegal/dangerous/lethal) action, not a simple thought”)? Where to draw the line?
Just as a point of reference, as a kid I regularly thought thoughts of the kind: “I know you’re secretly spying on my thoughts but I don’t care lalalala.....” I never really specified who “you” was, I just did it so I could catch “them” unawares, and thereby “win”. Just in case.
The difference is hard to define cleanly, but back then I was of the opinion that I did it of my own free will (Nowadays, with nonstop media having the influence it has, I would be less sure. Also I’m older, and a lot less creative)
Just for completeness, I found [this paper](http://dx.doi.org/10.1016/j.neuron.2021.07.002), where they try to simulate the output of a specific type of neuron, and for best results require a DNN of 5-8 layers (with widths of ~128)
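To get a rough sense of scale for those numbers (the paper’s model is a temporal convolutional network; this fully connected stand-in, and its input size of 1000, are hypothetical):

```python
# Parameter count of a hypothetical fully connected net with 7 hidden
# layers of width 128, standing in for the 5-8 layer nets in the paper.
layer_sizes = [1000] + [128] * 7 + [1]

params = sum(
    n_in * n_out + n_out  # weights + biases between consecutive layers
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])
)
print(params)  # a couple hundred thousand parameters per simulated neuron
```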
We Live in a Post-Scarcity Society
Do you mean “we Americans”? Or “we, the people living on the East/West Coast”? Because it certainly is not true on a national or worldwide level.
For example, in a “magical post-scarcity society”, you would probably be okay with being (born as) really anybody. You shouldn’t really care as much as you might have during medieval times, for example.
How about right now? Do you care? Are you willing to trade places? I certainly am not.
Furthermore, you picked the worst possible timing for this post. One should not characterize a society based on a single point in time, particularly at its peak. It would be more robust to pass judgement at a time of great stress (cf. how the character of some people changes markedly the worse times get).
For example, would you be indifferent to your geographical or social position this coming winter? (I’m asking, because prices for everything are on the rise. Particularly interesting for this discussion are prices for natural gas, electricity and fertilizer)
Can’t live in a post-scarcity society without heating and food...
Anyway, I guess we’ll see in one year’s time how our respective positions aged and/or changed.
I guess that would be one way to frame it. I think a simpler way to think of it (Or a way that my simpler mind thinks of it) is that for a given number of parameters (neurons), more complex wiring allows for more complex results. The “state-space” is larger if you will.
3+2, 3×2 and 3² are simply not the same.
From my limited (undergraduate-level CS) knowledge, I seem to remember that typical deep neural networks use a rather small number of hidden layers (maybe 10? certainly fewer than 100?? (please correct me if I am wrong)). I think this choice is rationalized with “This already does everything we need, and requires less compute.”
To me this somewhat resembles a Chesterton’s fence (or rather its inverse). If we were to use neural nets of sufficient depth (>10^3 layers), then we might encounter new things, but before we get there, we will certainly realize that we still have a ways to go in terms of raw compute.
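A crude way to put numbers on the “larger state-space” intuition (toy figures; counting paths is an intuition pump, not a rigorous complexity measure): at fixed width, parameters grow roughly linearly with depth, while the number of multiplicative input-to-output paths grows exponentially.

```python
# Fixed width w, growing depth d: parameter count grows ~linearly in d,
# while the number of weight chains an input can travel through grows like w**d.
w = 10
for d in (1, 3, 10):
    params = d * (w * w + w)  # weights + biases for d hidden layers of width w
    paths = w ** d            # multiplicative input-to-output chains
    print(d, params, paths)
```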
First of all, kudos to you for making this public prediction.
To keep this brief: 1 (95%), 2 (60%), 3 (75%), 4(<<5%), 5 (<<1%)
I don’t think we are in a hardware overhang, and my argument is the following:
Our brains are composed of ~10^11 neurons, and our computers of just as many transistors, so to a first approximation, we should already be there.
However, our brains have approximately 10^3 to 10^5 synapses per cell, while transistors are much more limited (I would guess maybe 10 on average?).
Even assuming that 1 transistor is “worth” one neuron, we come up short.
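The back-of-the-envelope arithmetic, with every count an order-of-magnitude assumption from above:

```python
# Back-of-the-envelope; every number is an order-of-magnitude assumption.
neurons = 1e11
synapses_per_neuron = 1e4       # midpoint of the 10^3 to 10^5 range
transistors = 1e11
fanout_per_transistor = 10      # my guess from above

brain_connections = neurons * synapses_per_neuron        # 1e15
chip_connections = transistors * fanout_per_transistor   # 1e12
print(brain_connections / chip_connections)              # brain leads by ~1000x
```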
I remember learning that a perceptron with a single hidden layer of arbitrary width can approximate any function, and thereby any perceptron of finite width but with more hidden layers. (I think this is called the “universal approximation theorem”?)
After reading your post, I kept trying to find some numbers of how many neurons are equivalent to an additional layer, but came up empty.
I think the problem is basically that each additional layer contributes superlinearly to “complexity” (however you care to measure that). Please correct me if I’m wrong, I would say this point is my crux. If we are indeed in a territory where we have available transistor counts comparable to a “single-hidden-layer-perceptron-brain-equivalent”, then I would have to revise my opinion.
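For what it’s worth, the single-hidden-layer claim is easy to demo with random tanh features and a least-squares fit of the output weights (a sketch of the approximation property, not how such networks are actually trained; all sizes and scales here are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer of random tanh features; only the output weights are
# fit (by least squares). Enough to approximate a smooth target like sin.
x = np.linspace(-3, 3, 400)
target = np.sin(x)

n_hidden = 200
W = rng.normal(0, 1.5, n_hidden)   # random input weights, never trained
b = rng.normal(0, 1.5, n_hidden)   # random biases, never trained
H = np.tanh(np.outer(x, W) + b)    # 400 x 200 hidden activations

out_weights, *_ = np.linalg.lstsq(H, target, rcond=None)
error = np.max(np.abs(H @ out_weights - target))
print(error)
```

The fit is essentially exact on this interval, which is what the theorem promises; it says nothing about how many hidden units a deep-net-equivalent would need, which is the open question above.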
I’m personally very interested in this highly parallel brain architecture, and if I could, I would work on investigating/inventing ways to create similar structures. However, besides self-assembly (as in living, growing things), I don’t yet see how we could build things of similar complexity in a controlled way.
I have a similar objection. Not particularly because they are Refugees, but instead because they are Foreigners.
I actually quite liked the presented idea, but I think it is very heavily slanted towards ideas of the political left (which I would generally favor if forced to choose). Still, some concepts from the political right are important here, particularly those of culture and personal responsibility.
Summarized very briefly (and therefore certainly wrong)
Culture, according to the left, is something imposed from above that can be exchanged like a pair of socks, while the right thinks that culture is a shared consensus that depends on all the participants (and particularly on their relative numbers; an example would be the “culture” of burning cars in Sweden, which is a quite new occurrence).
By personal responsibility, on the other hand, I mean that you certainly bias your sample of immigrants depending on how you select them. Are all of them people looking for handouts? Or are they people who want to actively change their surroundings? (In the latter case, they may stubbornly refuse to leave their own country, for example. The opposite might just as well be true, if they define “their surroundings” as limited to their close family, kids, etc.)
In any case, I think these ideas at least have to be taken into account, otherwise this all sounds like some unfinished idea of a utopian fairytale.
“Probably most ambitious people are starved for the sort of encouragement they’d get from ambitious peers”
If you were to substitute “intelligent” for “ambitious”, I would agree. Some kind of dialog is needed to flourish, and a dialog between equals is strongly preferred. Or said another way: when training, it makes no sense to train with too little weight.
The smartest people tend to be ambitious.
I strongly disagree. Assuming a certain bias regarding the selection of examples, this is just a tautology: highly visible people are highly visible. Successful people are visible. Stupid people are on average less successful. Non-ambitious people are less visible. Some counterexamples would be Grigori Perelman or Steve Wozniak (I know basically nothing of these people, and am willing to be proven wrong).
we would task AGI with optimizing
I see, that kind of makes sense. I still don’t like it though, if that is the only process to optimize.
For me, in your fictional world, humans are to AI what in our world pets are to humans. I understand that it could come about, but I would not call it a “Utopia”.
this was assuming a point in the future when we don’t have to worry about existential risk
This is kind of what I meant before. Of course you can assume that, but it is such a powerful assumption, that you can use it to derive nearly anything at all. (Just like in math if you divide by zero). Of course optimizing for survival is not important, if you cannot die by definition.
I read the original post, and kind of liked it, but I also very much disagreed with it.
I am somewhat befuddled by the chain of reasoning in that post, as well as that of the community in general.
In mathematics, you may start from some assumptions, and derive lots of things, and if ever you come upon some inconsistencies, you normally conclude that one of your assumptions is wrong (if your derivation is okay).
Anyway, here it seems to me that you make assumptions, derive something ludicrous, and then pat yourself on the back and conclude that obviously everything has to be correct. To me, that does not follow.
If you assume an omnipotent basilisk (if you multiply by infinity), then obviously you can derive anything you damn well please.
One concrete example (There were many more in the original post):
we’d have a precise enough understanding of emotions and their fulfilment space to recognize local maxima
The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I’d very much like to sell it myself if you don’t mind.
Another is that if humans ever become “content” with boredom, we cut off all possibility of further growth (however small).
> Yeah, that is a downside.
I would argue that is the most important point in fact. You assume that you are looking for an optimum in a static potential landscape. The dinosaurs kind of did the same.
The only way to keep surviving in a dynamic potential landscape is to keep optimizing, and not pat yourself on the back for a job well done and just stop.
A simple example: Kids during puberty kind of seem to be doing the opposite of whatever their parents tell them. Why? Because they know (somehow) that there are other, better minima in reach (even if your parents are the god-kings of the earth) (Who wants to be a carpenter, when you can be a Youtuber, famous for Idontreallycare...)
Anyway, in my opinion, boredom is a solution for the same class of problem, just not intergenerational, but instead more in a day-to-day manner.
I pretty much agree with you. Human intelligence may be high because it is used to predict/interpret the behaviour of others. Consciousness may be that same intelligence turned inward. But:
3. Given enough computational power and a compatible architecture, the agent will develop consciousness if and only if it needs to interact with other agents of the same kind, or at least of similar level of intelligence.
This does not automatically follow I think. There may be other ways that can lead to the same result.
An existing example would be cephalopods (octopus, squid & co.) From what I understand, they are highly intelligent, yet live very short lives, are not social (don’t live in large groups, like humans), and have no “culture” (tricks that are taught from generation to generation)[1].
Instead, their intelligence seems to be related to their complex bodies, which requires lots of processing power.
Which is why I think that interaction with other similar entities is not needed for consciousness to emerge. I think the interaction just has to be complex (which is more general than your requirement of interaction with complex beings). For example, a sufficient number of “simple” input/output channels (lots of suckers) can be just as complex as, for example, human language. Because it is efficient to model/simplify this complexity, intelligence and then consciousness may emerge.
I am therefore of the opinion that either octopi are already conscious, or that if you were to increase the number of their arms n, then as n → ∞ they sooner or later should be.
In any case, they may dream
[1] This may not be completely correct. There seems to be some kind of hunting tactic that involves one octopus and one grouper (fish), where each in turn drives prey towards the other. The grouper, being longer lived, may teach this to others?
each UV photon that hits exactly in the right spot will cause permanent DNA changes that eventually lead to cancer
Pretty sure this is incorrect. It’s not the damage that causes cancer, but the failure of the body to heal/repair it. Such failures can be caused for example by you being very old, and therefore healing slower, or by getting a sunburn (= too much exposure in a short time, overwhelming repair capability).
I think the most important thing here is that things scale very much not linearly.
See also this, which argues/claims that more sun exposure (without getting sunburnt) actually leads to less cancer than getting less UV total, but with sunburns:
Lots of interesting answers, and all of them correct (most of the time anyway). One I haven’t seen mentioned, is the one described in this preprint titled “How to Increase Global Wealth Inequality for Fun and Profit”.
In short:
- If you buy x shares of something, the price goes up slightly (this is strictly true as x → ∞)
- If you sell, the price drops
- If you do both in quick succession (a full circle), the price should not change (otherwise you have invented the economic version of a perpetual motion machine)
- But the bid-ask spread exists: sellers want to sell for as high as possible, while buyers want to buy cheap
- Empirical observation tells us this spread is larger when the market opens than when it closes
- Because of this asymmetry, prices are in fact influenced (in the normal case towards higher prices, but the opposite is also possible)
The author argues that this effect explains the surplus growth that tends to get wiped out in bubbles (That part which is not backed by things in the “real” world)
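A toy illustration of the spread part of the argument (the preprint’s actual mechanism is more involved; the mid price and both spreads here are invented numbers): a buy-then-sell “full circle” costs you exactly the bid-ask spread, so if the spread differs between open and close, the circle is not symmetric.

```python
# Toy round trip: buy at the ask, sell at the bid, pay the spread.
mid = 100.00
spread_open = 0.30    # hypothetical: wider spread at the open
spread_close = 0.10   # hypothetical: narrower spread at the close

def round_trip_cost(spread):
    buy_at = mid + spread / 2   # ask
    sell_at = mid - spread / 2  # bid
    return buy_at - sell_at     # loss on an immediate buy-then-sell circle

print(round_trip_cost(spread_open), round_trip_cost(spread_close))
```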
Oh they’ll scale just fine.
It’s just that nobody will buy all those cars. They are already not selling them all, and we are about to enter the biggest recession of many of our lifetimes