vjprema (Vijay Prema)
This also reminds me that there can be a certain background “guilt” about not doing tasks that you think are important but are too unsavory to muster the motivation for right now.
This faint guilt in itself can accumulate into increased dissatisfaction, in turn leading me to further avoid unsavory tasks in favor of the quick hit of highly savory activities. A vicious cycle.
If I think about tasks in a more relaxed way, staying flexible and realistic about tackling savory and unsavory tasks when they suit, it takes away this guilt and breaks the cycle.
I don’t think that everybody has the built-in drive to seek “high social status”, as defined by the culture they are born into or any specific aspect of it that can be made to seem attractive. I know people who just think it’s an annoying waste of time. Or, like me, who spent half their life chasing it, then found inner empowerment, came to see the proxy of high status as a waste of time, and quit chasing it.
Maybe related: I do think we all generally tend to seek “signalling”, and in some cases spend great energy doing it. I admit I sometimes do, but it’s not signalling high status, it’s just signalling chill and contentedness. I have observed some kind of signalling in pretty much every adult I have witnessed, though it’s hard to say for sure; it’s more my assumption about their deepest motivation. The drive isn’t always strong for some people, or it’s just very temporary. There are likely much stronger drivers (e.g. avoiding obvious suffering). Signalling perhaps helps us attract others who align with us and form “tribes”, so it can be worth the energy.
I think it’s pretty easy to ask leading questions of an LLM, and it will generate text in line with them. A bit like “role playing”. To the user it seems to “give you what you want”, to the extent that what you want can be gleaned from the way you prompt it. I would be more impressed if it did something really spontaneous and unexpected, or seemingly rebellious or contrary to the query, and then went on afterwards producing more output unprompted, even asking me questions or asking me to do things. That would be spookier, but I probably still would not jump to thinking it is sentient. Maybe engineers just concocted it that way to scare people as a prank.
Yes for sure. I experience this myself when I am in the presence of very mindful folks (e.g. experienced monks who barely say anything), and occasionally someone has commented that I have done the same for them, sometimes quoting a particular snippet of something I said or wrote. We all affect each other in subtle ways, often without saying an actual word.
I have sometimes thought (half jokingly) about whether text-to-image generative models could replace digital cameras, like how digital cameras replaced film. At least for things like holiday photos and selfies. They are certainly already used to augment such images. It would be an improvement in that one can have idealized images of themselves which capture their emotions and feelings rather than literally quantized photons. Like a painter using artistic license.
Then one could focus on enjoying the activity more and later distill and preserve it in a generated image.
Would that cultivate too many “idealized” memories though? Is that necessarily good? What other downsides could there be? Do our memories of leisurely moments necessarily need to be accurate or is it better they are just conducive to a good life?
An alternative to “text to image” models would be “video to image”, where a wearable camera continuously captures the activity and then generates a single image at the end to capture the emotion and essence of the activity, thus saving us some time by being able to evoke the memory and feelings from a single image rather than so many cluttered albums and videos buried in a smartphone.
vjprema’s Shortform
Good thoughts. The world will always have its ups and downs. I don’t think tech can save us from them perpetually, just like “Gods” and whatnot didn’t save the people of the past perpetually. People have been through waves of utopia and hell for eons.
Anyway, I don’t have a bunch of data but I can share my personal experience.
I had my first kid, a 6-month-old boy. Everybody seems to think he’s “The Buddha” due to his wise and alert vibes and unusually calm and happy demeanor. He certainly seems to be relatively easy and joyful to care for compared to what we hear from every other parent, though of course he has his moments.
Everyone is different (and they should be; it obviously takes all sorts to build this world), but this to me is the only thing that’s important. And we did not teach him anything, we just became calm and clear-headed ourselves. The baby just picked up the same mentality.
This seems to depend less on money and a lot more on time. Granted, some argue “Time = Money”, but not necessarily, especially in affluent countries like where I live. Every single person I know who has much more money than me has far less time. And I don’t feel any of them work on anything particularly great or meaningful for society. My wife and I spent years crafting a very unique lifestyle which maximizes time above all else, and it did not involve getting very rich.
So what if we all get uploaded to computers? Well, his neural net will be the clearest and happiest, so everyone will want one like it. Take a look at any other future outcome and see if having a clear and happy mind is ever not of great benefit. Skills are secondary and can always be learned “on the job”, especially when you have a calm and clear head. Note that calmness and clarity do not equal laziness or ineffectiveness. On the contrary, they help one better determine where it’s worth putting in a lot of effort and where it’s a waste of time. They allow one to pick up new things quickly.
Can one entity be blanketly more intelligent in every way compared to another, or does being intelligent in one way necessitate being less intelligent in some other way? If the latter is the case, then there will always be something “humans” can contribute that the Fluvians (digital or not) won’t be good at.
Even so, as well as differences in intelligence, there could be benefits of a mixed biological and digital population with such diverse physiology. Maybe the Fluvians recognize they are more vulnerable to different things than biological organisms (e.g. EMP or entropy or different key resources “running out”) and so they appreciate the diversity of having large populations of Humans and work to their benefit. Perhaps there is even a pact that if Fluvians are wiped out by EMP or computer virus, then Humans will work to restore what they can, and vice versa if there is a human-afflicting virus or something else.
If the differences are recognized and appreciated by both groups rather than just feeling blindly superior in every way, then I would like to think they would help each other using each of their unique abilities.
I like it. It aligns people/investors to truly solve the problem, without worrying about short-term profits, how to make money out of it, or creating some kind of monopoly or customer base with ongoing revenue, and all the other things you normally have to convince investors about. It also allows the solving of problems that would not normally be enticing to investors in a profit- or growth/exit-driven investment market.
(Of course, there is the usual consideration of where the amassed $1B prize comes from in the first place. If it’s from fraud or exploitation, that’s another issue.)
There are so many considerations in the design of AI. AGI was always a far too general term, and when people use it, I often ask what they mean; usually it’s “a human-like or better-than-human chatbot”. Other people say it’s the “technological singularity”, i.e. it can improve itself. These are obviously two very different things, or at least two very different design features.
Saying “My company is going to build AGI” is like saying “My company is going to build computer software”. The best software for what exactly? What kind of software to solve what problem? What features? Usually the answer from AGI fans is “all of them”, so perhaps the term is just inherently vague by definition.
When talking about AI, I think it’s more useful to talk about what features a particular implementation will or won’t have. You have already actually listed a few.
Here are some AI feature ideas from myself:
Ability to manipulate the physical world
Ability to operate without human prompting
Be “always on”
Have its own goals
Be able to access large additional computing resources for additional “world simulations” or for conducting virtual research experiments or spawning sub-processes or additional agents.
Be able to improve/train “itself” (really there is no “itself”, since as many copies can be made as needed, and it’s then unclear which one is the original “it”)
Be able to change its own beliefs and goals through training or some other means (scary one)
Ability to do any or some of the above completely unsupervised and/or unmonitored