Doesn't that hypothesis run counter to the observed health benefits and lower obesity in, say, Japan and countries that could broadly be described as following the Mediterranean diet?
Both with lots of linoleic acid / PUFAs.
Regarding your example, Vaclav Smil has a lot of interesting books about the energy economy and natural resources as a whole.
Exposing yourself to his ideas might surface some correlates or suggest a heuristic you hadn't thought of in that context. "The energy economy of rationalist thought processes" or something like that.
Are you in any sort of psychotherapy for it specifically?
That seems like exactly the sort of thing that could be worked on with empirically supported, OCD-specific methods.
Cognitive behavioral therapy for what appears to be fairly severe underlying anxiety?
REBT in particular might apply, since you seem to overwhelm yourself with the thought of the thing more than the thing itself.
Undiagnosed ADD comes to mind, since "existential crisis doing chores" comes up a lot when adults describe it to me.
Unified mindfulness would also be a suggestion; you can use the hated chores as an opportunity to wire up a more peaceful sensory experience and relationship to your body and mind.
We also have a "someone else's problem" milieu. So the ERs can't turn away the homeless, but they only need to "stabilize" them.
Same with you or me, really. Things that could be completely resolved with an inpatient stay don't end up with an admission because of "cost". Nothing is definitively "solved" in a timely manner because of managed care.
So things are left to stew and get worse (in your case, a proper holistic evaluation initially might have involved exploring your diet, versus years and multiple visits to all sorts of docs).
It ends up "costing" more, but no one with decision power sees the cost because it's spread over time and across different hospitals, communities, etc.
So the first person to see someone has no incentive to spend the resources to dig deep and then to actually solve the problem.
It's just biology, so it isn't applicable to AI. "Neoteny" is the term if you want to dig deeper: a baby born with more than 50% of adult brain size would require another three months in the womb, and birthing a cranium that size would be pretty deadly to the mother.
Humans also have a few notable "pruning" episodes through childhood that correlate with, and are hypothesized to be involved in, both autism spectrum disorder and schizophrenia; that pruning also has no logical bearing on how an LLM / ASI might develop.
I had a similar housing-related "I'm the smartest guy in the room" belief some years back.
I was looking at the broad amounts people in the US were retiring on (not enough) and extrapolated that these older folks would have to sell or take out second mortgages just to live.
And since the baby boomers are retiring, I thought (with no more data or numbers to back me up) that we would see significant downward pressure on housing prices.
But of course, as long as this doesn't happen in large piles, in large numbers of zip codes, and in a short amount of time, it's not an issue.
Over decades, in large parts of the world facing demographic challenges, yeah. Not here.
To broaden the discussion a bit.
The leap from 1950s transistors and semiconductors to what... the early 90s?
I'm not familiar enough with materials science or any of that to make an intelligent call, but does it seem like a logical progression, or on inspection does it actually raise questions about recovered UFO technology?
At the very least, I feel like experts in those fields either have pointed out or could point out that something seems fishy, or they could convincingly dismiss the assertion.
continue to fail at basic reasoning.
But a huge, huge portion of human labor doesn't require basic reasoning. It's rote enough to use flowcharts. I don't need my calculator to "understand" math, I need it to give me the correct answer.
And for the "hallucinating" behavior, you can just have it learn not to do that by rote. Even if you still need 10% of a certain "discipline" (job) to double-check that the AI isn't making things up, you've still increased productivity insanely.
And what does that profit and freed-up capital do other than chase more profit and invest in things that draw down all the conditionals vastly?
5% increased productivity here, 3% over here, it all starts to multiply.
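To make the "it all starts to multiply" point concrete, here's a minimal toy sketch (the percentages are just the illustrative figures above, and the number of reinvestment rounds is my own assumption, not data):

```python
# Toy sketch of why small productivity gains "start to multiply" rather than
# just add. The 5% and 3% figures are the illustrative numbers from above;
# the five "rounds" of reinvestment are an assumption for illustration only.
sector_gains = [0.05, 0.03]   # 5% here, 3% over here

round_multiplier = 1.0
for g in sector_gains:
    round_multiplier *= (1 + g)          # gains compound multiplicatively

print(f"One round of gains: x{round_multiplier:.4f}")   # x1.0815

rounds = 5                    # freed-up capital chasing the next thing
total = round_multiplier ** rounds
print(f"After {rounds} rounds: x{total:.2f}")            # ~x1.48
```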
I guess I just feel completely different about those conditional probabilities.
Unless we hit another AI winter, the profit and national security incentives just snowball right past almost all of those. Regulation? "Severe depression"?
I admit that the loss of Taiwan does in fact set back chip manufacture by a decade or more regardless of the resources thrown at it, but every other case just seems way off (because of the incentive structure).
So we're what, 3 months post-ChatGPT, and customer service and drive-throughs are solved or about to be solved? Let's call that the lowest-hanging fruit. Some quick back-of-the-napkin Google-fu: customer service by itself is a 30 billion dollar industry just in the US.
And how much more does the math break down if, say, we have an AGI that can do construction work (embodied in a robot) at say 90% human efficiency for... 27 dollars an hour?
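Just to spell out the arithmetic in that hypothetical (the $27/hr and 90% figures are from the question above; the $45/hr loaded human labor cost is purely an illustrative assumption of mine, not a sourced number):

```python
# Rough sketch of the "90% human efficiency for $27/hour" framing above.
# The $27/hr and 90% figures are the comment's hypotheticals; the $45/hr
# loaded human labor cost is an illustrative assumption, not a sourced figure.
robot_rate = 27.0          # USD per robot-hour (hypothetical)
robot_efficiency = 0.90    # fraction of human output per hour (hypothetical)

cost_per_human_equiv_hour = robot_rate / robot_efficiency
print(f"${cost_per_human_equiv_hour:.2f} per human-equivalent hour")   # $30.00

assumed_human_cost = 45.0  # USD/hr loaded labor cost (illustrative assumption)
savings = 1 - cost_per_human_equiv_hour / assumed_human_cost
print(f"~{savings:.0%} cheaper than the assumed human cost")           # ~33%
```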
In my mind, every human task fully (or fully enough) automated snowballs the economic incentive and pushes more resources and man-hours into solving problems in materials science and things like... I don't know, piston designs or multifunctionality or whatever.
I admit I'm impressed by the collected wisdom and apparent track records of these authors, but it seems like the analysis is missing the key drivers of further improvement.
Like, would the authors have put the concept of a smartphone at 1% by 2020 if asked in 2001, based on some abnormally high conditionals about a seemingly rational but actually totally orthogonal concern derived from how well Palm Pilots did?
I also don't see how the semiconductor fab bottleneck is such a thing. 21 million users of OpenAI cost $700k a day to run.
So, taking some liberties here, that's 30 bucks a person (so a loss with their current model, but that's not my point).
If some forthcoming iteration with better cognitive architecture etc. costs about that, then we have $1.25 per hour to replace a human "thinking" job.
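To spell out where the $1.25 comes from, a quick back-of-envelope (the $30/day per-person figure is the rough "taking some liberties" number above, not a sourced estimate):

```python
# Back-of-envelope behind the "$1.25 per hour" figure. The ~$30/day per-worker
# compute cost is the comment's own rough number ("taking some liberties"),
# not a sourced estimate.
assumed_cost_per_worker_per_day = 30.0   # USD/day, rough figure from above
hours_per_day = 24

hourly_cost = assumed_cost_per_worker_per_day / hours_per_day
print(f"${hourly_cost:.2f} per hour")                           # $1.25

# For scale: the US federal minimum wage is $7.25/hour.
print(f"{hourly_cost / 7.25:.0%} of the federal minimum wage")  # ~17%
```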
I'm having trouble seeing how we don't rapidly advance robotics, chip manufacture, mining, energy production, etc. when we stumble into a world where that's the only bottleneck standing in the way of 100% replacement of all useful human labor.
Again, you got the checkout clerks at grocery stores last decade. Three months in and the entire customer service industry is on its knees. Even if you only get 95% as good as a human and have to sort of take things one at a time to start with, all that excess productivity and profit then chases the next thing. It snowballs from here.
Well, to flesh that out, we could have an ASI that seems value-aligned and controllable... until it isn't.
Or the social effects (deepfakes, for example) could ruin the world or land us in a dystopia well before actual AGI.
But that might be a bit orthogonal and in the weeds (specific examples of how we end up with x-risk or s-risk end scenarios without attributing magic powers to the ASI).
I think the degree to which LPE is actually necessary for solving problems in any given domain, as well as the minimum amount of time, resources, and general tractability of obtaining such LPE, is an empirical question which people frequently investigate for particular important domains.
Isn't it sort of "god of the gaps" to presume that the ASI, simply by having lots of compute, no longer actually has to validate anything and apply the scientific method in the reality it's attempting to exert control over?
We have machine learning algorithms in biomedicine screening for molecules of interest. This lowers the failure rate of new pharmaceuticals, but most of them still fail, most during rat and mouse studies.
So all available human data on chemistry, pharmacodynamics, pharmacokinetics, etc., plus the best simulation models available (AlphaFold, etc.), still won't result in it being able to "hit" on a new drug for, say, "making humans obedient zombies" on the first try.
Even if we hand-wave and say it discovers a bunch of insights in our data that we don't have access to, there are simply too many variables and sheer unknowns for this to work without it being able to simulate human bodies down to the molecular level.
So it can discover a nerve gas that's deadly enough, no problem, but we already have deadly nerve gas.
It just, again, seems very hand-wavy to have all these leaps in reasoning "because ASI" when good hypotheses prove false all the time upon actual experimentation.
But every environment that isn't perfectly known and every "goal" that isn't completely concrete opens up error, which then stacks upon error as any "plan" to interact with or modify reality adds another step.
If the ASI can infer some materials science breakthroughs from existing human knowledge and experimental data with some great degree of certainty, OK, I buy it.
What I don’t buy is that it can simulate enough actions and reactions with enough certainty to nail a large domain of things on the first try.
But I suppose that's still sort of moot from an existential risk perspective, because FOOM and sharp turns aren't really a requirement.
But the leap from "inferring" the best move in tic-tac-toe to, say, "developing a unified theory of reality without access to supercolliders" is a stretch that doesn't hold up to reason.
"Hands-on experience is not magic"; neither is "superintelligence". LLMs already hallucinate, any conceivable future iteration will still be bound by physics, and a few wrong assumptions compounded together can whiff a lot of hyperintelligent schemes.
For point one, yes. We have evidence that your body has a steady-state homeostatic "weight" that it will attempt to return you to, which is why, on the whole, all fad diets are equivalent and none are recommended.
"Non-metabolic" is sort of a vague statement, but off the top of my head, besides "organs," I'd imagine the possible gut flora problems could be huge (or it might be great, because presumably you have flora right now encouraging excess fat, etc.).
I’m not sure the terms as you define them really hold.
https://ourworldindata.org/trust
So, the nations with high trust levels don't seem to map onto your take. China's rated very highly, but from a Western perspective it's rather socially coercive, right?
And what about small, cohesive agricultural towns? My... knee-jerk take is that you should re-evaluate this model with a "Maslow's hierarchy" foundation.
Right, right. It doesn't need to be fictionalized, just a kind of fun documentary. The key is, this stuff is not interesting to most folks. Mesa-optimization sounds like a snore.
You have to be able to walk the audience through it in an engaging way.
OK, well. Let's forget that exact example (which I now admit having not seen in almost twenty years).
I think we need a narrative-style film / docudrama. Beginning, middle, end. Story-driven.
1.) Introduces the topic.
2.) Expands on it and touches on concepts
3.) Explains them in an ELI5 manner.
And it should include all the relevant things like value alignment, control, inner and outer alignment, etc., without "losing" the audience.
Similarly, if it's going to touch on niche examples of x-risk or s-risk, it should just "whet the imagination" without pulling down the entire edifice and losing the forest for the trees.
I think this format is more likely to be engaged with by a wider swathe of people. I think (as I stated elsewhere in this thread) that Rob Miles, Yudkowsky, and a large number of other AI experts can be quoted or summarized, but they don't offer the tonality / charisma to keep an audience engaged.
Think "Attenborough" and the Planet Earth series.
It also seems sensible to me to kind of meld Socratic questioning / rationality to bring the audience into the fold in terms of the deductive reasoning leading to the conclusions, versus just outright feeding it to them upfront. It's going to be very hard to make a popular movie that essentially promises catastrophe. However, if the narrator asks the audience as it goes along, "Now, given the alien nature of the intelligence, why would it share human values? Imagine for a moment what it would be like to be a bat..." then by the time you get to the summary points, any audience member with an IQ above 80 is already halfway or more to the point independently.
That's what I like about the Reddit control problem FAQ: it touches on all the basic superficial / knee-jerk questions anyone who hasn't read, like, all of "Superintelligence" would have when casually introduced to this.
I love Robert Miles, but he suffers from the same problem as Eliezer or, say, Connor Leahy. Not a radio voice. Not a movie face. Also, his existing videos are "deep dive" style.
You need to be able to introduce the overall problem and the reasons / deductions on why and how it's problematic, address the obvious pushback (which the Reddit control problem FAQ does well), and then introduce the more "intelligentsia" concepts like "mesa-optimization" in an easily digestible manner for a population with an average reading comprehension at a 6th-grade level and a 20-second attention span.
So you could work off of Robert Miles's videos, but they need to fit into a narrative / storytelling format. Beginning, middle, and end. The end should be basically where we're all at, "we're probably all screwed, but it doesn't mean we can't try," and then actionable advice (which should be sprinkled throughout the film; that's foreshadowing).
Regarding that documentary, I see a major flaw in drifting off into specifics like killer drones. The media has already primed people's imaginations for lots of the specific ways x-risk or s-risk might play out (the Matrix trilogy, Black Mirror, etc.). You could go down an entire rabbit hole on just nanotech or bioweapons. IMO you sprinkle those about to keep the audience engaged (and so that the takeaway isn't just "something something paperclips"), but diving into them too much gets you lost in the weeds.
For example, I foresaw the societal problems of deepfakes, but the way it's actually played out (mass-distributed, powerful LLMs people can DIY with), coupled with the immediacy of the employment problem, introduces entirely new vectors in social cohesion as problems I hadn't thought through at all. So, better to broadly introduce individual danger scenarios while keeping the narrative focused on the value alignment / control problems themselves.
You should pull them up on YouTube or whatever and just jump around (sound off is fine); the filmmaker is independent. I'm not saying that particular producer / filmmaker is the go-to, but the "style" and "tone" and overall storytelling fit the theme.
"Serious documentary about the interesting thing you never heard about." Also, this was really popular with young adults when it came out; it caught on with a group of young Americans who came of age during 9/11 and the Middle East invasions and sort of shaped what became the Occupy Wall Street movement. Now, that's probably not exactly the demographic you want to target; most of them are tech-savvy enough that they'll stumble upon this on their own (although they do need something digestible). But broadly speaking, it seems to me like having a cultural "phenomenon" that brings this more into the mainstream and introduces the main takeaways or concepts is a must-have project for our efforts.
"A"-graded evidence on Examine for PCOS symptoms and "fertility." "B" for anxiety (slight improvement for anxiety, moderate for "panic symptoms").
Now, I have a lot of TBIs in my past and originally came across this for "OCD symptoms." I won't bore you with details, but it would definitely be considered subclinical and not meeting DSM criteria for an actual OCD diagnosis. I came across inositol I think in 2013 or '14, on either the nootropics or MTHFR subreddits.
"C" rating on Examine, but that's because they only have one human study linked. Up to 12 grams a day orally in adults usually only results in GI upset, although a thorough long-term, dose-dependent study has yet to be done, so we can't definitively say it's "safe and harmless." My own regimen is 2 grams in the morning and 2 in the afternoon for months at a time (been doing this for probably a decade), with a few weeks off every now and then when I forget to order more. I do twice-yearly labs, and so far my CBC and CMP are unremarkable; 38, male, testosterone levels where they need to be.
Honestly, I can't say anything I get from it isn't just placebo, even this far in. I'm not keeping "weird sort of OCD / anxiety" symptom journals when I don't have it, and I randomly arrived at the current 4 grams a day (I get 1-gram tablets, so two is just easy to remember and dispense into my supplement case).