belkarx0 AT outlook.com
belkarx
NIH Cancer Myths
Oh I totally forgot to mention control theory, add that.
control theory: Brian Douglas on YouTube
3D sketching: just draw things from models; you'll get better QUICK
optics, signal processing: I learned from YouTube, choice MIT lectures, implementing sims, etc., but there are probably good textbooks
abstract algebra: An Infinitely Large Napkin (I stan this book so hard)
This may be a me thing, but I draw stuff out when I ideate (esp w/ hardware), and more dimensions → better physical mental models → better, faster iteration speed
the Enneagram's fears and motivations. Good compression of a lot of people.
IMPROV
better 3D sketching
architecture (think A Burglar's Guide to the City), urbex
optics (lotsa good metaphors)
signal processing
abstract algebra
side comment that I've been reminded of: epigenetics *exists*. I wonder if that could somehow be a more naturally integrable approach
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4251063/
I like the premise. I’m glad this is getting researched. But:
Lots of things in the space are understudied and the startup-vibe approach of “we’ll figure this all out on the way because previous papers don’t exist” seems way less likely to work with bio than tech because of the length of iteration cycles. But props if it does?
Black swan effects of polygenic edits
cellular stress if on a large scale?
might be an exception where pleiotropy does actually matter, which would suck. The table in another comment showing correlations between illnesses is pretty convincing; however, it's possible there are effects that aren't quantified there (that don't present as diagnosable disease)
???? Not sufficiently enmeshed in the bio space, but this entire post gives off the vibe of "most of the components are bleeding edge and there aren't many papers, esp not large-scale/long-term ones", and I imagine that'll cause more issues than you expect and streeeetch timescales
given black box bio and difficulty of studying the brain, it’s really hard to tell what’s being left out in studies that measure only change in intelligence/what other things are being affected
We have gotten nowhere near as much as we could out of behavioral interventions (on long timescales) and nootropics, and both of those seem like better areas to put research time into. I don’t actually think a research project of this scale will be faster (for AI safety research etc) than either of those.
counterpoint: this will just make it easier/lower ‘energy’ to apply interventions and is hence worthwhile?? but it’s still so risky that I maintain the above approaches are more worthwhile in the short term
Do the people who contract things out because their time is worth $n/hr or whatever actually keep track of how many "extra" hours they work on top of their basic expenses, such that they know how much work they can practically expense out? Or is this a thing that people just say and don't stop to actually think through? Very much has the vibes of a family with 300k+ take-home or whatever living paycheck to paycheck because they're too good for certain things
There is no such thing as the present, and you are experiencing everything that can possibly be experienced
Deja vu is actually the only time you’re not repeating things infinitely
No creative, original thought exists. Everything has been thought, and you've just forgotten. You know everything; you just don't know that you know everything.
An actually appropriate replacement for what literature should be trying to develop is debate
I would be interested in dialoguing about:
- why the "imagine AI as a psychopath" analogy for possible doom is or isn't appropriate (I brought this up a couple months ago and someone on LW told me Eliezer argued it was bad, but I couldn't find the source and am curious about reasons)
- how to maintain meaning in life if it seems that everything you value doing is replaceable by AI (and mere pleasure doesn't feel sustainable)
Doesn't 80,000 Hours do this?
Research on the effectiveness of hypnosis as an analgesic and in general
How does cryonic neuropreservation consider the peripheral and enteric nervous systems? Why do they assume the CNS is enough?
I had been communicating with someone who had had great success and very fine control with modification, so that was a clear "this is possible" (they were much more careful though!), and I was also reflecting a lot on how people don't explicitly take advantage of their self-modifying properties enough (it is amazing that we can just … will thoughts and goals into existence and delude ourselves, like, what?? And the % of people that meditate is low??! The heck?).
I think my success was mostly due to just being in a frame of mind that made me very receptive to change. If you're fighting it because, at some level, you believe your current equilibrium is better than what you're aiming for (probably the case, tbh), or you're unaware of the extent to which change is possible, you'd have much weaker results. Also, I had very little explicit, continuous certainty in my goals and habits, rendering them quite susceptible to change.
Bad sleep schedule is another good example of something that gets romanticised when it shouldn’t.
There is a strong analogy to be made between human psychopaths and misaligned AI
I wonder if having a significant other work by you so you can see each other's screens would have a similar effect. I assume the effect is diminished because it's a more familiar relationship, but it might work out? Has anyone tried this?
What can a researcher even do against a government that’s using the AI to fulfill their own unscrupulous/not-optimizing-for-societal-good goals?
Maybe this is obvious but isn’t AI alignment only useful if you have access to the model? And aren’t well-funded governments the most likely to develop ‘dangerous’/strong AI, regardless of whether AI alignment “solutions” exist outside of the govt sphere?
Yes, but they were subtle.