Implications of a Brain Scan Revolution
Suppose we were able to gather large amounts of brain scans, let's say w/ millions of wearable helmets recording video and audio as well.[1] What could we do with that? I'm assuming a pre-training stage similar to LLMs, where models are trained to predict next brain-states (possibly also video and audio), and can then be finetuned or prompted for specific purposes.
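To make the pre-training idea concrete, here's a minimal sketch of what "predict the next brain-state" could look like, directly analogous to next-token prediction in language models. Everything in it (the transformer backbone, the frame sizes, treating a scan frame as a flat vector) is my own illustrative assumption, not a claim about how such a model would actually be built:

```python
import torch
import torch.nn as nn

class NextBrainStatePredictor(nn.Module):
    """Toy next-brain-state model: a causal transformer over scan frames.
    All dimensions here are made up for illustration."""

    def __init__(self, n_channels=256, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)   # project one scan frame into model space
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_channels)    # predict the next frame

    def forward(self, frames):                        # frames: (batch, time, n_channels)
        t = frames.size(1)
        causal_mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.backbone(self.embed(frames), mask=causal_mask)
        return self.head(h)                           # output at time t is the prediction for frame t+1

# Self-supervised pre-training: predict frame t+1 from frames <= t.
model = NextBrainStatePredictor()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
loader = [torch.randn(8, 64, 256) for _ in range(10)]  # stand-in for real scan windows
for frames in loader:
    pred = model(frames[:, :-1])
    loss = nn.functional.mse_loss(pred, frames[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Video and audio streams could be tokenized and interleaved into the same sequence; "prompting" would then mean conditioning on a chosen prefix of brain/video/audio frames.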
Jhana Helmets
Jhana is a non-addictive, high-pleasure state. If we can scan people entering this state, we might drastically reduce the time it takes to learn to enter it. I predict (maybe just hope) that being able to easily enter this state would lead to an increase in psychological well-being and a reduction in power-seeking behavior.
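How would scans translate into faster learning? One plausible mechanism is neurofeedback: a detector trained on scans of people in jhana tells the wearer, moment to moment, how close their current state looks. Here's a minimal closed-loop sketch; the scorer, the helmet interface, and the feedback channel are all hypothetical stand-ins (in practice the scorer would presumably be a small fine-tuned head on the pre-trained model above):

```python
import time
import torch
import torch.nn as nn

N_CHANNELS = 256

# Hypothetical jhana-state scorer. A real one would be the pre-trained
# brain-state model with a fine-tuned classification head; a random
# linear probe stands in here so the sketch runs end to end.
scorer = nn.Sequential(nn.Linear(N_CHANNELS, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

def read_scan_frame():
    """Placeholder for one frame from the helmet; random noise here."""
    return torch.randn(1, N_CHANNELS)

def play_feedback(score):
    """Placeholder for the feedback channel (a tone, a visual, etc.)."""
    print(f"jhana-likeness: {score:.2f}")

# Closed-loop neurofeedback: the wearer meditates, the helmet reports how
# jhana-like the current state looks, and learning is (hopefully) faster
# because there's finally a signal to steer towards.
for _ in range(10):                 # one short feedback session
    frame = read_scan_frame()
    with torch.no_grad():
        score = scorer(frame).item()
    play_feedback(score)
    time.sleep(0.1)                 # helmet sampling interval (made up)
```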
Jhana is great but only temporary. It is, however, a great foundation from which to investigate the causes of suffering and remove it.
End Suffering
I currently believe the cause of suffering is an unnecessary mental motion, which should show up in high-fidelity brain scans. Again, eliminating suffering would go a long way toward making decision-makers more selfless (they can still make poor decisions, just likely not out of selfish intent, unless perhaps it's habitual; I don't know).
Additionally, this tech would enable seeing how the brain states of enlightened vs non-enlightened people differ, which might look like Steven Byrnes's model here. Enlightenment might not be a desirable state to be in, and understanding its structure could give insight into that.
Lie Detectors
If we can reliably detect lying, this could be used for better coordination between labs & governments, as well as for providing trust in employees and government staff (e.g. that they won't steal algorithmic or state secrets).
It could also be used to maintain power in authoritarian regimes (e.g. repeatedly testing the loyalty of those holding power under you), and I agree w/ this comment that it could destroy relationships that would've otherwise been fine.
Additionally, this is useful for...
Super Therapy
"Look! PTSD is just your brain repeating this same pattern in response to this trigger." Relatedly for CBT, negative thought spirals could be detected and flagged immediately.
Beyond helping w/ existing brands of therapy, I expect much more therapeutic progress could be made in a short amount of time, which would also be useful for coordination between labs/governments.
Super Stimuli & Hacking Brains
You could also produce extremely stimulating images/audio that most saliently capture attention and lead to continued engagement.
I'm unsure whether mind-state-independent jailbreaks exist (e.g. a set of pixels that hacks a person's brain); however, I could see particular mind states (e.g. just waking up, or sleep/food/water deprivation) that, when combined w/ certain images/sounds, could lead to crazy behavior.
Super Persuasion
If you could simulate someone's brain state given different inputs, you could try to figure out the arguments (or the surrounding setting, or their current mood) required to get them to do what you want. Using this to manipulate others is the obvious case, but it could also be used for better coordination.
A more ethical way to use a simulation for persuasion is to not deceive or leave out important truths. You could even run many simulations to figure out which factors they find important.
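A toy version of the "run many simulations" idea, with every piece hypothetical (the reaction model, the feature extraction, the candidate messages): given a model that predicts someone's reaction to an input, you can search over honest framings of the same facts and keep the one predicted to land best.

```python
import torch
import torch.nn as nn

# Hypothetical reaction model: stands in for "simulate someone's brain state
# given different inputs" and returns a predicted-receptiveness score in [0, 1].
# An untrained toy over bag-of-words features, just so the sketch runs.
VOCAB = ["cost", "safety", "fairness", "evidence", "urgency"]
reaction_model = nn.Sequential(nn.Linear(len(VOCAB), 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())

def featurize(message: str) -> torch.Tensor:
    """Crude bag-of-words features over a tiny vocabulary (illustration only)."""
    words = message.lower().split()
    return torch.tensor([[float(w in words) for w in VOCAB]])

# Candidate framings of the same honest, complete information; the search is
# only over emphasis and ordering, not over what gets disclosed.
candidates = [
    "the evidence shows this improves safety at moderate cost",
    "this is urgent and the fairness concerns have been addressed",
    "cost is the main tradeoff but the safety evidence is strong",
]

with torch.no_grad():
    scores = [reaction_model(featurize(m)).item() for m in candidates]
best_score, best_message = max(zip(scores, candidates))
print(f"predicted-best framing ({best_score:.2f}): {best_message}")
```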
However, simulating human brains as an optimization target is also an ethical concern in general due to simulated suffering, especially if you do figure out which specific mental motion suffering is.
Reverse Engineering Social Instincts
This is part of Steven Byrnes's agenda. It should be pretty easy to reverse engineer given this tech (epistemic note: I know nothing in this field), since you could find what the initial reward components are and how specific images/audio/thoughts led to our values.
Reverse Engineering the Secret Sauce of Human Learning
With the good of learning human values comes the bad of figuring out how humans manage to be so sample-efficient!
Uploads
Simulating a full human brain is already an upload; however, I don't expect optimized hardware to be ready for this. Additionally, running a human brain would be more complicated than just running normal NNs augmented w/ algorithmic insights from our brains. So this is still possible, but would carry a large alignment tax.
[I believe I first heard this argument from Steve Byrnes]
Everything Else in Medical Literature
Beyond trying to understand how the brain works (which is covered already), this would likely include studying diseases and analyzing the brain's reaction to drugs.
...
These are clearly dual-use; however, I do believe some of them are potentially pivotal acts (e.g. uploads, solving value alignment, sufficient coordination involving human actors), but they would definitely require a lot of trust in the pursuers of this technology (who might be, e.g., existing power-seekers already in power, a potentially unaligned TAI, etc.).
Hopefully things like jhana (if the benefits pan out) don't require a high-fidelity helmet. The current SOTA is (AFAIK) 50% of people learning how to get into jhana given 1 week of a silent retreat, which is quite good for pedagogy alone! But it's maybe not sufficient to get e.g. all relevant decision-makers on board. Additionally, I think this percentage is only for getting into it once, as opposed to being able to enter that state at will.
Why would many people give up their privacy in this way? Well, by the time the technology actually becomes feasible, I predict most people will be out of jobs, so this would be equivalent to a new job.
For pleasure/insight helmets you probably need intervention in the form of brain stimulation (tDCS, tFUS, TMS). Biofeedback might help, but you at least need to know what to steer towards.
I'm pretty skeptical of those numbers; all existing projects I know of have no better measurement method than surveys, and those get bitten hard by social-desirability bias / not wanting to have committed to a sunk cost. Seems relevant that jhourney isn't doing much EEG & biofeedback anymore.
Huh, those brain stimulation methods might actually be practical to use now, thanks for mentioning them!
Regarding skepticism of the survey data: if you're imagining it's only an end-of-retreat survey which asks "did you experience the jhana?", then yeah, I'd be skeptical too. But my understanding is that everyone has several meetings w/ instructors, where a not-true-jhana / social lie wouldn't hold up under scrutiny.
I can ask during my online retreat w/ them in a couple months.
As for brain stimulation, TMS devices can be bought for <$10k on eBay. tDCS devices are available for ~$100, though I don't expect them to have large effect sizes in any direction. There have been noises about consumer-level tFUS devices for <$10k, but those are likely >5 years in the future.
The incentive of the people running jhourney is to over-claim attainments, especially in edge cases, and to hype the retreats. Organizations can be sufficiently on guard to prevent the extreme forms of over-claiming & turning into a positive-reviews factory, but I haven't seen people from jhourney talk about this (or take action that shows they're aware of the problem).