Still, there are people I want in my life who are falling prey to this beast, and I want to save them.
Why would this be an ethical thing to do? It sounds like you’re trying to manipulate others into being the people you’d like them to be, not who they themselves want to be.
How to utilize my “cut X, cold-turkey” ability to teach and maintain anti-akrasia (or, more generally, non-self-bettering) techniques
Ethics aside, this seems to be a tall order. You’re basically trying to hack into someone else’s mind through very limited input channels (speech/text). In my experience, it’s never a lack of knowledge that hinders people from overcoming akrasia (which is also why I’m skeptical of the efficacy of self-help books).
Essentially, I think we’re under-utilizing several higher mathematical objects: tensors, to name one.
That’s a very good point. In ML courses, lots of time is spent introducing different network types and the technical details of calculus and linear algebra, without explaining why one would pick neural networks out of idea space in the first place, beyond the hand-waving that they’re “biologically inspired”.
Why would this be an ethical thing to do? It sounds like you’re trying to manipulate others into being the people you’d like them to be, not who they themselves want to be.
Perhaps I didn’t give enough detail. I definitely don’t want to drive others into being exactly what I would like them to be, nor do I want people to believe as I do in most regards. There’s a greater principle that I think would make the world a better place:
When I engage with someone who presents themselves as opposed to an entire Other group, they tend, in one way or another, to divulge the assumptions behind their opposing (hating, rebuking, etc.) that group. Very rarely is the enemy in their model a complex one. The ethical ground I stand on is one of seeking to build bridges of understanding, bridges that will be readily crossed, to those whom one claims to oppose.
My hope is that, with time, the “I’m anti-XYZ” or “I’m pro-ABC” framing won’t be necessary, because we’ll be willing to consider people as fellow humans. We won’t seek to reduce them to a low-resolution representation of one sliver of their identity. We will, hopefully, face our opposition with eyes wide open, Bayesian “self-updaters” at the ready.
You’re basically trying to hack into someone else’s mind through very limited input channels (speech/text).
Again, I may have placed the emphasis incorrectly, or perhaps you are simply perceptive about the ways ideas can turn dangerous. Either way, I thank you for helping me relate these ideas.
I want to teach what I uncover because whatever sweet truths I glean from the universe will have limited impact if they stay strictly inside my head. Part of this goal is acquiring new teaching abilities, such as the ability to custom-fit my conveyance of material to the audience and to adjust delivery dynamically, in real time, based on reception.
In my experience, it’s never a lack of knowledge that hinders people from overcoming akrasia (which is also why I’m skeptical of the efficacy of self-help books).
This is exactly the point of that idea: just having the information doesn’t seem to be enough. But for me, the knowledge seems more than enough for many applications. I want to
1. extract whatever that is,
2. figure out how to apply it in the domains where, for myself, “cold-turkey” doesn’t seem to do it,
3. distill it, and
4. share what’s distilled.
Enabling the sincere dropping of bad habits strikes me as “for the good”.
For example, it would be great if I could switch off the processes that allow me to easily generate resentment toward my spouse. It would be even better if I could flip that switch the way I dropped hot showers, or the belief that the runtime complexity of the “power” function is constant time (rather than the correct logarithmic time).
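(An aside of my own, not from the original exchange: here is a minimal Python sketch of exponentiation by squaring, the standard reason computing a power costs O(log n) multiplications rather than the constant time it’s tempting to assume. The `power` helper is hypothetical, purely for illustration.)

```python
def power(base: float, exp: int) -> float:
    """Exponentiation by squaring: O(log exp) multiplications.

    A naive loop would take O(exp) multiplications, and it is
    tempting to assume a library pow is O(1); for integer
    exponents the true cost is logarithmic in the exponent.
    """
    if exp < 0:
        return 1.0 / power(base, -exp)
    result = 1.0
    while exp > 0:
        if exp & 1:       # odd exponent: fold one factor into the result
            result *= base
        base *= base      # square the base, halving the remaining exponent
        exp >>= 1
    return result

assert power(2, 10) == 1024  # 4 loop iterations, not 10
```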
There are possible ways of using this ability for ill. There would need to be controlled experiments to see whether the tool is even extricable. But that scenario stacks up a lot of conjunctions, so it’s of lesser concern in the near term.