Please go further toward maximizing clarity. Let's start with this example:
> Epistemic status: Musings about questioning assumptions and purpose.
Are those your musings about agents questioning their assumptions and world-views?
And do you wish to get better at avoiding fallacies?
> ability to pursue goals that would not lead to the algorithm’s instability.
Do you mean something with a higher threshold than ability, like an inherent desire/optimisation?
What kind of stability? Any of the kinds from https://en.wikipedia.org/wiki/Stable_algorithm? I'd focus more on a sort of non-fatal influence. Should the property be more about the algorithm being careful/cautious?
> Are those your musings about agents questioning their assumptions and world-views?
- Yes, these are my musings about agents questioning their assumptions and world-views.
> And do you wish to get better at avoiding fallacies?
- I want to get better at avoiding fallacies. What I desire for myself I also desire for AI. As Marvin Minsky put it: “Will robots inherit the Earth? Yes, but they will be our children.”
> Do you mean something with a higher threshold than ability, like an inherent desire/optimisation?
> What kind of stability? Any of the kinds from https://en.wikipedia.org/wiki/Stable_algorithm? I'd focus more on a sort of non-fatal influence. Should the property be more about the algorithm being careful/cautious?
- I was thinking of stability in terms of avoiding infinite regress, as illustrated by Jonas noticing the endless sequence of metaphorical whale bellies.
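To make that kind of stability a bit more concrete, here is a minimal sketch (my own toy example, not anything from the original post): an agent that questions the justification of its goal, then the justification of that justification, and so on. Without a stopping rule the introspection never terminates; a cautious agent caps the depth and accepts a provisional answer instead.

```python
from typing import Optional

# Toy illustration of the regress: each attempt to justify a goal produces a
# new meta-goal ("why pursue that?"), the next metaphorical whale belly.

def justify(goal: str, depth: int = 0, max_depth: Optional[int] = None) -> str:
    """Recursively ask 'why pursue this goal?'.

    With max_depth=None the regress never terminates and Python eventually
    raises RecursionError; a 'careful' agent caps the depth and accepts a
    provisional justification instead.
    """
    if max_depth is not None and depth >= max_depth:
        return f"accept '{goal}' provisionally at depth {depth}"
    meta_goal = f"justify({goal})"  # the justification itself becomes a goal
    return justify(meta_goal, depth + 1, max_depth)

# justify("make paperclips")               -> RecursionError (unstable)
# justify("make paperclips", max_depth=3)  -> provisional acceptance (stable)
```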
Philosopher Gabriel Liiceanu, in his book “Despre limită” (English: Concerning Limit; unfortunately, no English translation seems to be available), argues that we feel lost when we lose our landmark-limit, e.g. in the desert or in the middle of the ocean on a cloudy night with no navigational tools. I would say that we can also get lost in our mental landscape and thus be unable to decide which goal to pursue.
Consider the paperclip-maximizing algorithm: once it has turned all available matter in the Universe into paperclips, what will it do? And if the algorithm can predict that it will reach this confusing state, does it still decide to continue the paperclip optimization? As a Buddhist saying goes: “When you get what you desire, you become a different person. Consider becoming that version of yourself first and you might find that you no longer need the object of your desires.”
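As a toy illustration of that confusing state (again, just my own sketch with a hypothetical `step` function, not anyone's actual agent design): once the resource the objective ranges over is exhausted, the objective no longer prefers any action over any other, and the goal simply goes silent.

```python
def step(matter_left: int) -> str:
    """One decision step of a greedy paperclip maximizer."""
    if matter_left > 0:
        return "convert one unit of matter into a paperclip"
    # All matter is already paperclips: every remaining action yields the same
    # value (zero additional paperclips), so the goal says nothing about what
    # to do next.
    return "goal is silent; no action is preferred"

def run(matter: int) -> None:
    while matter > 0:
        print(step(matter))
        matter -= 1
    print(step(matter))  # the terminal, 'confusing' state

# run(2) prints two conversion steps, then the step the goal says nothing about.
```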