preference inference based on the structure of [your] goals
It’s nothing too formal—wisdom gleaned from an article here and a blog post there.
Most of us readily have a list of goals that come to mind, but it’s likely that they are subgoals and we are unaware of why exactly we pursue them. So you keep asking “What will this goal do for me?” rather than “What will do this goal for me?” (the goal’s purpose rather than its means), creating downwind nodes in your graph until you presumably hit your preferences. At that point you can (a) check your preferences for consistency and overlap (see Nozick), and (b) investigate whether your current subgoals are the best way to satisfy those preferences, or find better ones.
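In rough Python, the walk from subgoals up to preferences might look something like the sketch below; the “serves” edges answer “What will this goal do for me?”, and the goal names and toy graph are purely my own invention for illustration, not taken from any particular framework:

```python
# Each goal maps to the goals/preferences it serves, i.e. the answer to
# "What will this goal do for me?". Terminal nodes (empty list) are taken
# to be preferences. All entries here are made-up examples.
serves = {
    "exercise daily": ["stay healthy"],
    "stay healthy": ["feel good", "live long"],
    "learn Spanish": ["travel easily", "feel good"],
    "travel easily": [],
    "feel good": [],
    "live long": [],
}

def preferences_of(goal, graph):
    """Follow 'serves' edges upward from a goal until hitting terminal preferences."""
    if not graph.get(goal):
        return {goal}
    found = set()
    for parent in graph[goal]:
        found |= preferences_of(parent, graph)
    return found

# (a) spot overlap: which preferences are reached by more than one subgoal?
# (b) see which preferences each current subgoal actually serves.
for subgoal in ("exercise daily", "learn Spanish"):
    print(subgoal, "->", preferences_of(subgoal, serves))
```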
Apparently this can be derived from the Connection Theory framework, but I haven’t found it necessary to study Connection Theory. Among other things, its proponents make some interesting guesses about what happens when our stated goals conflict.
That’s interestingly dual to what I have in mind: the core notion is that it shows you one thing to do (as opposed to a list, to avoid excess choice or dismay). And if you want to not-do-that, you have choices such as:
“Just give me another random available item.”
“I can’t do that because ___” (add new task as prerequisite; generalizing, this also covers such things as “because I’m not in the right location or at the right time”).
“I don’t need to do that”, which implies that you won’t do anything that depends on this task, and should therefore cause the system to ask you about the validity of those dependents.
That last option is the part that sounds similar to what you’re doing, but it supposes you’ve already entered dependency chains all the way up to preferences. Entering those chains might be another sort of one-thing the app presents you with: “Why do you want to do this thing you entered previously?” (That information isn’t mandatory, since the app should still permit quick entry of simple reminders.)
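A rough sketch of that interaction loop, with a Task class and function names that are my own invention rather than any settled design, might be:

```python
import random

class Task:
    def __init__(self, name):
        self.name = name
        self.done = False
        self.dropped = False
        self.prerequisites = []   # tasks this one is waiting on
        self.dependents = []      # tasks that are waiting on this one

    def ready(self):
        # presentable once every prerequisite is done (or dropped as unnecessary)
        return all(p.done or p.dropped for p in self.prerequisites)

def next_item(tasks):
    """Show exactly one available item, chosen at random, or None if nothing is ready."""
    candidates = [t for t in tasks if not t.done and not t.dropped and t.ready()]
    return random.choice(candidates) if candidates else None

def cant_do_because(task, blocker):
    """'I can't do that because ___': record the blocker as a new prerequisite."""
    task.prerequisites.append(blocker)
    blocker.dependents.append(task)

def dont_need(task):
    """'I don't need to do that': drop it, and return the dependents whose
    validity the system should now ask about."""
    task.dropped = True
    return [d for d in task.dependents if not d.done and not d.dropped]

# Tiny usage example with hypothetical tasks:
phone = Task("find the dentist's phone number")
dentist = Task("book a dentist appointment")
cant_do_because(dentist, phone)
print(next_item([phone, dentist]).name)   # only the phone-number task is ready
```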
Obviously this has a whole lot of scope, the extreme case becoming a complete “outboard brain” planning system, but I’m hoping that (if I ever get around to programming it) it’ll be useful even in a rudimentary form.
My notion is that managing dependencies avoids the problem of a long to-do list which you have to actually look at, consciously rejecting items for not being something for this exact moment, until rejecting all of the items becomes a habit; instead, nearly all of the “list” will be filtered out by some dependency (which ends up being another task, a topic of interest such as a hobby you only do sometimes, a time, a location, etc.), and you need not ever think about it.
That’s also why the user interface I imagine defaults to presenting you with exactly one item at a time: each interaction you have with it gives it more data, but there is never a long list or form inviting you to deal with many items, or many fields to fill out about a single item.
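To illustrate the filtering point, here is another separate sketch (again, every name and context field is invented for the example): each dependency, whether another task, a topic, a time, or a location, is just a predicate over the current context, and anything that fails a predicate never has to be looked at or rejected at all:

```python
# Dependencies as predicates over the current context (all names are made up).
def at_location(place):
    return lambda ctx: ctx["location"] == place

def topic_active(topic):
    return lambda ctx: topic in ctx["topics"]

def task_done(name, done_set):
    return lambda ctx: name in done_set

done = {"buy sandpaper"}

tasks = [
    {"name": "sand the shelf",
     "needs": [at_location("home"), topic_active("woodworking"),
               task_done("buy sandpaper", done)]},
    {"name": "email the landlord", "needs": []},
    {"name": "practice guitar", "needs": [topic_active("music")]},
]

ctx = {"location": "home", "topics": {"woodworking"}}

# Only tasks whose every dependency holds right now are even candidates;
# the rest are silently filtered out instead of being shown and rejected.
candidates = [t["name"] for t in tasks if all(need(ctx) for need in t["needs"])]
print(candidates)   # ['sand the shelf', 'email the landlord']
```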
But this is all vaporware.