Readers don’t know what your post is about. Your comment explains “My goal …” but that should be the start of the post, orienting the reader.
How does your hypothetical help identify possible dangling units? You’ve worked it out in your head; that should be the second part of the post, working through the logic: here is my goal, here is the obstacle, here is how I get around it.
Also, Tool AI is a conclusion, possibly a wrong one (I’m still reading the Tool AI post), of a more general point I was trying to make, which I have also not found on this site:
What technical problems traditionally associated with AI do NOT need to be solved to achieve the effects of friendly AI?
It’s allowed to ask its programmers an arbitrary number of stupid questions, and to just halt, refusing to do anything, if they are unavailable to answer them.
It’s allowed to just Google things, ask about them on Stack Exchange, or enlist Mechanical Turk rather than coming up with its own solutions to every problem.
It doesn’t need to work in real time; it’s OK if you have to run it for 5 minutes to answer a question a human could answer instantly, as long as the quality is just as good.
It doesn’t need to pass the aspects of the Turing test that have to do with mimicking the flaws of humans convincingly.
It can run on cloud servers the team can only afford to rent for just as long as it takes for it to FOOM. Specifically, once you have an AI that could FOOM on your current computer in decades, you can rent cloud servers that do those same calculations in minutes.
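To make the scale of that claim concrete, here is a rough back-of-envelope calculation. The “20 years” and “5 minutes” figures are my own illustrative assumptions, not numbers from the post:

```python
# Back-of-envelope: how much parallel cloud compute turns "decades" into "minutes"?
# Assumed (illustrative) figures: "decades" ~ 20 years, "minutes" ~ 5 minutes.
SECONDS_PER_YEAR = 365 * 24 * 3600

local_runtime_s = 20 * SECONDS_PER_YEAR  # ~20 years on the team's own machine
target_runtime_s = 5 * 60                # ~5 minutes on rented cloud hardware

speedup_needed = local_runtime_s / target_runtime_s
print(f"Required speedup: ~{speedup_needed:.1e}x")  # on the order of a few million
```

So under these assumptions the team would need roughly a few million machines’ worth of compute for those five minutes, which is a question of rental cost, not of owning the hardware.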
Generally, people have a lot more patience with someone saying “I have an answer to this complicated question, but I would rather you guess first” when they’ve built up a lot of credit first.
I don’t want to influence people with my opinion before they’ve had a chance to express theirs.
I am starting to explain it, though, in the comments.
But here too, I’m interested in finding holes in my reasoning, not spreading an opinion that I don’t yet have a sufficient reason to believe is right.