Thank you for this thread. I have been reading a lot of the Sequences here, and I have a few stupid questions about FAI:
1. What research has been done on frameworks for managing an AI's information flow? For example, just before an AI 'learns', it will presumably be a piece of software rapidly processing incoming data and trying to build up an understanding of it. What sorts of data structures and processes have been experimented with for handling this information?
2. Has there been an effort to build (perhaps crowdsource?) a dataset classifying what humans consider "good" and "bad", and specifically, how could such a dataset be used to influence an AI's decisions?
3. Regardless of how it could be implemented, what might be the safest set of goals for an AI? For something to evolve, it seems that a drive is needed; otherwise the program would not bother continuing. Could "help humanity" work if it were tied to point 2, i.e. to a human-controlled list of "things not to do"? (A toy sketch of what I mean follows below.)
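
To make points 2 and 3 concrete, here is a minimal Python sketch of the kind of thing I am imagining: a crowdsourced table of actions labelled "good"/"bad", plus a hard human-controlled "do not do" list, used to filter and score the candidate actions of an agent whose goal is "help humanity". All of the names, data, and the scoring scheme are made up purely for illustration; this is not how any existing system works.

```python
from dataclasses import dataclass

# Toy crowdsourced dataset: each entry is an action description with
# human votes for "good" and "bad". Entirely made-up data.
CROWD_LABELS = {
    "cure a disease":         {"good": 980, "bad": 20},
    "tell a white lie":       {"good": 400, "bad": 600},
    "seize control of banks": {"good": 30,  "bad": 970},
}

# Hard, human-controlled "things not to do" list (point 3).
FORBIDDEN = {"seize control of banks"}

@dataclass
class Candidate:
    action: str
    expected_benefit: float  # the agent's own estimate of "helping humanity"

def crowd_score(action: str) -> float:
    """Fraction of crowd votes labelling the action 'good' (0.5 if unknown)."""
    votes = CROWD_LABELS.get(action)
    if votes is None:
        return 0.5
    return votes["good"] / (votes["good"] + votes["bad"])

def choose_action(candidates: list[Candidate]) -> Candidate | None:
    """Pick the best-scoring candidate, never a forbidden one."""
    allowed = [c for c in candidates if c.action not in FORBIDDEN]
    if not allowed:
        return None
    # Combine the agent's goal ("help humanity") with the crowd's judgement.
    return max(allowed, key=lambda c: c.expected_benefit * crowd_score(c.action))

if __name__ == "__main__":
    options = [
        Candidate("cure a disease", expected_benefit=0.9),
        Candidate("tell a white lie", expected_benefit=0.2),
        Candidate("seize control of banks", expected_benefit=0.8),
    ]
    best = choose_action(options)
    print(best.action if best else "no permissible action")
```

Even in this toy version the weaknesses are obvious (actions with no labels get a neutral 0.5, and the scoring can presumably be gamed), which is part of why I am asking whether anything more principled has been tried.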