And now I think I know what I might do for a hobby during exams month and summer vacation. Last I looked at the source-code, I’d just have to write some data structures describing environment-observations (let’s say… of the current working directory of a Unix filesystem) and potential actions (let’s say… Unix system calls) in order to get the experiment up and running. Then it would just be a matter of rewarding the agent instance for any behavior I happen to find interesting, and watching what happens.
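For concreteness, here is a minimal sketch of what those data structures might look like; `Observation`, `Action`, and the manual reward function are my own illustrative names in Python, not whatever the framework's source actually defines:

```python
import os
from dataclasses import dataclass

# Hypothetical observation type: a snapshot of the current working directory.
@dataclass(frozen=True)
class Observation:
    cwd: str
    entries: tuple  # sorted directory listing, frozen so observations are hashable

    @staticmethod
    def capture() -> "Observation":
        cwd = os.getcwd()
        return Observation(cwd=cwd, entries=tuple(sorted(os.listdir(cwd))))

# Hypothetical action type: a named system call plus its arguments.
@dataclass(frozen=True)
class Action:
    syscall: str   # e.g. "mkdir", "chdir", "unlink"
    args: tuple = ()

# Manual reward channel: the experimenter eyeballs each transition and
# scores whatever behavior they happen to find interesting.
def reward(before: Observation, action: Action, after: Observation) -> float:
    print(f"{action.syscall}{action.args}: {before.entries} -> {after.entries}")
    return float(input("reward? "))
```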
Initial prediction: since I won’t have a clearly-developed reward criterion and the agent won’t have huge exponential sums of CPU cycles at its disposal, not much will happen.
However, I do strongly believe that the agent will not suddenly develop a moral sense out of nowhere.
No. But it will be eminently boxable. In fact, if you're not nuts, you'll be running it in a box.