What stops you from making a change that is addictive or self-amplifying? For example, suppose a subtle tweak makes you less averse to making another subtle tweak in the same direction. A few thousand iterations later, your network is trashed.
http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
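To make that feedback loop concrete, here is a toy sketch (my own illustration, not anything from the linked post): each accepted tweak is tiny, but it also erodes the reluctance to accept the next one, so acceptance snowballs, whereas a fixed level of reluctance keeps the drift small.

```python
# Toy model of the slippery slope: each accepted tweak is subtle, but it also
# lowers the aversion to accepting the next one. All parameters are made up.
import random

def run(iterations=5000, erosion=0.95, seed=0):
    rng = random.Random(seed)
    value = 0.0        # some tuned property of your network
    aversion = 0.99    # probability of rejecting a further tweak in this direction
    for _ in range(iterations):
        if rng.random() > aversion:   # a tweak slips past your reluctance
            value += 0.01             # the tweak itself is tiny
            aversion *= erosion       # ...but it erodes resistance to the next one
    return value

print("self-amplifying:", run(erosion=0.95))  # aversion collapses; drift runs away
print("fixed aversion: ", run(erosion=1.0))   # tweaks stay rare; drift stays small
```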
It seems to me that the only safe way to do this would be to permit only other uploaded entities to make the edits, working in teams, with careful observation and testing of the results. Older versions of yourself might be team members.
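As a sketch of what that review rule might look like (the names, thresholds, and quorum size here are hypothetical, not a worked-out proposal): an edit is applied only after a quorum of independent reviewers, possibly including an older snapshot of you, has signed off on results from testing a sandboxed copy.

```python
# Hypothetical sketch of "edits only via a review team". All names here
# (EditProposal, Reviewer, the drift threshold) are illustrative.
from dataclasses import dataclass, field

@dataclass
class EditProposal:
    description: str
    approvals: set = field(default_factory=set)

class Reviewer:
    def __init__(self, name):
        self.name = name

    def review(self, proposal, test_results):
        # Approve only if a tested copy stayed close to baseline behaviour.
        if test_results.get("behavioural_drift", 1.0) < 0.01:
            proposal.approvals.add(self.name)

def apply_edit(proposal, team, quorum):
    # The person being edited never approves their own edit, but an older
    # snapshot of that person can sit on the team.
    if len(proposal.approvals) >= quorum:
        print("applying:", proposal.description)
    else:
        print("rejected:", proposal.description)

team = [Reviewer("upload_A"), Reviewer("upload_B"), Reviewer("your_older_snapshot")]
proposal = EditProposal("reduce anxiety response by 5%")
results = {"behavioural_drift": 0.004}   # measured on a sandboxed copy, not the live mind
for reviewer in team:
    reviewer.review(proposal, results)
apply_edit(proposal, team, quorum=3)
```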
Also, the hardware design would need to be extremely well thought out, so that it is not possible for someone to Blue Pill attack you without your knowledge, or directly overwrite your neural structures with someone else’s patterns. The hardware would have to be designed with security permissions inherently baked in; here’s a blog post where Drexler discusses this:
http://metamodern.com/2011/08/03/quiz-question-what-is-wrong-with-this-model-of-computation/
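One way to picture “permissions baked in” is capability-style access: a write to a region of your neural state only goes through if the writer presents an unforgeable token for that region. This is just a toy illustration of the idea, not a summary of Drexler’s post; every name in it is made up.

```python
# Toy capability-style check: writes to a neural region require an unforgeable
# token for that region. Everything here is illustrative.
import secrets

class NeuralRegion:
    def __init__(self, name):
        self.name = name
        self._token = secrets.token_bytes(16)   # stands in for a hardware-held key
        self.state = {}

    def grant_write_capability(self):
        # Handed out only through a review process like the one sketched above.
        return (self.name, self._token)

    def write(self, capability, key, value):
        region, token = capability
        if region != self.name or not secrets.compare_digest(token, self._token):
            raise PermissionError("no write capability for " + self.name)
        self.state[key] = value

hippocampus = NeuralRegion("hippocampus")
cap = hippocampus.grant_write_capability()
hippocampus.write(cap, "synapse_1234", 0.72)                  # permitted
try:
    hippocampus.write(("hippocampus", b"guess"), "synapse_1234", 0.0)
except PermissionError as err:
    print("blocked:", err)                                    # forged capability rejected
```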