Douglas Reay graduated from Cambridge University (Trinity College) in 1994. He has since worked in the computing and educational sectors.
Douglas_Reay
Trustworthy Computing
shminux wrote a post about something similar:
Mathematics as a lossy compression algorithm gone wild
Possibly the two effects combine?
Other people have written some relevant blog posts about this, so I’ll provide links:
Reduced impact AI: no back channels
Summoning the Least Powerful Genie
The advantage of not being open-ended
For example, if anyone is planning on setting up an investment vehicle along the lines described in the article:
Investing in Cryptocurrency with Index Tracking
with periodic rebalancing between the currencies.
I’d be interested (with adequate safeguards).
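To illustrate what “periodic rebalancing between the currencies” means in practice, here is a minimal sketch. The function, ticker names, and prices are all hypothetical placeholders, not part of any actual index-tracking product:

```python
# Hypothetical sketch of periodic rebalancing to an equal-weight basket.
# Asset names and prices are made-up placeholders for illustration only.

def rebalance_trades(holdings, prices):
    """Return the trades (in units of each asset) needed to restore
    an equal-weight allocation across all assets in the basket."""
    total_value = sum(units * prices[asset] for asset, units in holdings.items())
    target_value = total_value / len(holdings)  # equal dollar weight per asset
    return {asset: target_value / prices[asset] - units
            for asset, units in holdings.items()}

# Example: a two-coin portfolio that has drifted entirely into COIN_A.
holdings = {"COIN_A": 10.0, "COIN_B": 0.0}
prices = {"COIN_A": 1.0, "COIN_B": 1.0}
print(rebalance_trades(holdings, prices))  # sell 5 COIN_A, buy 5 COIN_B
```

Rebalancing like this systematically sells whichever currency has risen relative to the others and buys whichever has fallen, which is the mechanical source of the effect the linked article describes.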
When such a situation arises again (an investment opportunity that is generally thought to be worthwhile, but which has lower than expected uptake due to ‘trivial inconveniences’), I wonder whether that is in itself an opportunity for a group of rationalists to cooperate, by outsourcing as much of the inconvenience as possible to just a few members of the group? Something like:
“Hey, LessWrong. I want to invest $100 in new technology foo, but I’m being put off by the upfront time investment of 5-20 hours. If anyone wants to make the offer of {I’ve investigated foo, I know the technological process needed to turn dollars into foo investments, here’s a step-by-step guide that I’ve tested and which works; or post me a cheque and an email address, and I’ll set it up for you and send you back the access details}, I’d be interested in being one of those who pays you compensation for providing that service.”
There’s a lot LessWrong (or a similar group) could set up to facilitate such outsourcing, such as letting multiple people register interest in the same potential offer, and providing some filtering or a guarantee against someone claiming the offer and then ripping people off.
The ability to edit this particular post appears to be broken at the moment (bug submitted).
In the mean time, here’s a link to the next part:
https://www.lesserwrong.com/posts/SypqmtNcndDwAxhxZ/environments-for-killing-ais
Edited to add: It is now working again, so I’ve fixed it.
Environments for killing AIs
Defect or Cooperate
Don’t put all your eggs in one basket
> Also maybe this is just getting us ready for later content
Yes, that is the intention.
Parts 2 and 3 now added (links in post), so hopefully the link to building aligned AGI is now clearer?
The other articles in the series have already been written, but it was suggested that, rather than posting the whole series at once, it is kinder to post one part a day, so as not to flood the frontpage.
So, unless I hear otherwise, my intention is to do that and edit the links at the top of the article to point to each part as it gets posted.
Optimum number of single points of failure
Press Your Luck (1/3)
Why mathematics works
Companies writing programs to model and display large 3D environments in real time face a similar problem, in that they only have limited resources. One workaround they commonly use is “impostors”.
A solar-system-sized simulation of a civilisation that has not made observable changes to anything outside our own solar system could take a lot of shortcuts when generating the photons that arrive from outside. In particular, until a telescope or camera of a particular resolution has been invented, would the simulators need to bother generating thousands of years of such photons in more detail than could be captured by the devices yet present?
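The impostor trick amounts to a simple level-of-detail decision: beyond some distance, a cheap pre-rendered billboard is indistinguishable from the full 3D model, so the expensive version is never computed. A minimal sketch (the threshold value is an arbitrary placeholder):

```python
# Hypothetical level-of-detail sketch of the "impostor" technique.
# The distance threshold is an arbitrary made-up number.

def choose_representation(distance, impostor_threshold=100.0):
    """Beyond the threshold, a flat pre-rendered 'impostor' billboard
    looks identical to the full model, so rendering cost is saved."""
    return "full_model" if distance < impostor_threshold else "impostor"

print(choose_representation(10.0))   # full_model: close enough to see detail
print(choose_representation(500.0))  # impostor: the cheap fake suffices
```

The analogous move for a solar-system simulation would be keying the threshold to the best observation device yet invented, rather than to distance alone.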
Press Your Luck (3/3)
Press Your Luck (2/3)
Look for people who can state your own position as well as (or better than) you can, and yet still disagree with your conclusion. They may be aware of additional information that you are not yet aware of.
In addition, if someone who knows more than you about a subject on which you disagree also has views about several other areas that you do know a lot about, and their arguments in those other areas are generally constructive and well balanced, pay close attention to them.
For civilization to hold together, we need to make coordinated steps away from Nash equilibria in lockstep. This requires general rules that are allowed to impose penalties on people we like or reward people we don’t like. When people stop believing the general rules are being evaluated sufficiently fairly, they go back to the Nash equilibrium and civilization falls.
Two similar ideas:
There is a group evolutionary advantage for a society to support punishing those who defect from the social contract.
We get the worst democracy that we’re willing to put up with. If you are not prepared to vote against ‘your own side’ when they bend the rules, that level of rule bending becomes the new norm. If you accept the excuse “the other side did it first”, then the system becomes unstable, because there are various biases (both cognitive, and deliberately induced by external spin) that make people evaluate the transgressions of others more harshly than they evaluate those of their own side.
This is one reason why a thriving civil society (organisations, whether charities or newspapers, minimally under the control of, or influenced by, the state) promotes stability: such organisations provide a yardstick, external to the political process, for measuring how vital it is to electorally punish a particular transgression.
A game of soccer in which referee decisions are taken by a vote of the players turns into a mob.
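The Nash-equilibrium point above can be sketched with a standard prisoner's dilemma. The payoff numbers are the conventional illustrative values (T=5, R=3, P=1, S=0), not from the original comment: without an enforced penalty, defection is the best response even against a cooperator; with a sufficiently large penalty, cooperation becomes stable.

```python
# Hypothetical prisoner's-dilemma sketch: defection is the Nash equilibrium
# unless rule-breaking is reliably penalised. Payoffs are the standard
# illustrative values T=5, R=3, P=1, S=0 (my move listed first).

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_response(opponent_move, penalty_for_defection=0):
    """Pick the move maximising payoff, after subtracting any
    externally enforced penalty for defecting."""
    def score(move):
        base = PAYOFF[(move, opponent_move)]
        return base - penalty_for_defection if move == "D" else base
    return max(("C", "D"), key=score)

print(best_response("C"))                           # D: 5 beats 3
print(best_response("C", penalty_for_defection=3))  # C: 3 beats 5 - 3 = 2
```

This is the sense in which fairly evaluated general rules, imposing penalties regardless of side, are what keep everyone away from the all-defect equilibrium.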