The sensible approach is to demand a stream of payments over time. If you reveal it to others who also demand streams, that will cut how much of a stream they are willing to pay you.
Robin Hanson
You are very much in the minority if you want to abolish norms in general.
NDAs are also legal in the case where the info was known before the agreement. For example, Trump used NDAs to keep affairs secret.
“models are brittle” and “models are limited” ARE the generic complaints I pointed to.
We have lots of models, such as supply and demand, that are useful even when the conclusions follow pretty directly. The question is whether such models are useful, not whether they are simple.
There are THOUSANDS of critiques out there of the form “Economic theory can’t be trusted because economic theory analyses make assumptions that can’t be proven and are often wrong, and conclusions are often sensitive to assumptions.” Really, this is a very standard and generic critique, and of course it is quite wrong, as such a critique can be equally made against any area of theory whatsoever, in any field.
The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of “unawareness”. If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn’t work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.
“Hanson believes that the principal-agent literature (PAL) provides strong evidence against rents being this high.”
I didn’t say that. This is what I actually said:
“surely the burden of ‘proof’ (really argument) should lie on those who say this case is radically different from most found in our large and robust agency literatures.”
Uh, we are talking about holding people to MUCH higher rationality standards than the ability to parse philosophy arguments.
“At its worst, there might be pressure to carve out the parts of ourselves that make us human, like Hanson discusses in Age of Em.”
To be clear, while some people do claim that such things might happen in an Age of Em, I’m not one of them. Of course I can’t exclude such things in the long run; few things can be excluded in the long run. But that doesn’t seem at all likely to me in the short run.
You are a bit too quick to allow the reader the presumption that they have more algorithmic faith than the other folks they talk to. Yes if you are super rational and they are not, you can ignore them. But how did you come to be confident in that description of the situation?
Seems like you guys might have (or be able to create) a dataset on who makes what kind of forecasts, and who tends to be accurate or hyped re them. Would be great if you could publish some simple stats from such a dataset.
To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, it is to say that the chance on this question seemed an unusual combination of verifiable within a year and relevant to the chances on the other topics I talked about.
If you specifically want models with “bounded rationality”, why not add that as a search term: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&as_vis=1&q=bounded+rationality+principal+agent&btnG=
See also:
https://onlinelibrary.wiley.com/doi/abs/10.1111/geer.12111
https://www.mdpi.com/2073-4336/4/3/508
https://etd.ohiolink.edu/!etd.send_file?accession=miami153299521737861&disposition=inline
The % of world income that goes to computer hardware & software, and the % of useful tasks that are done by them.
Most models have an agent who is fully rational, but I’m not sure what you mean by “principal is very limited”.
I’d also want to know that ratio X for each of the previous booms. There isn’t a discrete threshold, because analogies fall on a continuum from more to less relevant. An unusually high X would be noteworthy and relevant, but would not make prior analogies irrelevant.
The literature is vast, but this gets you started: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=%22principal+agent%22&btnG=
Yup.