Thanks, that does help clarify the challenges for me.
I was just scrolling through Metaculus and its predictions for the US elections. I noticed that pretty much every case was a conditional: if Trump wins / if he doesn't win. I had two thoughts about the estimates for these. All seem to suggest the outcomes are worse under Trump, but that assessment of the outcome being worse is certainly subject to my own biases, values, and preferences. (For example, for US voters, is it really a bad outcome if the probability of China attacking Taiwan increases under Trump? I think so, but others may well see the costs necessary to reduce that likelihood as high for something that does not directly involve the USA.)
So my first thought was: how much bias should I infer is present in these probability estimates? I'm not sure. But that relates a bit to my other thought.
In one sense you could naively reason that if the probability is p under one candidate, it must be 1 - p under the other, since only two candidates actually exist. But I think it is also clear that the two probability distributions don't come from the same pool, so conceivably you could change the name to Harris and get the exact same estimates.
So I was thinking, what if Metaculus did run the two cases side by side? Would seeing p(Harris) + p(Trump) significantly different from 1 suggest one should have lower confidence in the estimates? I am not sure about that.
What if we see something like p(H) approximately equal to p(T)? Does that suggest the selected outcome is poorly chosen, since it is largely independent of the elected candidate, making the estimates largely meaningless in terms of election outcomes? I have a stronger sense this is the case.
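To make my own reasoning concrete, here is a minimal sketch of the decomposition I have in mind. All the numbers are invented for illustration and are not real Metaculus estimates:

```python
# Hypothetical numbers only, to illustrate the relationship between the
# conditional estimates and the unconditional chance of some outcome X.

p_T, p_H = 0.5, 0.5        # made-up probabilities of each candidate winning
p_X_given_T = 0.30         # made-up P(outcome X | Trump wins)
p_X_given_H = 0.28         # made-up P(outcome X | Harris wins)

# Law of total probability: the unconditional chance of the outcome.
p_X = p_X_given_T * p_T + p_X_given_H * p_H

# The two conditionals need not sum to 1; they are probabilities of the same
# outcome under different conditions, not complements of each other.
print(p_X_given_T + p_X_given_H)       # ~0.58; nothing forces this toward 1.0

# If the conditionals are nearly equal, learning who won barely moves the
# estimate, i.e. the outcome is roughly independent of the election result.
print(abs(p_X_given_T - p_X_given_H))  # ~0.02: the election tells you little about X
print(p_X)                             # ~0.29, close to either conditional
```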
So my bottom line now is that I should probably not hold high confidence that the estimates on these outcomes are really meaningful with regard to the election's impacts.
I had something of a similar reaction, but on the note about far-UV not having the same problems as other UV sterilization (i.e., also being harmful to humans), I gather the point is about locality. UV in ducts will kill viruses in the air system, but the spread of an airborne illness goes host-to-target before the air ever passes through the air system.
As such, it seems that while the in-duct UV solution would help limit spread, it's not going to do much to clean the air in the room while people are in it exhaling, coughing, sneezing, talking...
I suspect it does little to protect the people directly next to or in front of a contagious person, but it's probably good for those practicing that old six-foot rule (or whatever the arbitrary distancing rule was).
Just my guess though.
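To put a rough shape on that guess, here is a toy well-mixed-room calculation. Every rate in it is invented for illustration, and a well-mixed model by construction ignores the short-range exposure I suspect matters most:

```python
# Toy steady-state, well-mixed room model; all numbers are hypothetical.
# Removal mechanisms are expressed as equivalent air changes per hour (ACH).

ventilation_ach = 2.0      # outdoor air exchange (assumed)
in_duct_uv_ach = 2.0       # in-duct UV only treats air that recirculates
                           # through the duct, so it is capped by that rate
in_room_faruv_ach = 4.0    # far-UV acting directly in the occupied space (assumed)

emission_quanta_per_hr = 10.0   # infectious emission rate (arbitrary units)
room_volume_m3 = 100.0

def steady_state_concentration(total_ach):
    """Quanta per cubic meter at steady state in a perfectly mixed room."""
    return emission_quanta_per_hr / (total_ach * room_volume_m3)

baseline = steady_state_concentration(ventilation_ach)
with_duct_uv = steady_state_concentration(ventilation_ach + in_duct_uv_ach)
with_faruv = steady_state_concentration(ventilation_ach + in_room_faruv_ach)

print(baseline, with_duct_uv, with_faruv)
# Both interventions cut the room-average concentration, but neither term
# captures the short-range plume between a contagious person and someone
# standing right next to them, which is the exposure path in question.
```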
Quick comment regarding research.
If far-UV is really so great, and not that simple to deploy, I would assume that any company selling and installing it would not be some small mom-and-pop type operation. If that holds, why aren't the companies that want to promote and sell these systems using them themselves and then collecting the data?
Or would that type of investment be seen as too costly even for those with a direct interest in producing the results to bolster sales and grow the network/ecosystem?
I think perhaps a first one might be:
On what evidence do I conclude that what I think I know is correct/factual/true, and how strong is that evidence? To what extent have I verified that view, and just how extensively should I verify the evidence?
After that might be a similar approach to the implications or outcomes of applying actions based on what one holds as truth/fact.
I tend to think of rationality as a process rather than endpoint. Which isn’t to say that the destination is not important but clearly without the journey the destination is just a thought or dream. That first of a thousand steps thing.
What happens when Bob can be found in or out of the set of bald things at different times or in different situations, but we might not understand (or even be well aware of) the conditions that drive Bob's membership in the set when we're evaluating baldness and Bob?
Can membership in baldness turn out to be some type of quantum state thing?
That might be a basis for separating the concept of fuzzy language from fuzzy truth. But I would agree that if we can identify all possible cases where Bob is or is not in the set of bald things, one might claim truth is no longer fuzzy; one then needs to prove that knowledge of all possible states has been established, I think.
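To separate the two ideas I'm gesturing at, here is a small sketch. The hair counts and thresholds are arbitrary; it's only meant to contrast graded membership with crisp membership that shifts with conditions we may not have modeled:

```python
# Arbitrary illustration: two ways "Bob is bald" can look fuzzy.

def fuzzy_membership(hair_count, full_head=100_000):
    """Graded (fuzzy-truth) membership: a degree between 0 and 1."""
    return max(0.0, min(1.0, 1.0 - hair_count / full_head))

def crisp_membership(hair_count, context):
    """Crisp membership whose threshold depends on conditions we may not
    have modeled (fuzzy language about which standard applies)."""
    threshold = {"barbershop": 20_000, "dermatology": 60_000}[context]
    return hair_count < threshold

bob = 30_000
print(fuzzy_membership(bob))                # 0.7: partially in the set
print(crisp_membership(bob, "barbershop"))  # False under one standard
print(crisp_membership(bob, "dermatology")) # True under another
# If we could enumerate every condition that sets the threshold, the second
# kind of fuzziness disappears, but only once that knowledge is established.
```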
I really like the observation in your Further Thoughts point. I do think that is a problem people need to look at, as I would guess many will view the government involvement from an acting-in-the-public-interest view rather than as acting either in self-interest (as problematic as that might be when the players keep changing) or from a special-interest/public-choice perspective.
There is probably some great historical analysis already written about past events that might serve as indicators of the pros and cons here. Any historians in the group here?
Strong upvote based on the first sentence. I often wonder why people think an ASI/AGI will want anything that humans do or even see the same things that biological life sees as resources. But it seems like under the covers of many arguments here that is largely assumed true.
I am a bit confused on point 2. Other than trading or doing it yourself, what other ways of getting something are you thinking about?
That is certainly a more directly related, non-obvious aspect for verification. Thanks.
I assumed John was pointing at verifying that, perhaps, the chemicals used in the production of the chair might have some really bad impact on the environment, start causing a problem with the food-chain ecosystem, and make food much scarcer for everyone, including the person who bought the chair, in the meaningfully near future. Something along those lines.
As you note, verifying the chair functions as you want—as a place to sit that is comfortable—is pretty easy. Most of us probably do that without even really thinking about it. But will this chair “kill me” in the future is not so obvious or easy to assess.
I suspect that, at the core, this is a question about the assumption that we are evaluating a simple, non-complex world, when doing so in an inherently complex world doesn't allow true separability into simple and independent structures.
In terms of the hard-to-verify aspect, while it's true that any one person will face any number of challenges, do we live in a world where one person does anything on their own?
How would the open-source model influence outcomes? When pretty much anyone can take a look, and presumably many do, does the level of verification, or ease of verification, improve in your model?
This is kind of speculative on my part and nothing I've tried to research for the comment. I am wondering whether the tort version of reasonableness is a good model for new, poorly understood technologies. I'm somewhat thinking about the picture in https://www.lesswrong.com/posts/CZQYP7BBY4r9bdxtY/the-best-lay-argument-is-not-a-simple-english-yud-essay distinguishing between narrow AI and AGI.
Tort law reasonableness seems okay for narrow AI. I am not so sure about the AGI setting though.
So I wonder if a stronger liability model wouldn't be better until we have a good bit more direct experience with more AGI-ish models and functionality/products, and a better data set to assess.
The public-choice-type cynic in me has to wonder, if the law is making a strong case for the tort version of liability under a reasonable-man standard, whether that is more about limiting the liability for harms the companies might be enabling (I'm thinking of what we would have if social media companies faced stronger obligations for what is posted on their networks rather than the immunity they were granted) and less about protecting society in general.
Over time perhaps liability moves more toward the tort world of a reasonable man, but is that where this should start? It seems like a lower bar than is justified.
I find this rather exciting, and clearly the cryonics implications are positive. But beyond that (and yes, this is really sci-fi, down-the-road thinking here), the implications for education/learning and treatment of things like PTSD seem huge, assuming we can figure out how to control these. Of course I'm ignoring some of the real downsides, like manipulation of memory for bad reasons or an Orwellian application. I am not sure those types of risks are that large in most open societies.
Thanks. I just took a quick glance at the abstract, but it looks interesting. I will have something to read while waiting at the airport for a flight tomorrow.
Is that thought one that is generally shared among those working in the field of memory, or is it something more new/cutting-edge? It's a very interesting statement, so if you have some pointers to a (not too difficult) paper on how that works, or just had the time to write something up, I for one would be interested and grateful.
I think you're right that the incentive structure around AI safety is important for getting those doing the work to do it as well as they can. I think there might be something to be said for the suggestion of moving to cash payment over equity, but I think that needs a lot more development.
For instance, if everyone is paid up front for the work they are doing to protect the world from some AI takeover in the future, then they are no longer tied to that future in terms of their current state. That might not produce any better results than equity that could decline in value in the future.
Ultimately the goal has to be that those able to do the work, and actually doing it, have a pretty tight personal stake in the future state of AI. It might even be the case that such alignment of the research effort is only loosely connected to compensation.
Additionally, as you note, it's not only about those working to limit a bad outcome for humans in general from AGI, but also about what the incentives are for the companies as a whole. Here I think the discussion regarding AI liabilities and insurance might matter more, which also opens up a whole question about corporate law. Years ago, pre-1930s, banking law used to hold bankers liable for twice the losses from bank failures, to make them better at risk management with other people's money. That seems to have been a special case that didn't apply to other businesses, even if they were largely owned by outsiders. Perhaps making those who control AI development and deployment, or who largely finance the efforts, personally responsible might be a better incentive structure.
All of these are difficult to work through to get a good, and fair, structure in place. I don't think any one approach will ultimately be the solution, but all or some combination of them might be. But I also think it's a given that the risk will always remain, so figuring out just what level of risk is acceptable is also needed, and problematic in its own way.
Actually checking those hypotheses statistically would be a pretty involved project; subtle details of accounting tend to end up relevant to this sort of thing, and the causality checks are nontrivial. But it’s the sort of thing economists have tools to test.
Yes, it would be a challenge statistically, and measurement would be a challenge as well. It's not really about subtle accounting details but about the economic costs: opportunity costs, subjective costs, expected costs. Additionally, economics has been trying to explain the existence, size, and nature of the firm for at least a century but still has not come to a firm conclusion.
I suspect a big part of the problem here is that a firm is a rather complex "thing," and it's not clear that any single, internally consistent explanation can account for the phenomenon, as the whole does not necessarily reduce to some easily understood collection of parts. For instance, at a certain size do we think of a firm as a market participant maximizing profits (or some internal dominance metric), as a hybrid that is part market participant and part internal market, or perhaps as no longer even a market participant, even when providing goods/services to some external market, but really functioning as an alternative market form for those acting within that large firm? If you accept the view that explaining the firm requires explanations at each of those levels, and believe such a theory exists, then you also have to believe that some unified theory of micro- and macroeconomics exists, as it's basically the same problem.
So I'm not sure it's correct to say "economists have tools to test" in the sense that they will come up with clear and uncontested answers, rather than that they will perhaps have shed a bit of light on something but have not yet identified the elephant they are touching.
First, I have to note this is way more than I can wrap my head around in one reading (in fact it was more than I could read in one sitting, so I really have not finished reading it), but thank you for posting this, as it presents a very complicated subject in a framework I find more accessible than prior discussions here (or anywhere else I've looked). But then I'm just a curious outsider to this issue who occasionally explores the discussion, so information overload is normal, I think.
I particularly like the chart and how it laid out the various states/outcomes.
Could you clarify a bit here? Is Hanson talking about specific cultures or about all instances of culture?