This started happening in Hawaii, and to a lesser extent in Arizona. The resolution, apart from reducing net metering subsidies, has been to increase the fixed component of the bill (which pays for the grid connection) and reduce the variable component. My impression is that this has been a reasonably effective solution, assuming people don’t want to cut their connection entirely.
Larks
Moral Trade, Impact Distributions and Large Worlds
I agree with you that basically anything in the stock market has much less counterparty risk than that. I disagree with basically all non-trading examples you give.
It’s not just the stock market, it’s true for the bond market, the derivatives market, the commodities market… financial markets, a category which includes prediction markets, cannot function effectively with counterparty risk anything like 5%.
My sense is around 1⁄20 Ubers don’t show up, or if they show up, fail to do their job in some pretty obvious and clear way.
If the Uber doesn’t show up I’m not sure that’s counterparty risk: you haven’t paid anything, so it seems more like them declining the contract. The equivalent for a prediction market would be if you hit ‘buy’ and the button didn’t work, not when you have paid the money and then don’t get the payout you are owed. That’s much less bad than if the trade went through and then was settled incorrectly.
I think that’s false, at least the statistics on wage theft seemed quite substantial to me. I am kind of confused how to interpret these, but various different studies on Wikipedia suggest wage theft on-average to be around 5%-15% (higher among lower-income workers).
I think those studies have significant methodological flaws, though unfortunately I can’t remember the specific issues off the top of my head, so this may not be very convincing to you.
I agree this is true for gas and water (and mostly true for electricity, though PG&E is terrible and Berkeley really has a lot of outages).
According to the first Google hit, PG&E said the average customer suffered 255.9 minutes of outage in 2013, which is a lot higher than I expected, but is still only 100*255.9/(60*24*365) ≈ 0.05%
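For what it’s worth, a quick sanity check of that arithmetic (assuming the 255.9 minutes is a per-customer annual average, as outage duration figures like this usually are):

```python
# Sanity check: 255.9 minutes of outage per year as a share of all minutes in a year.
outage_minutes = 255.9
minutes_per_year = 60 * 24 * 365  # 525,600

unavailability = outage_minutes / minutes_per_year
print(f"Unavailability: {unavailability:.4%}")       # ~0.0487%
print(f"Availability:   {1 - unavailability:.4%}")   # ~99.95%
```

So even an unusually bad utility delivers roughly 99.95% availability, well above the 95% threshold being discussed.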
In most domains except the most hardened part of the stock market counterparty risk is generally >5%.
This seems quite wrong to me:
High Yield Corporate Bond OAS spreads are <5% according to Bloomberg, and most of that is economic risk, not “you will get screwed by a change of rules” risk.
Trades on US stock exchanges almost always succeed, with many more nines of reliability than just one.
If I buy a product in a box in a supermarket the contents of the box match the label >>95% of the time.
Banks make errors with depositor balances <<5% of the time.
Most employers manage to pay fortnightly wages on time, without missing even a single paycheque in a year.
Once you’re seated in an Uber or Taxi they take you to your destination almost all the time.
Your utility company fulfills its obligations to supply your house >>95% of the time under all but the most extreme circumstances.
Most employees turn up >95% of non-holiday days, and most students maintain >95% attendance.
A bit dated but have you read Robin’s 2007 paper on the subject?
Prediction markets are low volume speculative markets whose prices offer informative forecasts on particular policy topics. Observers worry that traders may attempt to mislead decision makers by manipulating prices. We adapt a Kyle-style market microstructure model to this case, adding a manipulator with an additional quadratic preference regarding the price. In this model, when other traders are uncertain about the manipulator’s target price, the mean target price has no effect on prices, and increases in the variance of the target price can increase average price accuracy, by increasing the returns to informed trading and thereby incentives for traders to become informed.
Yes, sorry for being unclear. I meant to suggest that this argument implied ‘accelerate agents and decelerate planners’ could be the desirable piece of differential progress.
LLMs as a Planning Overhang
This post seems like it was quite influential. This is basically a trivial review to allow the post to be voted on.
L’Ésswrong, c’est moi.
I agree in general, but think the force of this is weaker in this specific instance because NonLinear seems like a really small org. Most of the issues raised seem to be associated with in-person work and I would be surprised if NonLinear ever went above 10 in-person employees. So at most this seems like one order of magnitude in difference. Clearly the case is different for major corporations or orgs that directly interact with many more people.
I think there will be some degree to which clearly demonstrating that false accusations were made will ripple out into the social graph naturally (even with the anonymization), and will have consequences. I also think there are some ways to privately reach out to some smaller subset of people who might have a particularly good reason to know about this.
If this is an acceptable resolution, why didn’t you just let the problems with NonLinear ripple out into the social graph naturally?
If most firms have these clauses, one firm doesn’t, and most people don’t understand this, it seems possible that most people would end up with a less accurate impression of their relative merits than if all firms had been subject to equivalent evidence filtering effects.
In particular, it seems like this might matter for Wave if most of their hiring is from non-EA/LW people who are comparing them against random other normal companies.
Sorry, not for 2022.
I would typically aim for mid-December, in time for the American charitable giving season.
Having written an annual review of AI safety organisations for six years, I intend to stop this year. I’m sharing this in case someone else wants to take it on in my stead.
Reasons
It is very time consuming and I am busy.
I have a lot of conflicts of interests now.
The space is much better funded by large donors than when I started. As a small donor, it seems like you either donate to:
A large org that OP/FTX/etc. support, in which case funging is ~total and you can probably just support any of them.
A large org that OP/FTX/etc. reject, in which case there is a high chance you are wrong.
A small org OP/FTX/etc. haven’t heard of, in which case I probably can’t help you either.
Part of my motivation was to ensure I stayed involved in the community, but that is no longer at risk.
Hopefully it was helpful to people over the years. If you have any questions feel free to reach out.
Larks’s Shortform
Thanks!
Alignment research: 30
Could you share some breakdown for what these people work on? Does this include things like the ‘anti-bias’ prompt engineering?
I would expect that to be the case for staff who truly support faculty. But many of them seem to be there to support students directly, rather than via faculty. The number of student mental health coordinators (and so on) you need doesn’t scale with the number of faculty you have. The largest increase in this category is ‘student services’, which seems clearly to be of this nature.
You probably should have said ‘yes’ when asked if it was AI-written.