Former tech entrepreneur (co-creator of the music software Sibelius). Among other things I now play the stock market, write software to predict it, and occasionally advise tech startups. I have degrees in philosophy.
bfinn
Indeed, years ago I tried for several months to improve my poor sleep, varying and recording numerous variables from alcohol intake to the timing of dinner and bed. Though my statistical analysis was rather simple, I was unable to identify anything that made a statistically significant difference—consistent with your findings.
(In the end I did improve my sleep a lot—not due to my own experimentation but thanks to an excellent online course called Sleepio, clinically proven and prescribed by the NHS. Which identified something that hadn’t occurred to me—I was allowing too much time for sleep, ie setting my alarm too late, resulting in shallow sleep with too many night-time awakenings.)
Good post & efforts!
I glanced at the Compendium, which obvs needs some updating. Also, the opening seems odd:
Ideologically motivated companies are racing to build smarter-than-human AIs. Big Tech already backs them, and now nation states are getting roped in too.
Are they really/primarily ideologically motivated (rather than financially), and with what ideology?
‘Big Tech already backs them’ - but these AI developers are big tech companies (or fast-growing ones), surely?
Great post & work.
I’m hesitant to criticise your campaign statement as no doubt it has been carefully crafted over months—but it strikes me as a bit wordy and lacking punch. Though maybe this is the point—to appear measured and slightly ponderous/academic rather than risking looking like activism & sloganizing?
Also could be a bit clearer in other ways—e.g. on a cursory read it’s not clear that specialised and superintelligent AIs are being contrasted, or what the difference is. (Aren’t these specialised AIs also ‘superintelligent’, i.e. better than humans at what they do, just not at everything?)
To clarify, I think your criticism of utilitarianism/consequentialism is of a naive form of it that only looks at first-order effects. Not ‘proper’ utilitarianism. But yes no doubt many are naive like this, and it’s v hard to evaluate second- and higher-order effects (such as exploitation and coordination).
Also, this kind of naivety is particularly common on the left.
They’re not really calling for mob action (in almost all cases). It’s a rhetorical expression of hatred. Cf saying ‘eat the rich’ is not a serious advocacy of cannibalism.
That’s not to say though that it’s ok to call for mob action, eg on social media, as it slightly increases the chance that some extremists might take it literally and act on it.
I’ve only just realised that a key part of the AI alignment problem is essentially Wittgenstein’s rule-following argument. (Maybe obvious, but I’ve never seen this stated before.)
His rule-following argument claims that it’s impossible to define a term unambiguously, whether by examples or rules or using other terms; indeed any definition is so ambiguous as to be consistent with any future application of the term. So you can’t even teach someone ‘+’ in such a way that when following your definition/rule/algorithm they will give your desired answer to a sum they haven’t seen before, eg 1000 + 1000 = 2000. They could just as ‘correctly’ give 3000 or −45.7 or pi. (I won’t explain why here.)
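Kripke’s ‘quus’ reading of this can be sketched in code: a deviant rule that matches ordinary addition on every example a learner has ever seen, yet gives a different answer on novel inputs (the cutoff and the deviant answer here are arbitrary—which is the point):

```python
def plus(a, b):
    return a + b

def quus(a, b):
    # Agrees with ordinary addition on all 'seen' examples...
    if a < 1000 and b < 1000:
        return a + b
    # ...but any answer at all is consistent with those finite examples
    return 3000

# Every observed case matches:
assert all(plus(a, b) == quus(a, b) for a in range(100) for b in range(100))
# Yet the two 'rules' disagree on a novel sum:
print(plus(1000, 1000), quus(1000, 1000))  # 2000 3000
```

No finite set of examples settles which of the two rules the learner was actually taught.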
Cf no amount of training an AI to be ‘good’ etc will ensure that it remains so in novel situations.
I’m not convinced Wittgenstein was right (and argued against the rule-following argument for my philosophy masters FWIW); maybe a real philosopher more familiar with the topic could apply it usefully to AI alignment.
Having read a few studies myself I got a CO2 monitor (from AirThings, also monitors VOCs, temperature, humidity etc). From which I can confirm that CO2 builds to quite high levels in an unventilated room within an hour or two. But even leaving a window only slightly ajar helps a lot.
Apparently fan heating and air conditioning systems may or may not mix in air from outside—many just recirculate the same air—so switching these on may or may not help with ventilation.
Some studies suggest high CO2 also harms sleep—though again the research is inadequate. If so, sleeping with the window slightly open should help; if cold/noise makes this impractical, sleep with the bedroom door ajar (if there aren’t other people around) and a window open in another room. Even if no window is open at all, having your bedroom door ajar seems to help by letting the CO2 out. I’ve done this for the last year, though I can’t be sure whether it’s helped my sleep.
A confounding factor is that it’s best to sleep in a cool room, which opening a window also achieves. Either way this is an argument for opening a window while you sleep.
Would be good to hear more of this
Many excellent examples and analysis. Obviously v long but no doubt others will find it useful source material.
Cancer is an interesting example I haven’t seen before, with suitably alarming connotations.
I don’t know; I had assumed so but maybe not
Re ‘AI is being rapidly adopted, and people are already believing the AIs’ - two recent cases from the UK of national importance:
In a landmark employment case (re trans rights), the judge’s ruling turned out to have been partly written by AI which had made up case law:
And in a controversy in which police banned Israeli fans from attending a soccer match, their decision cited evidence which had also been made up by AI (eg an entirely fictional previous UK visit by the Israeli team). The local police chief has just resigned over it:
Also with eg dog territory, the boundary markers aren’t arbitrary—presumably the reason dogs piss on trees & lampposts, which are not physical thresholds, is (a) they provide some protection for the scent against being removed eg by rain; (b) they are (hence) standard locations for rival dogs to check for scent, rather than having to sniff vast areas of ground; ie they are (evolved) Schelling points for potential boundary markers.
(Walls are different as they are both potential boundary markers and physical thresholds.)
According to the Wikipedia article above, the Frisch–Peierls memorandum included those two scientists’ suggestion that the best way to deal with their concern that the Germans would develop an atomic bomb was to build one first. But what they thought about the moral issues I don’t know.
When scientists first realised an atomic bomb might be feasible (in the UK in 1939), and how important it would be, the UK defence science adviser reckoned there was only a 1 in 100,000 chance of successfully making one. Nonetheless the government thought that high enough to instigate secret experiments into it.
(Obliquely relevant to AI risk.)
https://en.wikipedia.org/wiki/Frisch–Peierls_memorandum
Reminds me of when I was 8 and our history teacher told us about some king of England being deposed by the common people. We were shocked and confused as to how this could happen—he was the king! If he commanded them to stop, they’d have to obey! How could they not do that?? (Our teacher found this hilarious.)
Great post. Three comments:
If it were the case that events in the future mattered less than events now (as is the case with money, because money sooner can earn interest), one could discount far future events almost completely and thereby make the long-term effects of one’s actions more tractable. However I understand time discounting doesn’t apply to ethics (though maybe this is disputed by some).
That said, I suspect discounting the future instead on the grounds of uncertainty (the further out you go, the harder it is to predict anything)—using, say, a discount rate per year (as with money) to model this—may be a useful heuristic. No doubt this is a topic discussed in the field.
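As a sketch of the heuristic (the 3%/year rate is just an illustrative assumption):

```python
def discounted_value(value, years, rate=0.03):
    """Present value of a benefit `years` in the future, discounted
    at `rate` per year. Here the discount models growing uncertainty
    about the far future, not a claim that future welfare matters less.
    """
    return value / (1 + rate) ** years

# At 3%/year, a benefit 100 years out shrinks to about 5% of face value:
print(discounted_value(1.0, 100))  # about 0.052
```

Under this heuristic, effects more than a century or two out contribute almost nothing to the calculation, which is what makes the long-term tractable.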
Secondly, no doubt there is much to be said about what the natural social and temporal boundaries of people’s moral and other influence & plans are, eg family, friends, work, retirement, death (and contents of their will); and how these can change—eg if you gain or exercise power/influence, say by getting an important job, having children, or doing things with wider influence (eg donating to charity), which can be for better or worse.
Thirdly, a minor observation: chess has an equivalent to the Go thing about a local sequence of moves ending in a stop sign, viz. an exchange of pieces—eg capturing a pawn in exchange for a pawn, or a much longer & more complicated sequence involving multiple pieces, but either way ending in a ‘quiet position’ where not very much is happening. Before AlphaZero, chess programs considering an exchange would look at all plausible ways it might play out, stopping each move sequence only when a quiet position was reached. And in the absence of an exchange or other instability, they would stop a sequence after a ‘horizon’ of say 10 moves (and evaluate the resulting situation on the basis of the board position, eg what pieces there are and their mobility).
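The exchange part of this can be sketched as the classic static-exchange evaluation (SEE) idea from such engines; the function name and the simplified piece values here are illustrative:

```python
def static_exchange_eval(target_value, attackers):
    """Net material, for the side moving first, of a capture sequence
    on one square. `target_value` is the piece currently on the square;
    `attackers` lists the capturing pieces' values in the order they
    would capture, alternating sides. Either side may stop capturing
    early, which is what ends the line in a 'quiet position'."""
    n = len(attackers)
    gain = [0] * n
    gain[0] = target_value
    for i in range(1, n):
        # Attacker i captures the piece that made capture i-1
        gain[i] = attackers[i - 1] - gain[i - 1]
    # Minimax backwards: each side stops if continuing loses material
    for i in range(n - 1, 0, -1):
        gain[i - 1] = -max(-gain[i - 1], gain[i])
    return gain[0]

# Pawn takes pawn, recaptured by a pawn: the exchange is equal
print(static_exchange_eval(1, [1, 1]))  # 0
# Queen (9) grabs a pawn defended by a pawn and is recaptured: net -8
print(static_exchange_eval(1, [9, 1]))  # -8
```

The backward pass is the ‘stop sign’: each side abandons the sequence as soon as continuing would lose material, leaving a quiet position to evaluate statically.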
FWIW ‘directionally correct’ includes ‘right but for the wrong reasons’, i.e. right only by fluke, hence irrelevant & ignorable. Which isn’t what you want to include. Though it’s maybe not often used in that situation.
In London where I live, philosophy meetup groups are much better than this. A broader mix of people—few have philosophy degrees, few know any formal philosophy, some have no university degree, very many recent immigrants, though admittedly almost everyone is middle class. Almost always good conversations, with decent reasoning, including people taking contrary and controversial stances, but respectfully discussed and never any heatedness or performative wokeness. Discussions in groups of 4-6 people work best. (The main bad dynamic is if you get someone who talks too much and dominates a conversation.)
How about ‘out-of-control superintelligence’? (Either because it’s uncontrollable or at least not controlled.) Which carries the appropriately alarming connotations that it’s doing its own thing and that we can’t stop it (or aren’t doing so anyway).
Re criteria for AI being transformative in some sense, it might be useful to look at how historians determine sharp dates for fuzzy-edged eras, which at least sometimes seems to involve identifying key events across multiple domains. E.g. common criteria for the end of the Middle Ages include:
1453: Fall of Constantinople
1455: Gutenberg Bible (first major European printed book)
1492: Columbus’s first voyage
1517: Luther’s 95 theses
Viz. in this case, key political, technological and cultural criteria. (For which the years could be averaged, say, to c.1480.)