then you could spread the pesticide (and not other pesticides) in the region
This would affect other insects in addition to the targeted mosquitoes, right? This seems strictly worse than the original gene drive proposition to me.
A survey shows that gay male teenagers are several times more likely to conceive girls than straight male teenagers.
Does “conceive” mean “have sex with” here? Because according to what I think of as the standard definition of that word, you would be saying that gay male teenagers are more likely to produce female offspring (which sounds pretty silly). Did the survey use that word?
Also asked (with some responses from the authors of the paper) here: https://www.lesswrong.com/posts/khFC2a4pLPvGtXAGG/how-to-catch-an-ai-liar-lie-detection-in-black-box-llms-by?commentId=v3J5ZdYwz97Rcz9HJ
Testing with PortAudio’s demo paex_read_write_wire.c [2]
It looks like this uses the blocking I/O interface; I guess that adds its own buffering on top of everything else. For minimal latency you want the callback interface. Try adapting test/patest_wire.c or test/pa_minlat.c.
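To get a feel for why extra buffering matters, here's a rough back-of-the-envelope sketch (my own illustration, not anything from PortAudio itself; actual latency depends on the host API, driver, and hardware):

```python
# Rough latency-floor arithmetic for audio buffering (illustrative only).

def buffer_latency_ms(frames_per_buffer: int, sample_rate: int) -> float:
    """Time in milliseconds that one buffer of audio represents."""
    return 1000.0 * frames_per_buffer / sample_rate

# A common default of 1024 frames at 44.1 kHz is roughly 23 ms per buffer.
# If the blocking read/write path queues even two extra buffers on top of
# what the callback path needs, that alone adds ~46 ms of round-trip latency.
per_buffer = buffer_latency_ms(1024, 44100)
print(f"{per_buffer:.1f} ms per buffer, ~{2 * per_buffer:.1f} ms for two extra buffers")
```

So even before touching the hardware's own latency, shrinking or eliminating intermediate buffers (as the callback interface lets you do) is where the wins are.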
Humans have lived during one of Earth’s colder periods, but historically it’s been a lot hotter. Our bodies are well adapted for heat (so long as we can cool off by sweating)
This doesn’t seem very reassuring? For example, https://climate.nasa.gov/explore/ask-nasa-climate/3151/too-hot-to-handle-how-climate-change-may-make-some-places-too-hot-to-live/
Since 2005, wet-bulb temperature values above 95 degrees Fahrenheit [35 C] have occurred for short periods of time on nine separate occasions in a few subtropical places like Pakistan and the Persian Gulf. They also appear to be becoming more frequent.
If it’s been hotter historically, such that dinosaurs would have been totally fine with these higher temperatures, that doesn’t exactly help humans...
Let me just quote Wikipedia: “A seahorse [...] is any of 46 species of small marine fish in the genus Hippocampus.” Because I spent a few confused minutes trying to figure out how males could face more intense competition in a brain part.
He says non-programmers; I guess you misread?
Theoretically capitalism should be fixing these examples automatically
Huh? Why?
If you want to get a job working on machine learning research, the claim here is that the best way to do that is to replicate a bunch of papers. Daniel Ziegler (yes, a Stanford ML PhD dropout, and yes that was likely doing a lot of work here) spent 6 weeks replicating papers and then got a research engineer job at OpenAI.
Wait, a research job at OpenAI? That’s worse. You do know why that’s worse, right?
I don’t know why, and I’m confused about what this sentence is saying. Worse than what?
I don’t think anyone is proposing to offer this deal to Putin; it’s not like the rank and file soldiers are able to make the “invade your neighbor” decision in a bid to get EU citizenship.
Low confidence generally means questionable or implausible information was used, the information is too fragmented or poorly corroborated to make solid analytic inferences, or significant concerns or problems with sources existed.
I haven’t voted at all, but perhaps the downvotes are because it seems like a non sequitur? That is, I don’t understand why Richard_Kennaway is declaring his preferences about this.
I don’t understand the point of all the swearing; it’s just kind of annoying to read.
I’ve read (though I don’t have any first-hand knowledge of it) that in sign language dialogues both signers can sign to each other at the same time (full-duplex), as opposed to each speaker having to wait for the other to stop (half-duplex). Might be another thing to file under “neat features”.
They also talk about the protestors entering government buildings, but never about any people working in those buildings being afraid or hurt, so according to Zvi’s rules this would imply that the buildings were empty or something.
I don’t know about the other stuff, but https://www.vox.com/world/2023/1/9/23546507/brazil-bolsonaro-lula-capital-invasion-january-8 says
Congress was in recess at the time, leaving the building mostly empty.
Huh. I literally have no idea what feeling this is referring to.
Also, any reason you swapped the friend for a stranger? That changes the situation somewhat – in degree at least, but maybe in kind too.
Yes, the other examples seemed to be about caring about people you are close to more than strangers, but I wanted to focus on the ethical reasoning vs internal motivation part.
examples of when it is right to be motivated by careful principled ethical reasoning or rule-worship
Thanks, that’s helpful.
Okay, I think my main confusion is that all the examples have both the motivation-by-ethical-reasoning and lack-of-personal-caring/empathy on the moral disharmony/ugliness side. I’ll try to modify the examples a bit to tease them apart:
Visiting a stranger in the hospital in order to increase the sum of global utility is morally ugly
Visiting a stranger in the hospital because you’ve successfully internalized compassion toward them via loving kindness meditation (or something like that) is morally good(?)
That is, the important part is the internalized motivation vs reasoning out what to do from ethical principles.
(although I notice my intuition has a hard time believing the premise in the 2nd case)
imagine visiting a sick friend at the hospital. If our motivation for visiting our sick friend is that we think doing so will maximize the general good (or best obeys the rules most conducive to the general good, or best respects our duties), then we are morally ugly in some way.
If our motivation is just to make our friend feel better is that okay? Because it seems like that is perfectly compatible with consequentialism, but doesn’t give the “I don’t really care about you” message to our friend like the other motivations.
Or does the fact that the main problem I see with the “morally ugly” motivations is that they would make the friend feel bad itself suggest that I’m still too stuck in the consequentialist mindset and completely missing the point?
Isn’t the assumption that once we successfully align AGI, it can do the work on immortality? So “we” don’t need to know how beyond that.