Cryonics may be so expensive and so unlikely to succeed that signing up might be bad utilitarianism.
Having somebody be a Big Damn Hero may actually be a good thing.
Bayes' Theorem itself is INCREDIBLY poorly explained on this site.
While FOOM and the AI-Box problem (leading to an AI acting as a social and potentially economic agent) are possible and make Friendliness or Fettering important, most singularitarians VASTLY overestimate the speed and ease with which even an incredibly powerful AI could generate the nigh-godlike nanoconstructor swarms (I see barely plausible ideas about biological FOOMs from time to time), and in particular they overestimate the difficulty technicians would face in resecuring an unboxed but still communication-restricted AI. That doesn’t mean I think this stuff is impossible, or even that an AI can’t gain a lot of power, comms, and manipulators in a short time, but I think that LWers (who often come from software or cogsci backgrounds, compared to my Mechanical Engineering) have a tendency to stop considering hardware-related issues past a certain point.
Many singularitarians have a bias toward expecting a singularity in their own lifetime or shortly after it. (I assign a single-digit percentage to a singularity before 2100 and something like 25-40% within the next 500 years.)
Old Culture gets way too little credit, but most of the people who realize this or appear to realize this are reactionaries who either can’t imagine different, much better Old Cultures or are neither utilitarian nor consensual with respect to participation in said cultures.
I’m not sure what you mean by this.