notsonewuser, yes, “a (very) lossy compression”, that’s a good way of putting it—not just burger-eating Jane’s lossy representation of the first-person perspective of a cow, but also her lossy representation of her pensioner namesake with atherosclerosis forty years hence. Insofar as Jane is ideally rational, she will take pains to offset such lossiness before acting.
Ants? Yes, you could indeed choose not to have your brain reconfigured so as faithfully to access their subjective panic and distress. Likewise, a touchy-feely super-empathiser can choose not to have her brain reconfigured so she better understands the formal, structural features of the world—or what it means to be a good Bayesian rationalist. But insofar as you aspire to be an ideal rational agent, you must aspire to maximum representational fidelity to first-person and third-person facts alike. This is a constraint on idealised rationality, not a plea for us to be more moral—although yes, the ethical implications may turn out to be profound.
The Hedonistic Imperative? Well, I wrote HI in 1995. The Abolitionist Project (2007) (http://www.abolitionist.com) is shorter, more up-to-date, and (I hope) more readable. Of course, you don’t need to buy into my quirky ideas on ideal rationality or ethics to believe that we should use biotech and infotech to phase out the biology of suffering throughout the living world.
On a different note, I don’t know who’ll be around in London next month. But on May 11 there is a launch event for the Springer volume, “Singularity Hypotheses: A Scientific and Philosophical Assessment”:
http://www.meetup.com/London-Futurists/events/110562132/?a=co1.1_grp&rv=co1.1
I’ll be making the case for imminent biologically-based superintelligence. I trust there will be speakers to present the Kurzweilian and MIRI/LessWrong perspectives. I fear a consensus may prove elusive. But Springer have commissioned a second volume—perhaps to tie up any loose ends.