I can see straight away that we’re running into a jargon barrier.
One of us is.
Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary.
Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.
I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.
Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.
I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information.
That’s not deontology, because it’s not object level.
You can take organs from the least healthy of the people needing organs just before he pops his clogs.
Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.
“Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.”
Indeed it is relevant here, but it is also relevant to AGI in a bigger way, because AGI is a philosopher, and the vast bulk of what we want it to do (applied reasoning) is philosophy. AGI will do philosophy properly, eliminating the mistakes. It will do the same for maths and physics where there are also some serious mistakes waiting to be fixed.
“Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.”
The problem with it is the proliferation of bad ideas—no one should have to become an expert in the wide range of misguided issues if all they need is to know how to put moral control into AGI. I have shown how it should be done, and I will tear to pieces any ill-founded objection that is made to it. If an objection comes up that actually works, I will abandon my approach if I can’t refine it to fix the fault.
“That’s not deontology, because it’s not object level.”
Does it matter what it is if it works? Show me where it fails. Get a team together and throw your best objection at me. If my approach breaks, we all win—I have no desire to cling to a disproven idea. If it stands up, you get two more goes. And if it stands up after three goes, I expect you to admit that it may be right and to agree that I might just have something.
“Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.”
Great—you would wait as late as possible and transfer organs before multiple organ failure sets in. The important point is not the timing, but that it would be more moral than taking them from the healthy person.