I appreciate the discussion, but I’m disappointed by the lack of rigor in proposals, and somewhat expect failure for the entire endeavor of quantifying empathy (which is the underlying drive for discussing consciousness in these contexts, as far as I’m concerned).
Of course, we do not measure computers by mass, but by speed, processor count, and information integration. Still, if you simply lack computing capacity, your neural network is small and its information processing is limited.
It’s worth going one step further here—how DO we measure computers, and how might that apply to consciousness? Computer benchmarking is a pretty complex topic: the trivial objective measures (FLOPS, IOPS, data throughput, etc.) are well known not to tell the important details, and workload-specific benchmarks are required to really evaluate a computing system. Transistor count is a marketing datum, not a measure of value for any given purpose.
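To make the benchmarking point concrete: a minimal sketch (my own illustration, not anyone's real benchmark) showing that two routines performing the identical number of additions can have very different wall-clock cost depending on access pattern. The function names and the stride value are hypothetical choices for demonstration; in CPython the gap reflects indexing overhead as much as cache behavior, but the underlying point stands: identical operation counts need not mean identical performance, which is why no single peak metric tells the whole story.

```python
import time

N = 1_000_000
data = list(range(N))

def sequential_sum(xs):
    # Walk the list in order: the "friendly" access pattern.
    total = 0
    for x in xs:
        total += x
    return total

def strided_sum(xs, stride=4097):
    # Same number of additions, but visiting elements in a scattered
    # order (stride is coprime to len(xs), so every element is hit once).
    total = 0
    n = len(xs)
    i = 0
    for _ in range(n):
        total += xs[i]
        i = (i + stride) % n
    return total

t0 = time.perf_counter(); s1 = sequential_sum(data); t1 = time.perf_counter()
s2 = strided_sum(data);                              t2 = time.perf_counter()
assert s1 == s2  # identical arithmetic work...
print(f"sequential: {t1 - t0:.3f}s, strided: {t2 - t1:.3f}s")  # ...different cost
```

A FLOPS figure would rate both loops identically; only a benchmark resembling the actual workload reveals the difference.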
Until we get closer to actual measurements of cognition and emotion, we’re unlikely to get any agreement on relative importance of different entities’ experiences.
Agree with this criticism for the difference between humans and pigs, but there are too many orders of magnitude of difference between shrimp and humans to consider detailed measures of computing power very necessary.
Quantifying empathy is intrinsically hard, because everything begins by postulating (not observing) consciousness in a group of beings, and that is well grounded only for humans. So, in the end, even if you totally succeed in developing a theory of human sentience, for other beings you are extrapolating. Anything beyond solipsism is a leap of faith (unless you find St. Anselm's ontological proof credible).