The zero-point of utilitarianism
- DistractedDodo
- New Trial Member
- Posts: 8
- Joined: November 22nd, 2020, 9:18 am
The zero-point of utilitarianism
Here's a question that's been bugging me for a while: How does one determine the zero-point of determinism -- the point at which a life is exactly worthless? This ought to be important for questions of animal welfare, for example: if the quality of life of animals in industrial farming is just below the zero-point, and improving their living conditions isn't practicable, the practice should stop because the animals contribute negative value -- whereas if their quality of life is just above this point, the practice can be allowed to continue, even if their conditions probably still should be improved if possible. Is there a term for this "zero point", and has the issue been discussed? It must have been, but I haven't been able to find the name of the problem. Cheers.
- Jack D Ripper
- Posts: 610
- Joined: September 30th, 2020, 10:30 pm
- Location: Burpelson Air Force Base
- Contact:
Re: The zero-point of utilitarianism
In order to determine these things in any precise way, one must be able to determine the precise value of pleasures and pains (or whatever is relevant to the particular version of utilitarianism under consideration). That is not something that anyone is likely to be able to do.
How much pleasure do you get from a hot cup of coffee? How does that compare with the pain of having a headache? These are things that are not easily measured and put on a scale to compare against each other.
- HJCarden
- Posts: 147
- Joined: November 18th, 2020, 12:22 am
Re: The zero-point of utilitarianism
When someone who is not a utilitarian looks at this zero point of worth, they would often say that a human life actually becomes worthless only at a point far beyond zero utility. While some (myself included) would say that all human life has value regardless, many would say that, to put it in utilitarian terms, a human life becomes worthless only at a point of great negative utility. This shows me that, on matters of deep importance to us, we tend on average to reject the idea that a human life is worthless past this zero point.
I believe this can be explained in one of two ways: it is either a product of broad human egoism, or it points to an innate value of human life that we have some faculty for perceiving (there is of course much more to this, but that's not for this post).
To bring this back to the original post, I believe a utilitarian would have a very hard time pitching this idea to the general public.
- DistractedDodo
- New Trial Member
- Posts: 8
- Joined: November 22nd, 2020, 9:18 am
Re: The zero-point of utilitarianism
Thank you for your replies, and apologies for my late return to this; I had to think it over a couple of times before replying.
Jack D Ripper wrote: ↑November 22nd, 2020, 11:52 pm The zero point, to use your expression, would be when the good and the bad are equal in an individual's life.
Yes.
Jack D Ripper wrote: ↑November 22nd, 2020, 11:52 pm That, in practice, is going to be difficult to specify in a precise way (not to mention that what that would be would depend on the exact version of utilitarianism being considered). In order to determine these things in any precise way, one must be able to determine the precise value of pleasures and pains (or whatever is relevant to the particular version of utilitarianism under consideration). That is not something that anyone is likely to be able to do.
That's the thing. I would argue, though, that a rough, subjective estimate might often suffice -- although this doesn't help us much if we're talking about non-human animals, which are less able to self-report their quality of life.
HJCarden wrote: ↑November 24th, 2020, 12:08 am When someone who is not a utilitarian looks at this 0 point of worth, they would often say that it is at a point far beyond this point of 0 utility that a human life actually becomes worthless. While some (I included) might say that all human life has value regardless, many would say that (to put it in utilitarian terms) a human life becomes worthless only at a point of great negative utility. This shows me that on average, when it comes to matters of deep importance to us, we tend to disagree with the idea of human life being worthless past this 0 point.
Well, that depends on where you put that zero-point. I would agree with you, from a utilitarian standpoint, that a life of equal current pleasure and pain -- whatever that means -- is more than worthless, because of its potential to contain more pleasure than pain, and because of the potential for its continuation to contribute utility to other individuals.
Agreed!
- Sculptor1
- Posts: 7148
- Joined: May 16th, 2019, 5:35 am
Re: The zero-point of utilitarianism
DistractedDodo wrote: ↑November 22nd, 2020, 9:26 am Here's a question that's been bugging me for a while: How does one determine the zero-point of determinism -- the point at which a life is exactly worthless? This ought to be important for questions of animal welfare, for example: if the quality of life of animals in industrial farming is just below the zero-point, and improving their living conditions isn't practicable, the practice should stop because the animals contribute negative value -- whereas if their quality of life is just above this point, the practice can be allowed to continue, even if their conditions probably still should be improved if possible. Is there a term for this "zero point", and has the issue been discussed? It must have been, but I haven't been able to find the name of the problem. Cheers.
Determinism has nothing whatsoever to say about the value of human life, or of any life for that matter. Placing a value on a life is a matter of morality. Determinism and "free will" are amoral ideas concerned with the ontological and epistemological nature of things.
As for utilitarianism: the value of an animal is primarily its food value.
The moral adjunct to this is the welfare of the animal whilst alive. For that, you have to ask what harm the poor treatment of animals might do to the humans who use them, or what good it would do people to be able to rest assured that the animals are treated well.
- Jack D Ripper
- Posts: 610
- Joined: September 30th, 2020, 10:30 pm
- Location: Burpelson Air Force Base
- Contact:
Re: The zero-point of utilitarianism
Sculptor1 wrote: ↑December 5th, 2020, 6:41 am
DistractedDodo wrote: ↑November 22nd, 2020, 9:26 am Here's a question that's been bugging me for a while: How does one determine the zero-point of determinism -- the point at which a life is exactly worthless? [...]
Determinism has nothing whatever to say about the value of human life or of any life for that matter. [...] As for utilitarianism. The question of a value of an animal is its food value, primarily. [...]
Because of the title of this thread, I think the word "determinism" in the first sentence is a typographical error, and that it should read:
Here's a question that's been bugging me for a while: How does one determine the zero-point of utilitarianism -- the point at which a life is exactly worthless?
That is how I took it and I responded to it as if that is what had been written.
As written, it does not make sense, as you observe.
- Jack D Ripper
- Posts: 610
- Joined: September 30th, 2020, 10:30 pm
- Location: Burpelson Air Force Base
- Contact:
Re: The zero-point of utilitarianism
DistractedDodo wrote: ↑December 5th, 2020, 5:30 am Thank you for your replies and apologies for my late return to this. I had to think this over a couple of times before replying. [...] Well, that depends on where you put that zero-point. I would agree with you, from a utilitarian standpoint, that a life of equal current pleasure and pain -- whatever that means -- is more than worthless, because of its potential to contain more pleasure than pain, and because of the potential for its continuation to contribute utility to other individuals.
...
You seem to be forgetting that, as long as one is alive, one also has the potential to experience more pain than pleasure in the future, so that living could be much worse than dying. There is also the potential to cause more harm than good to others if one continues to live. Consequently, a life of currently equal pleasure and pain could be worse than worthless.
Your assumption that such a life is more than worthless is completely unwarranted.
What a utilitarian would do is try to predict whether the future would hold more pleasure or pain (or whatever it is that the utilitarian is claiming to value), as well as the future effects on others, and base decisions on that.
One of the issues with utilitarianism, or any form of consequentialism, is that it relies on predicting the future -- predicting the outcomes of actions. That can never be done with certainty, so errors will always occur, even among people who are always reasonable and always trying their best.
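The procedure just described -- predict the possible futures, weigh each by its likelihood, and compare net pleasure, pain, and effects on others -- is in effect an expected-value calculation. Here is a minimal sketch; every number, and the simple additive model itself, is an invented illustration, since (as the posts above stress) no one can actually measure these quantities:

```python
# Toy expected-utility sketch of the procedure described above.
# Every number below is hypothetical: utilitarianism offers no agreed
# way to actually measure pleasure, pain, or effects on others.

def expected_net_utility(outcomes):
    """Sum probability-weighted net utility over possible futures.

    Each outcome is (probability, pleasure, pain, effect_on_others);
    the net utility of a future is pleasure - pain + effect_on_others.
    """
    return sum(p * (pleasure - pain + effect_on_others)
               for p, pleasure, pain, effect_on_others in outcomes)

# Two imagined futures for one individual:
futures = [
    (0.6, 10.0, 4.0, 2.0),   # likely: modestly good life, small benefit to others
    (0.4, 3.0, 9.0, -1.0),   # less likely: painful life, slight burden on others
]

net = expected_net_utility(futures)   # approximately 2.0: above the zero-point
```

On these invented numbers the expectation comes out around +2, i.e. above the zero-point -- but the probabilities and utility values plugged in are exactly the things the posts above argue no one can determine precisely.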
- Sculptor1
- Posts: 7148
- Joined: May 16th, 2019, 5:35 am
Re: The zero-point of utilitarianism
Jack D Ripper wrote: ↑December 5th, 2020, 3:13 pm Because of the title of this thread, I think that it is a typographical error in the first sentence, and it should be: "Here's a question that's been bugging me for a while: How does one determine the zero-point of utilitarianism -- the point at which a life is exactly worthless?" That is how I took it and I responded to it as if that is what had been written. As written, it does not make sense, as you observe.
Yes, I thought that as I hit send!
- HJCarden
- Posts: 147
- Joined: November 18th, 2020, 12:22 am
Re: The zero-point of utilitarianism
DistractedDodo wrote: ↑December 5th, 2020, 5:30 am Well, that depends on where you put that zero-point. I would agree with you, from a utilitarian standpoint, that a life of equal current pleasure and pain -- whatever that means -- is more than worthless, because of its potential to contain more pleasure than pain, and because of the potential for its continuation to contribute utility to other individuals.
Here's another way to illustrate my point, with a slight twist that could bring some interesting responses:
Say that we had some sort of technology that, at birth, could make an entirely accurate prediction of the net pain/pleasure balance that would result by the end of someone's life, AND of the net pain and pleasure they would cause to others. Babies born clocking in at or below 0 on this scale would, to a utilitarian, be useless, correct? Their life would be a wash, with nothing positive gained.
However, even if this were real, don't you feel that the opinion of the "man on the street" would be that this still doesn't make these people's lives worthless? If they were immediately terminated, the world, according to this machine, would have lost nothing or gained something.
But there is, in my opinion, some hard-to-grasp intuition that the utilitarian cannot account for, and that would give us pause before terminating this net-zero human.
So this is the beginning of an argument against utilitarianism for humans, as regards the idea that the net pleasure/pain of a life is what's important.
However, back to this wonderful technology that exists in this hypothetical world.
Would it be ethical to keep someone alive who would personally experience extreme discomfort and pain, but who was predicted to give great benefit to others? Or the inverse: someone predicted to live the best life imaginable, but who brings massive suffering to those around them? Can the utilitarian make a decision in these cases that would strike people as just?
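The hypothetical machine, and the two hard cases above, can be made concrete with a toy sketch. The numbers, the function name, and the rule of simply summing "experienced" and "caused" utility are invented assumptions for illustration, not anything utilitarians have agreed on:

```python
# Toy model of the hypothetical prediction machine described above.
# The names, numbers, and the rule of simply adding "experienced" and
# "caused" utility are invented assumptions for illustration only.

def machine_verdict(experienced, caused):
    """Total-sum view: classify a life by experienced + caused net utility."""
    total = experienced + caused
    if total > 0:
        return "positive contribution"
    if total == 0:
        return "a wash (the net-zero case)"
    return "negative contribution"

# The two hard cases from the post:
suffering_benefactor = machine_verdict(experienced=-50, caused=80)   # painful life, great benefit to others
happy_menace = machine_verdict(experienced=80, caused=-120)          # wonderful life, massive suffering caused

# suffering_benefactor == "positive contribution"
# happy_menace == "negative contribution"
```

Note how the summation makes the suffering benefactor come out "positive" and the happy menace "negative": the aggregation rule itself decides the hard cases, which is arguably exactly what the intuition in the post resists.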
- DistractedDodo
- New Trial Member
- Posts: 8
- Joined: November 22nd, 2020, 9:18 am
Re: The zero-point of utilitarianism
Jack D Ripper wrote: ↑December 5th, 2020, 3:13 pm Because of the title of this thread, I think that it is a typographical error in the first sentence, and it should be: "Here's a question that's been bugging me for a while: How does one determine the zero-point of utilitarianism -- the point at which a life is exactly worthless?"
Quite right! Apologies for the error.
- DistractedDodo
- New Trial Member
- Posts: 8
- Joined: November 22nd, 2020, 9:18 am
Re: The zero-point of utilitarianism
Jack D Ripper wrote: ↑December 5th, 2020, 5:20 pm You seem to be forgetting the fact that as long as one is alive, one also has the potential to have more pain than pleasure in the future, so that living could be much worse than dying. [...] Your assumption that such a life is more than worthless is completely unwarranted.
You're absolutely right; that was a bit of a leap on my part! I assumed that most humans both experience, and contribute to others, more pleasure than pain over a life, and took that as a starting point -- but of course that premise should have been made explicit, if not argued for.
Jack D Ripper wrote: ↑December 5th, 2020, 5:20 pm What a utilitarian would do is try to predict whether the future would hold more pleasure or pain (or whatever it is that the utilitarian is claiming to value), as well as the future effects on others, and base decisions on that.
Also a very good point!
- DistractedDodo
- New Trial Member
- Posts: 8
- Joined: November 22nd, 2020, 9:18 am
Re: The zero-point of utilitarianism
HJCarden wrote: ↑December 9th, 2020, 10:37 am Here's another way to illustrate my point, with a slight twist that could bring some interesting responses: Say that we had some sort of technology that at birth could make an entirely accurate prediction of the net pain/pleasure balance that would result at the end of someone's life, AND the net pain and pleasure they would cause in their life. [...]
This is a point that's always fascinated me; are you arguing that utilitarianism is unfit for humans because it yields counter-intuitive conclusions? To me that seems a bit like confusing the normative and descriptive perspectives. I absolutely agree utilitarianism doesn't accurately model how humans make decisions -- but I don't think that should have any bearing on the validity of our conclusions from a normative perspective. If we start with a set of axioms (e.g. pleasure is good, pain is bad) and arrive, through logical reasoning, at some counter-intuitive conclusion, does that justify a change to the axioms themselves?
- HJCarden
- Posts: 147
- Joined: November 18th, 2020, 12:22 am
Re: The zero-point of utilitarianism
DistractedDodo wrote: ↑February 2nd, 2021, 2:15 pm This is a point that's always fascinated me; are you arguing that utilitarianism is unfit for humans because it yields counter-intuitive conclusions? [...] If we start with a set of axioms (e.g. pleasure is good, pain is bad) and arrive, through logical reasoning, at some counter-intuitive conclusion, does that justify a change to the axioms themselves?
I do believe that utilitarianism is unfit for humans, precisely because I do not think the best moral system is one that disregards the vast descriptive aspect of morality. A system with only normative value does not seem to capture everything we truly associate with morality. I think this is why many philosophers have tried to come up with an ultimate maxim for morality: it must resonate with all of us in a way that a set of rules and mathematical decisions won't. So what I'd say is that morality must be partly intuitive, because part of morality is the feeling that what we are doing is right, and that feeling can be at least part of morality's basis.
- Eckhart Aurelius Hughes
- The admin formerly known as Scott
- Posts: 5786
- Joined: January 20th, 2007, 6:24 pm
- Favorite Philosopher: Eckhart Aurelius Hughes
- Contact:
Re: The zero-point of utilitarianism
By the zero-point of utilitarianism (great phrase, by the way), it seems you are asking about the point at which, from a utilitarian perspective, euthanasia would be preferable (or "morally right", as a moralizing utilitarian might call it), particularly if we discount the indirect effects on others of the life in question.
The question raises a common issue with utilitarianism. By seeking to quantify everything onto a one-dimensional scale of utility, the utilitarian runs into a problem with the conversion rate between the generally assumed badness of death and the generally assumed badness of pain. For instance, utilitarian math works when one says that two deaths are doubly worse than one, or that half the pain is preferable to twice the pain; then it's simple arithmetic. But converting the negative utility of pain into death units, or vice versa, presents a problem for the utilitarian.
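The conversion-rate problem can be made vivid with a toy calculation: which option counts as "better" flips depending on an arbitrary pain-to-death exchange rate that utilitarianism itself does not supply. All quantities and rates below are invented for illustration:

```python
# Toy illustration of the pain-to-death conversion problem described above.
# The exchange rates and all quantities are invented; the point is that
# the verdict flips with the arbitrary rate, which utilitarianism itself
# does not supply.

def total_badness(deaths, pain_units, pain_units_per_death):
    """Collapse deaths and pain onto one scale using an exchange rate."""
    return deaths * pain_units_per_death + pain_units

def better_option(rate):
    # Option A: one death, no pain.  Option B: no deaths, 500 units of pain.
    a = total_badness(deaths=1, pain_units=0, pain_units_per_death=rate)
    b = total_badness(deaths=0, pain_units=500, pain_units_per_death=rate)
    return "A" if a < b else "B"

# better_option(rate=100)  -> "A": at 100 pain units per death, the death is 'cheaper'
# better_option(rate=1000) -> "B": at 1000, avoiding the death wins
```

Within either scale alone the arithmetic is trivial; the entire moral verdict is smuggled in by the choice of exchange rate.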
I'm not a utilitarian. In fact, my overall philosophy does not entail moralizing at all; it is spiritual in nature rather than judgmentally moralistic. In other words, it reflects what I will do, not what I 'should' do, whatever that would mean. With that said, and with my philosophy's focus on freedom, subjectivity, and diversity (rather than mere utility) in mind, I think I can add a philosophical thought experiment to this discussion that can be usefully explored not just from anyone's perspective but also from a utilitarian perspective. It is as follows:
To keep the thought experiment simple, imagine one is the last living creature on Earth. There can be plant life, and thus vegetarian food, but no other humans or animals. This keeps things simple in that we need not calculate one's would-be future effect on others, such as the alleged value of livestock as food to humans, or the value of a sick patient to a family who would miss them with great sadness upon the utilitarian murdering of the patient against their will.
Indeed, by imagining ourselves as the last living creature on Earth, we can sidestep the whole messy issue of brutal utilitarian murder altogether. You cannot run over five people with a trolley if there is only one human alive to start with.
Now we only have to deal with utilitarian suicide. The great philosopher Albert Camus once wrote, "There is but one truly serious philosophical problem and that is suicide."
This thought experiment leaves us with two questions:
First, as the last living creature on Earth, what is the zero-point in terms of projected life quality when a utilitarian 'should' commit suicide according to utilitarianism?
Second, as the last living creature on Earth, what is the zero-point at which one would commit suicide?
I cannot answer the first question because as previously mentioned I do not believe in such moralizing and do not consider the word 'should' to be meaningful.
As for the second question, I think I would never commit suicide in such a situation, at least not while in my own sane, rational mind (as opposed to, for instance, being mentally ill with schizophrenia). This is because I believe I have cultivated a certain degree of invincible inner peace, which transcends the one-dimensional analysis of the utilitarian. I have inner peace both when I am eating delicious food and when I am feeling hunger pains, both when I am sleeping in a comfy bed and when I am getting repeatedly punched in the face, both when feeling pleasure and when feeling terrible, long-term, ongoing pain. In other words, my transcendental spirit remains content no matter how much negative utility my body and egoic human mind experience.
In terms of the spiritual, death is not bad and pain is not bad; rather, they are both good, and life is worth living not just despite but in part because of the yin-yang balance between birth and death. Life is worth living even if it is full of pain, disaster, and tragedy. Pain, challenges, and discomfort are a crucial part of what makes life worth living, in my philosophy. In fact, it is possible that a boring, painless, drama-free life would be the one not worth living, to me at least. But luckily such a life, or such a world, is in my opinion totally absurd and could not exist; I believe the yin-yang-like balance in the world between life and death, and between comfort and discomfort, is not a matter of happenstance but of a priori mathematical law. In other words, I believe an imaginary hypothetical heaven of only comfort and pleasure is as absurdly impossible as it would be hellishly boring.
Long story short: since my philosophy has me focused solely on how I play the figurative cards I am dealt in this material world, rather than complaining that I 'should' have been dealt different cards (whatever that would mean), it doesn't matter what cards I get. I am still happy to play, and I am completely content as long as I play my best. That is a life always worth living. Inner peace is invincible.
DistractedDodo wrote: ↑February 2nd, 2021, 2:15 pm If we start with a set of axioms (e.g. pleasure is good, pain is bad) and arrive, through logical reasoning, at some counter-intuitive conclusion, does that justify a change to the axioms themselves?
Yes -- at least it can. As you probably already know, it's called a reductio ad absurdum.
Presumably, if a valid logical argument has a counterintuitive conclusion, one would treat that as evidence against the axioms, rather than as evidence for the conclusion, to the degree that the negation of the axioms (as a set) is less counterintuitive than the assertion of the conclusion. Needless to say, this doesn't require that all the axioms be negated; only one axiom needs to be slightly off from true for the axioms as a set to be negated.
"The mind is a wonderful servant but a terrible master."
I believe spiritual freedom (a.k.a. self-discipline) manifests as bravery, confidence, grace, honesty, love, and inner peace.
- DistractedDodo
- New Trial Member
- Posts: 8
- Joined: November 22nd, 2020, 9:18 am
Re: The zero-point of utilitarianism
HJCarden wrote: ↑February 16th, 2021, 6:12 pm I do believe that utilitarianism is unfit for humans precisely because I do not think the best moral system is one that does not respect the vast descriptive aspect to morality. [...] So what I'd say is that morality must be partly intuitive because part of morality is feeling that what we are doing is right, and that this can be at least partially the basis of morality.
That is very interesting -- thank you for elaborating! While I can appreciate the practical value of an ethical system reflecting intuitive beliefs about morality, aren't you at risk of accepting moral relativism? And isn't it the responsibility of ethics, philosophy, and academia in general to challenge the layman's intuitions and insist on whichever idea is logically superior, regardless of whether that idea is intuitive or convenient? To use an extreme example: if a majority of a population were, say, racist, wouldn't your approach require any viable ethical system to accommodate the belief that one race is superior to another, simply on the grounds that it has to capture the population's beliefs on the subject?