The zero-point of utilitarianism

User avatar
DistractedDodo
New Trial Member
Posts: 8
Joined: November 22nd, 2020, 9:18 am

The zero-point of utilitarianism

Post by DistractedDodo »

Here's a question that's been bugging me for a while: How does one determine the zero-point of determinism -- the point at which a life is exactly worthless? This ought to be important for questions of animal welfare, for example: if the quality of life of animals in industrial farming is just below the zero-point, and improving their living conditions isn't practicable, the practice should stop because the animals contribute negative value -- whereas if their quality of life is just above this point, the practice can be allowed to continue, even if their conditions probably still should be improved if possible. Is there a term for this "zero point", and has the issue been discussed? It must have been, but I haven't been able to find the name of the problem. Cheers.
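To make the threshold idea concrete, here's a toy sketch in Python. Every number and function name below is invented, purely to illustrate the decision rule around the zero-point; nothing is implied about how one would actually measure welfare:

```python
# Toy illustration of the "zero-point" threshold idea.
# All quantities are hypothetical; the point is only the decision rule.

def net_welfare(pleasure: float, pain: float) -> float:
    """Net lifetime welfare on a single hedonic scale (hypothetical)."""
    return pleasure - pain

def industrial_farming_verdict(pleasure: float, pain: float) -> str:
    """Apply the zero-point rule: below zero, the practice should stop."""
    if net_welfare(pleasure, pain) < 0:
        return "stop the practice (lives contribute negative value)"
    return "may continue (though conditions should still improve)"

print(industrial_farming_verdict(pleasure=3.0, pain=5.0))  # below the zero-point
print(industrial_farming_verdict(pleasure=6.0, pain=5.0))  # above the zero-point
```

Of course, the entire difficulty of my question is hidden inside those made-up `pleasure` and `pain` numbers.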
User avatar
Jack D Ripper
Posts: 610
Joined: September 30th, 2020, 10:30 pm
Location: Burpelson Air Force Base
Contact:

Re: The zero-point of utilitarianism

Post by Jack D Ripper »

The zero point, to use your expression, would be when the good and the bad are equal in an individual's life. That, in practice, is going to be difficult to specify in a precise way (not to mention that what that would be would depend on the exact version of utilitarianism being considered).

In order to determine these things in any precise way, one must be able to determine the precise value of pleasures and pains (or whatever is relevant to the particular version of utilitarianism under consideration). That is not something that anyone is likely to be able to do.

How much pleasure do you get from a hot cup of coffee? How does that compare with the pain of having a headache? These are things that are not easily measured and put on a scale to compare against each other.
"A wise man ... proportions his belief to the evidence." - David Hume
HJCarden
Posts: 147
Joined: November 18th, 2020, 12:22 am

Re: The zero-point of utilitarianism

Post by HJCarden »

Looking at this 0 point from a non-utilitarian view, I believe that there are several interesting observations that can lead to further debate related to this subject.

When someone who is not a utilitarian looks at this 0 point of worth, they would often say that a human life actually becomes worthless only at a point far beyond this point of 0 utility. While some (myself included) might say that all human life has value regardless, many would say that (to put it in utilitarian terms) a human life becomes worthless only at a point of great negative utility. This shows me that, on the whole, when it comes to matters of deep importance to us, we tend to disagree with the idea of human life being worthless past this 0 point.

This, I believe, can be explained in one of two ways: it is either a product of broad human egoism, or it points to there being an innate value of human life that we have some faculty for perceiving (of course there's so much more to this, but that's not for this post).

To bring this back to the original post, I believe a utilitarian would have a very hard time pitching this idea to the general public.
User avatar
DistractedDodo
New Trial Member
Posts: 8
Joined: November 22nd, 2020, 9:18 am

Re: The zero-point of utilitarianism

Post by DistractedDodo »

Thank you for your replies, and apologies for my late return to this. I had to think this over a couple of times before replying.
Jack D Ripper wrote: November 22nd, 2020, 11:52 pm The zero point, to use your expression, would be when the good and the bad are equal in an individual's life.
Yes.
Jack D Ripper wrote: November 22nd, 2020, 11:52 pm That, in practice, is going to be difficult to specify in a precise way (not to mention that what that would be would depend on the exact version of utilitarianism being considered).

In order to determine these things in any precise way, one must be able to determine the precise value of pleasures and pains (or whatever is relevant to the particular version of utilitarianism under consideration). That is not something that anyone is likely to be able to do.
That's the thing. I would argue, though, that a rough, subjective estimate might often suffice -- although this doesn't help us much if we're talking about non-human animals with a lesser ability to self-report quality of life.
HJCarden wrote: November 24th, 2020, 12:08 am When someone who is not a utilitarian looks at this 0 point of worth, they would often say that a human life actually becomes worthless only at a point far beyond this point of 0 utility. While some (myself included) might say that all human life has value regardless, many would say that (to put it in utilitarian terms) a human life becomes worthless only at a point of great negative utility. This shows me that, on the whole, when it comes to matters of deep importance to us, we tend to disagree with the idea of human life being worthless past this 0 point.
Well, that depends where you put that zero-point. I would agree with you, from a utilitarian point of view, that a life of equal current pleasure and pain -- whatever that means -- is more than worthless, because of its potential to contain more pleasure than pain, and because of the potential for its continuation to contribute utility to other individuals.
HJCarden wrote: November 24th, 2020, 12:08 am To bring this back to the original post, I believe a utilitarian would have a very hard time pitching this idea to the general public.
Agreed!
User avatar
Sculptor1
Posts: 7148
Joined: May 16th, 2019, 5:35 am

Re: The zero-point of utilitarianism

Post by Sculptor1 »

DistractedDodo wrote: November 22nd, 2020, 9:26 am Here's a question that's been bugging me for a while: How does one determine the zero-point of determinism -- the point at which a life is exactly worthless? This ought to be important for questions of animal welfare, for example: if the quality of life of animals in industrial farming is just below the zero-point, and improving their living conditions isn't practicable, the practice should stop because the animals contribute negative value -- whereas if their quality of life is just above this point, the practice can be allowed to continue, even if their conditions probably still should be improved if possible. Is there a term for this "zero point", and has the issue been discussed? It must have been, but I haven't been able to find the name of the problem. Cheers.
Determinism has nothing whatsoever to say about the value of human life, or of any life for that matter. Placing a value on a life is a matter of morality. Determinism and "free will" are amoral ideas, concerned with the ontological and epistemological nature of things.

As for utilitarianism: the value of an animal is primarily its food value.
The moral adjunct to this is the welfare of the animal whilst alive. For that, you have to ask what harm the poor treatment of animals might do to the humans who use them; or what good it would do to people when they can rest assured that the animals are treated well.
User avatar
Jack D Ripper
Posts: 610
Joined: September 30th, 2020, 10:30 pm
Location: Burpelson Air Force Base
Contact:

Re: The zero-point of utilitarianism

Post by Jack D Ripper »

Sculptor1 wrote: December 5th, 2020, 6:41 am
DistractedDodo wrote: November 22nd, 2020, 9:26 am Here's a question that's been bugging me for a while: How does one determine the zero-point of determinism -- the point at which a life is exactly worthless? This ought to be important for questions of animal welfare, for example: if the quality of life of animals in industrial farming is just below the zero-point, and improving their living conditions isn't practicable, the practice should stop because the animals contribute negative value -- whereas if their quality of life is just above this point, the practice can be allowed to continue, even if their conditions probably still should be improved if possible. Is there a term for this "zero point", and has the issue been discussed? It must have been, but I haven't been able to find the name of the problem. Cheers.
Determinism has nothing whatsoever to say about the value of human life, or of any life for that matter. Placing a value on a life is a matter of morality. Determinism and "free will" are amoral ideas, concerned with the ontological and epistemological nature of things.

As for utilitarianism: the value of an animal is primarily its food value.
The moral adjunct to this is the welfare of the animal whilst alive. For that, you have to ask what harm the poor treatment of animals might do to the humans who use them; or what good it would do to people when they can rest assured that the animals are treated well.

Given the title of this thread, I think there is a typographical error in the first sentence, and it should be:

Here's a question that's been bugging me for a while: How does one determine the zero-point of utilitarianism -- the point at which a life is exactly worthless?

That is how I took it and I responded to it as if that is what had been written.

As written, it does not make sense, as you observe.
"A wise man ... proportions his belief to the evidence." - David Hume
User avatar
Jack D Ripper
Posts: 610
Joined: September 30th, 2020, 10:30 pm
Location: Burpelson Air Force Base
Contact:

Re: The zero-point of utilitarianism

Post by Jack D Ripper »

DistractedDodo wrote: December 5th, 2020, 5:30 am Thank you for your replies, and apologies for my late return to this. I had to think this over a couple of times before replying.
Jack D Ripper wrote: November 22nd, 2020, 11:52 pm The zero point, to use your expression, would be when the good and the bad are equal in an individual's life.
Yes.
Jack D Ripper wrote: November 22nd, 2020, 11:52 pm That, in practice, is going to be difficult to specify in a precise way (not to mention that what that would be would depend on the exact version of utilitarianism being considered).

In order to determine these things in any precise way, one must be able to determine the precise value of pleasures and pains (or whatever is relevant to the particular version of utilitarianism under consideration). That is not something that anyone is likely to be able to do.
That's the thing. I would argue, though, that a rough, subjective estimate might often suffice -- although this doesn't help us much if we're talking about non-human animals with a lesser ability to self-report quality of life.
HJCarden wrote: November 24th, 2020, 12:08 am When someone who is not a utilitarian looks at this 0 point of worth, they would often say that a human life actually becomes worthless only at a point far beyond this point of 0 utility. While some (myself included) might say that all human life has value regardless, many would say that (to put it in utilitarian terms) a human life becomes worthless only at a point of great negative utility. This shows me that, on the whole, when it comes to matters of deep importance to us, we tend to disagree with the idea of human life being worthless past this 0 point.
Well, that depends where you put that zero-point. I would agree with you, from a utilitarian point of view, that a life of equal current pleasure and pain -- whatever that means -- is more than worthless, because of its potential to contain more pleasure than pain, and because of the potential for its continuation to contribute utility to other individuals.

...

You seem to be forgetting the fact that as long as one is alive, one also has the potential to have more pain than pleasure in the future, so that living could be much worse than dying. And also, there is the potential to cause more harm than good for others if one continues to live. Consequently, a life of currently equal pleasure and pain could be worse than worthless.

Your assumption that such a life is more than worthless is completely unwarranted.


What a utilitarian would do is try to predict whether the future would hold more pleasure or pain (or whatever it is that the utilitarian is claiming to value), as well as the future effects on others, and base decisions on that.

One of the issues of utilitarianism, or any form of consequentialism, is that it relies on predicting the future, predicting the outcome of things. That can never be done with certainty, so errors will always be happening, even with people who are always reasonable and always are trying their best.
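To illustrate, a utilitarian forecast can be reduced to a toy expected-value calculation. The probabilities and utilities below are, of course, invented; the point is that the verdict is only ever as good as the forecast feeding it:

```python
# A consequentialist choice under uncertainty is an expected-value
# calculation over forecast outcomes. All figures here are made up.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one possible action."""
    return sum(p * u for p, u in outcomes)

option_a = [(0.75, 40.0), (0.25, -40.0)]  # forecast: mostly pleasure ahead
option_b = [(1.0, 0.0)]                   # forecast: nothing either way

print(expected_utility(option_a))  # 20.0
print(expected_utility(option_b))  # 0.0
```

A utilitarian would pick option A here, but only because of those forecast probabilities, which no one can know with certainty.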
"A wise man ... proportions his belief to the evidence." - David Hume
User avatar
Sculptor1
Posts: 7148
Joined: May 16th, 2019, 5:35 am

Re: The zero-point of utilitarianism

Post by Sculptor1 »

Jack D Ripper wrote: December 5th, 2020, 3:13 pm
Sculptor1 wrote: December 5th, 2020, 6:41 am

Determinism has nothing whatsoever to say about the value of human life, or of any life for that matter. Placing a value on a life is a matter of morality. Determinism and "free will" are amoral ideas, concerned with the ontological and epistemological nature of things.

As for utilitarianism: the value of an animal is primarily its food value.
The moral adjunct to this is the welfare of the animal whilst alive. For that, you have to ask what harm the poor treatment of animals might do to the humans who use them; or what good it would do to people when they can rest assured that the animals are treated well.

Given the title of this thread, I think there is a typographical error in the first sentence, and it should be:

Here's a question that's been bugging me for a while: How does one determine the zero-point of utilitarianism -- the point at which a life is exactly worthless?

That is how I took it and I responded to it as if that is what had been written.

As written, it does not make sense, as you observe.
Yes, I thought that as I hit send!!
HJCarden
Posts: 147
Joined: November 18th, 2020, 12:22 am

Re: The zero-point of utilitarianism

Post by HJCarden »

DistractedDodo wrote: December 5th, 2020, 5:30 am
HJCarden wrote: November 24th, 2020, 12:08 am When someone who is not a utilitarian looks at this 0 point of worth, they would often say that a human life actually becomes worthless only at a point far beyond this point of 0 utility. While some (myself included) might say that all human life has value regardless, many would say that (to put it in utilitarian terms) a human life becomes worthless only at a point of great negative utility. This shows me that, on the whole, when it comes to matters of deep importance to us, we tend to disagree with the idea of human life being worthless past this 0 point.
Well, that depends where you put that zero-point. I would agree with you, from a utilitarian point of view, that a life of equal current pleasure and pain -- whatever that means -- is more than worthless, because of its potential to contain more pleasure than pain, and because of the potential for its continuation to contribute utility to other individuals.
Here's another way to illustrate my point, with a slight twist that could bring some interesting responses:
Say that we had some sort of technology that, at birth, could make an entirely accurate prediction of the net pain/pleasure balance that would result at the end of someone's life, AND the net pain and pleasure they would cause in their life. Babies that were born clocking in at or below 0 on this scale would, to a utilitarian, be useless, correct? Their life would be a wash, with nothing positive gained.

However, even if this were real, don't you feel that the opinion of the "man on the street" would be that it still doesn't make these people's lives worthless? If they were just immediately terminated, the world, according to this machine, would have lost nothing or gained something.

But there's some hard-to-grasp intuition, in my opinion, that a utilitarian cannot account for, and that would give us pause when terminating this net-0 human.
So this is the beginning of an argument against utilitarianism for humans, in regards to the idea that the net pleasure/pain from a life is what's important.

However, back to this wonderful technology that exists in this hypothetical world.

Would it be ethical to keep someone alive who would themselves experience extreme discomfort and pain in their life, but was predicted to give great benefit to others? Or the inverse: someone who was predicted to live the best life imaginable but brings massive suffering to those around them? Can the utilitarian make a decision in these cases that would appeal to people as being just?
User avatar
DistractedDodo
New Trial Member
Posts: 8
Joined: November 22nd, 2020, 9:18 am

Re: The zero-point of utilitarianism

Post by DistractedDodo »

Jack D Ripper wrote: December 5th, 2020, 3:13 pm
Sculptor1 wrote: December 5th, 2020, 6:41 am Determinism has nothing whatsoever to say about the value of human life, or of any life for that matter. Placing a value on a life is a matter of morality. Determinism and "free will" are amoral ideas, concerned with the ontological and epistemological nature of things.

As for utilitarianism: the value of an animal is primarily its food value.
The moral adjunct to this is the welfare of the animal whilst alive. For that, you have to ask what harm the poor treatment of animals might do to the humans who use them; or what good it would do to people when they can rest assured that the animals are treated well.

Given the title of this thread, I think there is a typographical error in the first sentence, and it should be:

Here's a question that's been bugging me for a while: How does one determine the zero-point of utilitarianism -- the point at which a life is exactly worthless?

That is how I took it and I responded to it as if that is what had been written.

As written, it does not make sense, as you observe.
Quite right! Apologies for the error.
User avatar
DistractedDodo
New Trial Member
Posts: 8
Joined: November 22nd, 2020, 9:18 am

Re: The zero-point of utilitarianism

Post by DistractedDodo »

Jack D Ripper wrote: December 5th, 2020, 5:20 pm You seem to be forgetting the fact that as long as one is alive, one also has the potential to have more pain than pleasure in the future, so that living could be much worse than dying. And also, there is the potential to cause more harm than good for others if one continues to live. Consequently, a life of currently equal pleasure and pain could be worse than worthless.

Your assumption that such a life is more than worthless is completely unwarranted.
You're absolutely right, that was a bit of a leap on my part! I assumed most humans both experience and contribute more pleasure than pain to others in a life and took that as a starting point, but of course that point should have been explicit if not argued for.
Jack D Ripper wrote: December 5th, 2020, 5:20 pm What a utilitarian would do is try to predict whether the future would hold more pleasure or pain (or whatever it is that the utilitarian is claiming to value), as well as the future effects on others, and base decisions on that.
Also a very good point!
User avatar
DistractedDodo
New Trial Member
Posts: 8
Joined: November 22nd, 2020, 9:18 am

Re: The zero-point of utilitarianism

Post by DistractedDodo »

HJCarden wrote: December 9th, 2020, 10:37 am Here's another way to illustrate my point, with a slight twist that could bring some interesting responses:
Say that we had some sort of technology that, at birth, could make an entirely accurate prediction of the net pain/pleasure balance that would result at the end of someone's life, AND the net pain and pleasure they would cause in their life. Babies that were born clocking in at or below 0 on this scale would, to a utilitarian, be useless, correct? Their life would be a wash, with nothing positive gained.

However, even if this were real, don't you feel that the opinion of the "man on the street" would be that it still doesn't make these people's lives worthless? If they were just immediately terminated, the world, according to this machine, would have lost nothing or gained something.

But there's some hard-to-grasp intuition, in my opinion, that a utilitarian cannot account for, and that would give us pause when terminating this net-0 human.
So this is the beginning of an argument against utilitarianism for humans, in regards to the idea that the net pleasure/pain from a life is what's important.

However, back to this wonderful technology that exists in this hypothetical world.

Would it be ethical to keep someone alive who would themselves experience extreme discomfort and pain in their life, but was predicted to give great benefit to others? Or the inverse: someone who was predicted to live the best life imaginable but brings massive suffering to those around them? Can the utilitarian make a decision in these cases that would appeal to people as being just?
This is a point that's always fascinated me; are you arguing that utilitarianism is unfit for humans because it yields counter-intuitive conclusions? To me that seems a bit like confusing the normative and descriptive perspectives; I absolutely agree utilitarianism doesn't accurately model how humans make decisions -- but I don't think that should have any bearing on the validity of our conclusions from a normative perspective. If we start with a set of axioms (e.g. pleasure is good, pain is bad) and arrive, through logical reasoning, at some counter-intuitive conclusion, does that justify a change to the axioms themselves?
HJCarden
Posts: 147
Joined: November 18th, 2020, 12:22 am

Re: The zero-point of utilitarianism

Post by HJCarden »

DistractedDodo wrote: February 2nd, 2021, 2:15 pm
This is a point that's always fascinated me; are you arguing that utilitarianism is unfit for humans because it yields counter-intuitive conclusions? To me that seems a bit like confusing the normative and descriptive perspectives; I absolutely agree utilitarianism doesn't accurately model how humans make decisions -- but I don't think that should have any bearing on the validity of our conclusions from a normative perspective. If we start with a set of axioms (e.g. pleasure is good, pain is bad) and arrive, through logical reasoning, at some counter-intuitive conclusion, does that justify a change to the axioms themselves?
I do believe that utilitarianism is unfit for humans, precisely because I do not think the best moral system is one that does not respect the vast descriptive aspect of morality. When thinking of the aspects of morality, a system that has only normative value looks like it does not capture everything that we truly associate with morality. I think this is why many philosophers have tried to come up with an ultimate maxim for the idea of morality: because it is something that must resonate with all of us in a way that a set of rules and mathematical decisions won't. So what I'd say is that morality must be partly intuitive, because part of morality is feeling that what we are doing is right, and this can be at least partially the basis of morality.
User avatar
Eckhart Aurelius Hughes
The admin formerly known as Scott
Posts: 5786
Joined: January 20th, 2007, 6:24 pm
Favorite Philosopher: Eckhart Aurelius Hughes
Contact:

Re: The zero-point of utilitarianism

Post by Eckhart Aurelius Hughes »

DistractedDodo,

By the zero-point of utilitarianism (great phrase, by the way), it seems you are asking about the point at which, from a utilitarian perspective, euthanasia would be preferable (or "morally right", as a moralizing utilitarian might call it), particularly if discounting the indirect effects on others of the life in question.

The question raises a common issue with utilitarianism. By seeking to quantify everything onto a one-dimensional scale of utility, a utilitarian seems to run into a problem with the conversion rate between the generally assumed badness of death and the generally assumed badness of pain. For instance, utilitarian math works when one says that two deaths are doubly worse than one, or that half the pain is preferable to twice the pain. Then it's simple math. But converting the negative utility of pain into death units, or vice versa, presents a problem for a utilitarian.
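To make that conversion-rate problem concrete, here is a toy calculation. Every number, including the conversion rate itself, is invented; that arbitrariness is precisely the point:

```python
# The verdict of a utilitarian calculation can flip depending on an
# arbitrary choice: how many "units of pain" one death is worth.

def total_disutility(deaths: int, pain_units: float, pain_per_death: float) -> float:
    """Collapse deaths and pain onto one scale via a chosen conversion rate."""
    return deaths * pain_per_death + pain_units

# Option A: one death, little pain. Option B: no deaths, much pain.
a = {"deaths": 1, "pain_units": 10.0}
b = {"deaths": 0, "pain_units": 500.0}

# Two equally defensible conversion rates yield opposite verdicts.
for rate in (100.0, 1000.0):
    cost_a = total_disutility(a["deaths"], a["pain_units"], rate)
    cost_b = total_disutility(b["deaths"], b["pain_units"], rate)
    better = "A" if cost_a < cost_b else "B"
    print(f"rate={rate}: prefer option {better}")
```

Nothing inside utilitarianism itself tells you which conversion rate is correct, yet the moral verdict depends on it entirely.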

I'm not a utilitarian. In fact, my overall philosophy does not entail moralizing at all, but instead it is spiritual in nature, not judgmentally moralistic. In other words, it reflects what I will do not what I 'should' do, whatever that would mean. With that said, and with my philosophy's focus on freedom, subjectivity, and diversity (rather than mere utility) also in mind, I think I can add a philosophical thought experiment to this discussion that can be usefully explored not just from anyone's perspective but also from a utilitarian perspective, which is as follows:


To keep the thought experiment simple, we can imagine one is the last living creature on Earth. There can be plant life, and thus vegetarian food, but no other humans or animals. This makes things simple in that we need not calculate one's would-be future effect on others, such as the alleged value of livestock as food to humans, or the value of a sick patient to their family, who will miss the patient with great sadness upon the utilitarian murdering of the patient against their will.

Indeed, by imagining ourselves as the last living creature on Earth, we can sidestep the whole messy issue of brutal utilitarian murder altogether. You cannot run over five people with a trolley if there is only one human alive to start with.

Now we only have to deal with utilitarian suicide. The great philosopher Albert Camus once wrote, "There is but one truly serious philosophical problem and that is suicide."

This thought experiment leaves us with two questions:

First, as the last living creature on Earth, what is the zero-point in terms of projected life quality when a utilitarian 'should' commit suicide according to utilitarianism?

Second, as the last living creature on Earth, what is the zero-point at which one would commit suicide?



I cannot answer the first question because as previously mentioned I do not believe in such moralizing and do not consider the word 'should' to be meaningful.

As for the second question, I think I would never commit suicide in such a situation, at least not when in my own sane, rational mind (as opposed to, for instance, being mentally ill with schizophrenia). This is because I believe I have cultivated a certain degree of invincible inner peace, which transcends the one-dimensional analysis of a utilitarian. Thus, I have inner peace both when I am eating delicious food and when I am feeling hunger pain, both when I am sleeping in a comfy bed and when I am getting repeatedly punched in the face, both when feeling pleasure and when feeling terrible long-term ongoing pain. In other words, my transcendental spirit remains transcendentally content no matter how much negative utility my body and egoic human mind experience.

In terms of the spiritual, death is not bad and pain is not bad; rather, they are both good, and life is worth living not just despite but in part because of the yin-yang balance between birth and death. Life is worth living even if that life is full of pain, disaster, and tragedy. Pain, challenges, and discomfort are a crucial part of what makes life worth living, in my philosophy. In fact, it's possible that a boring, painless, drama-free life would be the one not worth living, to me at least; but luckily such a life or world is, in my opinion, totally absurd and couldn't exist, meaning that I believe the yin-yang-like balance in the world between life and death, and between comfort and discomfort, is not a matter of happenstance but of a priori mathematical law. In other words, I believe an imaginary hypothetical heaven of only comfort and pleasure is as absurdly impossible as it would be hellishly boring.

Long story short: since my philosophy has me solely focused on how I play the figurative cards I am dealt in this material world, rather than complaining about how I 'should' have been dealt different cards (whatever that would mean), it doesn't matter what cards I get dealt. I am still happy to play, and I am still completely content as long as I play my best. That's a life always worth living. Inner peace is invincible.

DistractedDodo wrote: February 2nd, 2021, 2:15 pm If we start with a set of axioms (e.g. pleasure is good, pain is bad) and arrive, though logical reasoning, at some counter-intuitive conclusion, does that justify a change to the axioms themselves?
Yes, at least it can. As you probably already know, it's called a reductio ad absurdum.

Presumably, if a valid logical argument has a counterintuitive conclusion, one would choose to consider it evidence against the axioms, rather than evidence for the conclusion, to the degree that the negation of the axioms (as a set) is less counterintuitive than the assertion of the conclusion. Needless to say, this doesn't require that all the axioms be negated; rather, only one axiom needs to be slightly negated (i.e. slightly off from true) for the axioms as a set to be negated.
My entire political philosophy summed up in one tweet.

"The mind is a wonderful servant but a terrible master."

I believe spiritual freedom (a.k.a. self-discipline) manifests as bravery, confidence, grace, honesty, love, and inner peace.
User avatar
DistractedDodo
New Trial Member
Posts: 8
Joined: November 22nd, 2020, 9:18 am

Re: The zero-point of utilitarianism

Post by DistractedDodo »

HJCarden wrote: February 16th, 2021, 6:12 pm I do believe that utilitarianism is unfit for humans, precisely because I do not think the best moral system is one that does not respect the vast descriptive aspect of morality. When thinking of the aspects of morality, a system that has only normative value looks like it does not capture everything that we truly associate with morality. I think this is why many philosophers have tried to come up with an ultimate maxim for the idea of morality: because it is something that must resonate with all of us in a way that a set of rules and mathematical decisions won't. So what I'd say is that morality must be partly intuitive, because part of morality is feeling that what we are doing is right, and this can be at least partially the basis of morality.
That is very interesting -- thank you for elaborating! While I can appreciate the practical value of an ethical system reflecting intuitive beliefs about morality, aren't you at risk of accepting moral relativism? And isn't it the responsibility of ethics, philosophy, and even academia in general to challenge the layman's intuitions and insist on whichever idea is logically superior, regardless of whether that idea is intuitive or convenient? To use an extreme example: if a majority of a population were, say, racist, wouldn't your approach require any viable ethical system to accommodate the belief that one race is superior to another, simply on the grounds that it has to capture the population's beliefs on the subject?

The Maestro Monologue

The Maestro Monologue
by Rob White
May 2022

What Makes America Great

What Makes America Great
by Bob Dowell
June 2022

The Truth Is Beyond Belief!

The Truth Is Beyond Belief!
by Jerry Durr
July 2022

Living in Color

Living in Color
by Mike Murphy
August 2022 (tentative)

The Not So Great American Novel

The Not So Great American Novel
by James E Doucette
September 2022

Mary Jane Whiteley Coggeshall, Hicksite Quaker, Iowa/National Suffragette And Her Speeches

Mary Jane Whiteley Coggeshall, Hicksite Quaker, Iowa/National Suffragette And Her Speeches
by John N. (Jake) Ferris
October 2022

In It Together: The Beautiful Struggle Uniting Us All

In It Together: The Beautiful Struggle Uniting Us All
by Eckhart Aurelius Hughes
November 2022

The Smartest Person in the Room: The Root Cause and New Solution for Cybersecurity

The Smartest Person in the Room
by Christian Espinosa
December 2022

2021 Philosophy Books of the Month

The Biblical Clock: The Untold Secrets Linking the Universe and Humanity with God's Plan

The Biblical Clock
by Daniel Friedmann
March 2021

Wilderness Cry: A Scientific and Philosophical Approach to Understanding God and the Universe

Wilderness Cry
by Dr. Hilary L Hunt M.D.
April 2021

Fear Not, Dream Big, & Execute: Tools To Spark Your Dream And Ignite Your Follow-Through

Fear Not, Dream Big, & Execute
by Jeff Meyer
May 2021

Surviving the Business of Healthcare: Knowledge is Power

Surviving the Business of Healthcare
by Barbara Galutia Regis M.S. PA-C
June 2021

Winning the War on Cancer: The Epic Journey Towards a Natural Cure

Winning the War on Cancer
by Sylvie Beljanski
July 2021

Defining Moments of a Free Man from a Black Stream

Defining Moments of a Free Man from a Black Stream
by Dr Frank L Douglas
August 2021

If Life Stinks, Get Your Head Outta Your Buts

If Life Stinks, Get Your Head Outta Your Buts
by Mark L. Wdowiak
September 2021

The Preppers Medical Handbook

The Preppers Medical Handbook
by Dr. William W Forgey M.D.
October 2021

Natural Relief for Anxiety and Stress: A Practical Guide

Natural Relief for Anxiety and Stress
by Dr. Gustavo Kinrys, MD
November 2021

Dream For Peace: An Ambassador Memoir

Dream For Peace
by Dr. Ghoulem Berrah
December 2021