The Necessity of Moral Realism (Moral Objectivism)

GE Morton
Posts: 613
Joined: February 1st, 2017, 1:06 am

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by GE Morton » November 6th, 2018, 11:33 am

Fooloso4 wrote:
November 6th, 2018, 9:30 am

Reason as I understand it is not some timeless, abstract, objective, universal method.
Well, then I think you have adopted a Newspeak definition of that word --- a word coined precisely to differentiate that thinking methodology from methodologies which are transitory, case-dependent, subjective, and idiosyncratic. You've erased the difference between rational and non-rational.

Fooloso4
Moderator
Posts: 3601
Joined: February 28th, 2014, 4:50 pm

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by Fooloso4 » November 6th, 2018, 12:34 pm

Newspeak? The issue is as old as the quarrel between the ancients and the moderns. You are laboring under the modern concept of rationality in search of certainty.

GE Morton
Posts: 613
Joined: February 1st, 2017, 1:06 am

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by GE Morton » November 6th, 2018, 12:56 pm

Steve3007 wrote:
November 6th, 2018, 5:55 am
GE Morton wrote:The reason for that is that when the disagreements are moral, one or both of the positions taken are based on non-rational grounds, thus making them impervious to rational argument.
I think we disagree as to the meaning of the word "rational" in this context. To me, the word "rational" applies only to the logic of an argument, not to the axiomatic beliefs - the underlying goals - on which it might be based. The argument is irrational if it contradicts itself or misuses factual evidence. My point here was that if two people have different underlying goals then they are unlikely ever to agree, even if the arguments that they build on those goals and use to generate proposed actions are perfectly rational.
I agree that rationality does not apply to the selection of goals, the assignment of values, or other interests, all of which are subjective and idiosyncratic. It only applies to the means chosen to attain those goals or satisfy those interests.

A moral theory is a means to an end. When applied to theories, in science, morality, economics, or any other field, rationality requires that:

1. The axioms must be self-evidently true (and thus requiring no proof). An axiom will be self-evident if 1) its denial would be self-contradictory, or 2) its denial would be contrary to all experience, or 3) no falsifying scenario is conceivable.

2. The postulates must be true, cogent, and cognitive, i.e., their truth or falsity can be established by objective methods.

3. The conclusion(s) must follow from the axioms and postulates without logical error.
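
Put schematically, in rough logical notation (the letters A, P, and C below are just placeholders for illustration, not part of any particular theory):

A1, ..., An   axioms: each self-evident, i.e. denying Ai is self-contradictory, or contrary to all experience, or no scenario falsifying Ai is conceivable
P1, ..., Pm   postulates: each cognitive, i.e. true or false by objective methods
C             conclusions: {A1, ..., An, P1, ..., Pm} ⊢ C, with every inference step valid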

I take the end, the goal, of a theory of public morality to be generating rules governing interactions between agents in a moral field (a social setting), and the goal of those rules to be enabling all agents in the field to maximize their welfare, which consists in their securing as many of their personal goals and interests as possible. That goal defines the axiom of the theory. The theory also includes a postulate of Equal Agency, which asserts that all agents in the field have equal status in the eyes of the theory. The rules generated apply in the same way to all, and no agent's interests are preferred to those of any other agent.

Moral theories must include at least one normative postulate; you cannot derive "ought" from "is." The axiom and the Equal Agency postulate are normative postulates. They are not self-evident and not cognitive (that will be true of any normative postulate). Hence they can be rationally rejected, and will be by egoists, elitists, moral anarchists, supernaturalists, and others. But I think they will be deemed true by most thinkers.
In saying "most of them are rationally solvable" you seem to be saying that although the consequences of some choices are not foreseeable, most are. I don't think so. If we take any political disagreement between two people at different positions on the political spectrum where their disagreement stems from a disagreement as to the consequences of competing political policies, I think we usually find that it's not unambiguously possible to resolve their dispute and demonstrate unambiguously one of them to be right and the other wrong.
I agree with that. But most political differences do not spring from disagreements regarding the consequences of alternative policies. They spring from differing moral presumptions; the consequentialist arguments are merely rationalizations for those antecedent views.
But even if there were one --- one whose axioms and postulates are indubitably true, whose logic is impeccable, and whose rules capture the most widely shared principles (as reflected in the laws and customs of most civilized societies) --- it would not necessarily be adopted by everyone, simply because, as noted previously, many "vernacular" moralities are non-rational. And dogmas absorbed "on mama's knee" are not easily dislodged, especially when they are reinforced by the agent's micro-culture.
It would be interesting if you could give an example of this, so we can see if there is indeed a failure of reason involved.
One example, discussed earlier in the thread, was the suggestion that the "value of human life" is considered axiomatic by some persons. But value assignments are subjective, idiosyncratic, and non-rational. No value statement is self-evident, and thus none can serve as an axiom.
Theoretical physicists and ethical theorists usually have to assume that the "horse" is spherical and is travelling in a vacuum. It rarely, if ever, is.
:-) That problem is difficult. Predicting which jet fighter plane will win a similar race will be somewhat easier.

Steve3007
Posts: 5750
Joined: June 15th, 2011, 5:53 pm
Favorite Philosopher: Eratosthenes
Location: UK

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by Steve3007 » November 7th, 2018, 7:46 am

GE Morton wrote:I agree that rationality does not apply to the selection of goals, the assignment of values, or other interests, all of which are subjective and idiosyncratic. It only applies to the means chosen to attain those goals or satisfy those interests.
Yes. In project management speak: it applies to processes, not milestones.
A moral theory is a means to an end. When applied to theories, in science, morality, economics, or any other field, rationality requires that:

1. The axioms must be self-evidently true (and thus requiring no proof). An axiom will be self-evident if 1) its denial would be self-contradictory, or 2) its denial would be contrary to all experience, or 3) no falsifying scenario is conceivable.
In the context of ethics, I don't think this wording applies. In that context, I would take an example of an axiom to be something like "all people should be treated as ends and not as a means to an end". I don't see any of your 3 criteria above applying there.
2. The postulates must be true, cogent, and cognitive, i.e., their truth or falsity can be established by objective methods.
Ideally, yes.
3. The conclusion(s) must follow from the axioms and postulates without logical error.
Yes. I don't think I could deny that logical errors are a bad thing.
I take the end, the goal, of a theory of public morality to be generating rules governing interactions between agents in a moral field (a social setting)...
I agree with this. And rules of society (as opposed to physical laws) are "oughts". They don't govern in the sense that the movements of the planets are sometimes said to be "governed" by the laws of gravity. They prescribe and proscribe.

So its goal is to generate oughts.
... and the goal of those rules to be enabling all agents in the field to maximize their welfare, which consists in their securing as many of their personal goals and interests as possible. That goal defines the axiom of the theory...
Too simplistic, in my view. I think one of the tests of a postulate like:

"The goal of the rules is enabling all agents in the field to maximise their welfare, which consists in their securing as many of their personal goals and interests as possible."

is to consider what it leads to and whether those consequences deviate significantly from what most people in a society hold to be self-evident moral truths. The above postulate puts all the emphasis on enabling agents to maximise their welfare by their own efforts. On the face of it this is natural enough because, by definition, an "agent" is a moral actor, not a passive receiver. And your definition of "agent" doesn't seem to allow for any shades of grey. Either one is an agent or one is not.

But some entities (such as children, and non-human animals) who are not necessarily capable of maximising their own welfare are still generally regarded as needing the protection of moral rules. It may be one of the personal goals of a child's parents to protect that child, and therefore the above postulate might serve to protect that child by enabling the parent-agent to maximise their own welfare (as they perceive it) by protecting the child. But our societies generally tend to take the view that children should be protected by law even if there are no free agents (responsible parents) who see the protection of that child as maximising their own welfare.

The example of children is just one end of a spectrum, to make the point. A small child is an example of a person who is more or less incapable of acting to maximise their own welfare. There is a continuous spectrum of agents, from helpless babies to powerful autonomous adults, with a continuously varying ability to act to maximise their own welfare; that is, a continuously varying degree of power over their own lives and of capacity to work out how to maximise that welfare.

Of course, you're still free to postulate what I've quoted above. But I don't think, when its consequences are fully thought through, that it will be a widely shared postulate if kept in that simple form.
...The theory also includes a postulate of Equal Agency, which asserts that all agents in the field have equal status in the eyes of the theory. The rules generated apply in the same way to all, and no agent's interests are preferred to those of any other agent.
But this runs into the problem of defining who or what is an agent, whether being an agent is an all-or-nothing affair, and whether entities that we might not regard as agents (such as children) nevertheless need protection in law.
Moral theories must include at least one normative postulate; you cannot derive "ought" from "is."
About "ought": If a rule is an "ought", such that a rule forbidding the killing of other agents can be expressed as: "one ought not to kill", then if we examine how that rule is to be enforced, it could also be expressed as something like: "IF one kills THEN one goes to prison". In that case, it could be expressed as simply an empirical proposition about necessary consequences. "IF this THEN that". What this really means is that the "ought" can be expressed in terms of goals. "IF one is to avoid going to prison THEN one ought not to kill". The question then is: Does the word "ought" make sense without any reference (implicit or explicit) to goals?
Moral theories must include at least one normative postulate; you cannot derive "ought" from "is." The axiom and the Equal Agency postulate are normative postulates. They are not self-evident and not cognitive (that will be true of any normative postulate). Hence they can be rationally rejected, and will be by egoists, elitists, moral anarchists, supernaturalists, and others. But I think they will be deemed true by most thinkers.
I thought it was part of the definition of an "axiom" that it is deemed to be self-evident? If so, I don't see how you can also say that the axiom is a non-self-evident normative postulate.

As I said above, I don't think your axiom, as described above, when its consequences are followed by a process of rational argument, would be deemed true by most thinkers, at least not in our societies.
I agree with that. But most political differences do not spring from disagreements regarding the consequences of alternative policies. They spring from differing moral presumptions; the consequentialist arguments are merely rationalizations for those antecedent views.
Maybe. Perhaps there's a mixture. An example I've given previously is the concept of a minimum wage. I've argued previously that people at different ends of the political spectrum tend to disagree as to whether a legally enforced minimum wage is a good or a bad thing by arguing about the consequences of implementing it. They can do that while both agreeing that people earning enough money to live on is a good thing.

Taking the example of what is perhaps the biggest political difference in our societies: Left versus right. Big versus small government. I don't think even that necessarily springs from differing moral presumptions. I think it can be argued to spring from differences of opinion as to the consequences of different sizes of government/different levels of taxation.
One example, discussed earlier in the thread, was the suggestion that the "value of human life" is considered axiomatic by some persons. But value assignments are subjective, idiosyncratic, and non-rational. No value statement is self-evident, and thus none can serve as an axiom.
As we've noted before, one of the fundamental disagreements that that suggestion relies on for its practical meaning is the definition of "human life". If I say "human life begins when a sperm fertilises an egg" and you say "no, it begins at 4 weeks gestation", if our purpose in trying to tie down this definition is the assignment of protection by law, I don't think we will ever resolve our difference.
:-) That problem is difficult. Predicting which jet fighter plane will win a similar race will be somewhat easier.
Yes, if the fighter planes are on autopilot then the messy, complicated involvement of a living creature (whether a pilot or a horse) is removed from the problem!

GE Morton
Posts: 613
Joined: February 1st, 2017, 1:06 am

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by GE Morton » November 8th, 2018, 12:54 pm

Steve3007 wrote:
November 7th, 2018, 7:46 am
GE Morton wrote:A moral theory is a means to an end. When applied to theories, in science, morality, economics, or any other field, rationality requires that:

1. The axioms must be self-evidently true (and thus requiring no proof). An axiom will be self-evident if 1) its denial would be self-contradictory, or 2) its denial would be contrary to all experience, or 3) no falsifying scenario is conceivable.
In the context of ethics, I don't think this wording applies. In that context, I would take an example of an axiom to be something like "all people should be treated as ends and not as a means to an end". I don't see any of your 3 criteria above applying there.
That Kantian precept cannot be an axiom (because it is not self-evident); it is a theorem that needs to be proved. And in fact it is derivable from my axiom/postulates.
I take the end, the goal, of a theory of public morality to be generating rules governing interactions between agents in a moral field (a social setting)...
I agree with this. And rules of society (as opposed to physical laws) are "oughts". They don't govern in the sense that the movements of the planets are sometimes said to be "governed" by the laws of gravity. They prescribe and proscribe.
Yes. They are normative claims, not causal ones. But the "oughts" derived, while they have normative import, are instrumental: i.e., they assert that if you wish to attain the goal stated in the axiom, then you ought to do X, just as in, "If you want to drive a nail, you ought to get a hammer," or, "If you want to pass the exam, you ought to study." An instrumental "ought" is an assertion that doing X is the best, or at least an effective, means of accomplishing Y, and perhaps necessary to accomplish it. "Ought" statements in that sense are cognitive; they are either true or false.
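
Sketched as a rough equivalence (X, Y, and the wording of the right-hand side are illustrative, not a formal analysis):

"If you want Y, you ought to do X"   ≈   "Doing X is an effective (perhaps necessary) means of accomplishing Y"

The right-hand side is an ordinary means-end claim about the world, so it has a truth value; the hammer example above is true just in case getting a hammer is in fact an effective means of driving a nail.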
I think one of the tests of a postulate like:

"The goal of the rules is enabling all agents in the field to maximise their welfare, which consists in their securing as many of their personal goals and interests as possible."

is to consider what it leads to and whether those consequences deviate significantly from what most people in a society hold to be self-evident moral truths.
Oh, no. "What most people hold to be moral truths" surely cannot be a consideration when testing the soundness of a moral theory, any more than what most people hold to be true is relevant to the soundness of, say, the theory of evolution. That is a paradigm example of the ad populum fallacy. Nor is whether a proposition is self-evident a matter of belief. A proposition that does not satisfy one of the criteria given earlier is not self-evident, no matter what anyone believes.
The above postulate puts all the emphasis on enabling agents to maximise their welfare by their own efforts. On the face of it this is natural enough because, by definition, an "agent" is a moral actor, not a passive receiver. And your definition of "agent" doesn't seem to allow for any shades of grey. Either one is an agent or one is not.
The theory also recognizes "moral subjects" --- beings who have moral status, but a different status from agents. Children, some animals, and some human adults are subjects, not agents.
But some entities (such as children, and non-human animals) who are not necessarily capable of maximising their own welfare are still generally regarded as needing the protection of moral rules. It may be one of the personal goals of a child's parents to protect that child, and therefore the above postulate might serve to protect that child by enabling the parent-agent to maximise their own welfare (as they perceive it) by protecting the child. But our societies generally tend to take the view that children should be protected by law even if there are no free agents (responsible parents) who see the protection of that child as maximising their own welfare.
Parents have duties to the children they have brought into the world, and the law may (morally) insist that they perform those duties.
The example of children is just one end of a spectrum, to make the point. A small child is an example of a person who is more or less incapable of acting to maximise their own welfare. There is a continuous spectrum of agents, from helpless babies to powerful autonomous adults, with a continuously varying ability to act to maximise their own welfare; that is, a continuously varying degree of power over their own lives and of capacity to work out how to maximise that welfare.
Persons incapable of effectively pursuing their own welfare are moral subjects, not agents. "Maximize" in the axiom means, however, "maximize to the extent of one's abilities," not "maximize" in any absolute or theoretical sense.
Moral theories must include at least one normative postulate; you cannot derive "ought" from "is."
About "ought": If a rule is an "ought", such that a rule forbidding the killing of other agents can be expressed as: "one ought not to kill", then if we examine how that rule is to be enforced, it could also be expressed as something like: "IF one kills THEN one goes to prison". In that case, it could be expressed as simply an empirical proposition about necessary consequences. "IF this THEN that". What this really means is that the "ought" can be expressed in terms of goals. "IF one is to avoid going to prison THEN one ought not to kill". The question then is: Does the word "ought" make sense without any reference (implicit or explicit) to goals?
No (as addressed above). The "is-ought" gap, pondered by many moral philosophers, disappears when the "ought" is understood instrumentally. In a moral theory the goal is given by the axiom, rather than avoidance of an unpleasant consequence (which would be merely an ad baculum argument).
Moral theories must include at least one normative postulate; you cannot derive "ought" from "is." The axiom and the Equal Agency postulate are normative postulates. They are not self-evident and not cognitive (that will be true of any normative postulate). Hence they can be rationally rejected, and will be by egoists, elitists, moral anarchists, supernaturalists, and others. But I think they will be deemed true by most thinkers.
I thought it was part of the definition of an "axiom" that it is deemed to be self-evident? If so, I don't see how you can also say that the axiom is a non-self-evident normative postulate.
Good question!

The normative axiom is not self-evident. But the following statement is self-evident: Every act of every agent is undertaken to improve his welfare, i.e., to secure some good he desires or attain some goal he seeks. (Compare Aristotle's "Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good; and for this reason the good has rightly been declared to be that at which all things aim.")

Add to this the true but not self-evident premise that the central aim of virtually all moral codes is to encourage acts which improve someone's welfare (charity, generosity, assistance) and forbid acts which reduce it (murder, assault, theft, defrauding, cheating).

Then we add the Equal Agency postulate, which asserts that all agents have equal status in the eyes of the theory.

From those we can get the Axiom, which declares the aim of the theory: "To develop rules of interaction which enable all agents to maximize their welfare."

Though not self-evident, the axiom is indubitably true (provided one grants the previous three statements).
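
Laid out as a rough sketch (the labels S1-S3 are just for reference; this is a summary of the argument above, not a formal proof):

S1 (self-evident):            every act of every agent aims at improving that agent's welfare
S2 (true, not self-evident):  the central aim of virtually all moral codes is to encourage welfare-improving acts and to forbid welfare-reducing ones
S3 (Equal Agency, normative): all agents have equal status in the eyes of the theory
Therefore, the Axiom (normative, not self-evident): the aim of the theory is to develop rules of interaction which enable all agents to maximize their welfare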

Not everyone, of course, will agree to those three statements. Some theists, for example, may hold that the aim of a moral theory is not improving human welfare, but pleasing God (they, of course, then have the burden of defending their ontology). Egoists will deny the Equal Agency postulate, and hold that "right" actions are those which benefit them (they have the burden of explaining what makes them special). The axiom may not be self-evident, but it can be denied only by relying on assumptions even more difficult to defend.
As I said above, I don't think your axiom, as described above, when its consequences are followed by a process of rational argument, would be deemed true by most thinkers, at least not in our societies.
Well, those who disagree will have to explain why they think it false, and defend whatever presumptions underlie that judgment.
Maybe. Perhaps there's a mixture. An example I've given previously is the concept of a minimum wage. I've argued previously that people at different ends of the political spectrum tend to disagree as to whether a legally enforced minimum wage is a good or a bad thing by arguing about the consequences of implementing it. They can do that while both agreeing that people earning enough money to live on is a good thing.
Yes, it will be seen by nearly all as a good thing --- unless "enough" can only be provided by reducing someone else's welfare.
Taking the example of what is perhaps the biggest political difference in our societies: Left versus right. Big versus small government. I don't think even that necessarily springs from differing moral presumptions. I think it can be argued to spring from differences of opinion as to the consequences of different sizes of government/different levels of taxation.
I disagree. There is actually substantial agreement on the consequences: When government is bigger the "poor" will get more, the "rich" less. The underlying moral disagreement is over whether taking from the "rich" to give to the "poor" is morally acceptable.
As we've noted before, one of the fundamental disagreements that that suggestion relies on for its practical meaning is the definition of "human life". If I say "human life begins when a sperm fertilises an egg" and you say "no, it begins at 4 weeks gestation", if our purpose in trying to tie down this definition is the assignment of protection by law, I don't think we will ever resolve our difference.
The question, though, was whether that value statement can serve as an axiom.

Thoughtful post, Steve. Refreshing.

Consul
Posts: 1680
Joined: February 21st, 2014, 6:32 am
Location: Germany

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by Consul » November 14th, 2018, 2:36 pm

GE Morton wrote:
November 8th, 2018, 12:54 pm
Yes. They are normative claims, not causal ones. But the "oughts" derived, while they have normative import, are instrumental: i.e., they assert that if you wish to attain the goal stated in the axiom, then you ought to do X, just as in, "If you want to drive a nail, you ought to get a hammer," or, "If you want to pass the exam, you ought to study." An instrumental "ought" is an assertion that doing X is the best, or at least an effective, means of accomplishing Y, and perhaps necessary to accomplish it. "Ought" statements in that sense are cognitive; they are either true or false.
I disagree, because I think sentences of the form "If you want to x, you ought to y." are equivalent to conditional imperatives of the form "If you want to x, do y!", which are neither true nor false.
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars

Consul
Posts: 1680
Joined: February 21st, 2014, 6:32 am
Location: Germany

Re: The Necessity of Moral Realism (Moral Objectivism)

Post by Consul » November 14th, 2018, 2:43 pm

Consul wrote:
November 14th, 2018, 2:36 pm
I disagree, because I think sentences of the form "If you want to x, you ought to y." are equivalent to conditional imperatives of the form "If you want to x, do y!", which are neither true nor false.
Of course, when someone replies "I want to x, but why should I do y?", and your answer is "Because doing y is good/best for accomplishing x", then "Doing y is good/best for accomplishing x" is a sentence which is either true or false.
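
To lay the distinction out (a rough sketch; the numbering is just for reference):

(1) "If you want to x, you ought to y."            -- the disputed form: equivalent, on my reading, to (2)
(2) "If you want to x, do y!"                      -- a conditional imperative: neither true nor false
(3) "Doing y is good/best for accomplishing x."    -- an ordinary assertion: true or false
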
"We may philosophize well or ill, but we must philosophize." – Wilfrid Sellars
