Thomyum2 wrote: ↑June 28th, 2020, 12:38 pm
It seems that in discussions of political morality such as this, we primarily talk of the legitimate role of government as being to protect freedoms - i.e. to use its punitive power to restrict the behavior of those who would commit a moral transgression by violating another's rights - but we rarely discuss in depth the idea of moral duties and the role of government with respect to them. But if there are moral duties that exist as counterparts to the moral claims that a government protects, is it not also a legitimate use of governmental power to ensure that moral duties are performed, in addition to protecting rights? In other words, is it not an appropriate use of force to punish both actions that are moral transgressions and failures to perform one's duties? I'd be interested to hear thoughts about this.

First we need to distinguish clearly between duties (acts one is morally obliged to do) and constraints (acts one is morally forbidden to do). I assume, first, that no person is bound by any duties or constraints a priori, i.e., merely as a consequence of his existence, or because he is a "member of society," i.e., because he finds himself in a social setting. Empirical facts such as those cannot entail moral obligations (the "is-ought gap").
But any moral duties or constraints proposed must be justified somehow, via some sort of evidence or argument; otherwise they will be arbitrary. So which duties and constraints an agent is subject to depends upon the moral theory one holds --- upon its axioms and postulates, and upon what duties and constraints can be derived from them.
I take the central and defining aim of morality to be maximizing the welfare of all agents in a moral field. That seems to be the underlying aim of most, though not all, moral codes throughout history --- establishing rules prohibiting people from injuring one another and encouraging them to aid others in need, so that each may live the most fulfilling life possible to him. So the Axiom, or "fundamental principle," of the theory is, Devise principles and rules which enable all agents to maximize their welfare.
An important postulate is the "Relativity" postulate. "Good" and "evil" are defined relative to agents. A "good" is anything an agent desires and acts to secure or preserve; an "evil" anything an agent desires to avoid or be rid of. There is no such thing as "intrinsic" good or evil. Good and evil are not properties of things, but pseudo-properties imputed to things by agents, denoting an agent's approval or disapproval of, his desire for or desire to avoid, that thing. Hence goods and evils are subjective and idiosyncratic, and the things to which those adjectives are applied have, in themselves, no moral significance. They may have great significance to the agents who desire them, or desire to avoid them, however. Indeed, an agent's welfare is measured by the extent to which he has secured the things he deems good and avoided the things he deems evil.
I also assume a postulate of Equal Agency --- that all agents in a moral field (a social setting) have the same status, i.e., their various interests and goals have equal weight and any rules developed apply to all of them in the same way.
From these we can immediately derive a constraint --- a rule prohibiting agents from actions which would reduce the welfare of another agent. This follows directly from the Axiom and the Equal Agency postulate. An act of an agent which improves his welfare by imposing a loss on another agent is inconsistent with the mandate that moral rules enable ALL agents to maximize their welfare. It also violates the Equal Agency postulate, since the losing agent's interests and status are subordinated by that act to those of the gaining agent. Hence acts by agents to enhance their welfare must be "Pareto improvements."
https://en.wikipedia.org/wiki/Pareto_efficiency
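The Pareto condition can be sketched as a toy check. This is a minimal illustration only, with hypothetical welfare scores that each agent assigns on his own scale (the names and numbers are mine, not drawn from the post; per the Relativity postulate, such scores are not actually comparable across agents --- each agent's before/after figures are meaningful only to him):

```python
def is_pareto_improvement(before, after):
    """True if no agent is worse off and at least one is better off,
    each agent judged by his own (subjective) welfare scale."""
    assert before.keys() == after.keys()
    no_one_worse = all(after[a] >= before[a] for a in before)
    someone_better = any(after[a] > before[a] for a in before)
    return no_one_worse and someone_better

# A voluntary trade both parties welcome is a Pareto improvement:
print(is_pareto_improvement({"Alfie": 5, "Bruno": 5},
                            {"Alfie": 7, "Bruno": 6}))   # True
# A gain taken at another agent's expense is not:
print(is_pareto_improvement({"Alfie": 5, "Bruno": 5},
                            {"Alfie": 9, "Bruno": 3}))   # False
```

Note that the check never compares Alfie's score with Bruno's; it only compares each agent's situation with his own prior situation, which is all the constraint requires.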
We can then define a "right:" an agent has a right to whatever he may have acquired without violating the above constraint (a property right), and a right to do whatever he wishes to do that does not violate that constraint (a liberty right).
With that definition in hand, we can state the above constraint as, "Don't violate others' rights."
What of duties?
One duty follows immediately from the Axiom and postulates. A person who violates the above constraint, whether intentionally or by inadvertence, acquires a strict, unconditional duty to make good the loss he has caused or the harm he has inflicted, to the maximum extent feasible.
One also has strict duties to keep one's promises and honor one's contracts, unless doing so is rendered impossible by events or other factors beyond the agent's control, in which case the contracting/promising agent acquires a duty to mitigate any losses incurred by the promisee or other parties to the contract in reliance upon the promise or contract. The burden of that loss should fall equally on all parties to the contract (since none of them had any control over the intervening event).
We can also derive a duty to aid, i.e., an agent ought to come to the aid of another agent in distress, who has been injured or who faces imminent injury or loss of welfare. That duty is conditional, however --- it is only operative when 1) the impending loss to the distressed agent is substantial and the cost to the benefactor to render aid is either negligible or recoverable, 2) the agent in distress has not brought his distress upon himself via some risky, unwise, reckless, or immoral action, and 3) the agent in distress is not known to the benefactor to have previously shirked this same duty.
That such a duty advances the welfare of all agents should be obvious. But it does so only if the three above conditions are met. With respect to the first, if the cost to the benefactor to render aid is substantial and not recoverable, then the duty would not satisfy the Pareto condition. If the second condition is not met then the universal or even general performance of the duty would encourage risky, reckless, etc., actions. If the third condition is not met then a "tragedy of the commons" problem arises --- people seek to benefit from others' performance of that duty, but shirk it when it falls upon them, in violation of the Axiom.
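The three conditions can be collected into a single predicate. A hypothetical sketch (the AidSituation structure and its field names are my own labels for the conditions, not the author's):

```python
from dataclasses import dataclass

@dataclass
class AidSituation:
    loss_substantial: bool              # cond. 1: impending loss to the distressed agent is substantial
    cost_negligible_or_recoverable: bool  # cond. 1: benefactor's cost is negligible or recoverable
    self_inflicted: bool                # cond. 2: distress caused by own risky/reckless/immoral act
    known_shirker: bool                 # cond. 3: agent previously shirked this same duty

def duty_to_aid_operative(s: AidSituation) -> bool:
    """The (conditional) duty to aid is operative only when all three conditions are met."""
    return (s.loss_substantial
            and s.cost_negligible_or_recoverable
            and not s.self_inflicted
            and not s.known_shirker)

# Singer's drowning child: substantial loss, negligible cost (muddy clothes):
print(duty_to_aid_operative(AidSituation(True, True, False, False)))  # True
```

Whether a given cost really is "negligible or recoverable" is, of course, a judgment only the prospective benefactor can make, which is why the duty remains discretionary.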
This duty would apply in scenarios such as Peter Singer's famous example:
" . . . if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it. . . . An application of this principle would be as follows: if I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing."
---Peter Singer, Famine, Affluence, and Morality (1972)
https://www.utilitarian.net/singer/by/1972----.htm
Singer's principle fails as a general and unconditional rule, however, due to the subjectivity of good and evil, as discussed above. The only "moral importance" anything has is its importance to some agent, and that will vary from agent to agent for any given thing. We can be fairly sure, based on everyday experience, that most people place greater value on others' lives --- and certainly on their own --- than on the cost of laundering their clothes. But if the sacrifice required is more substantial, such as paying high taxes to relieve poverty for thousands, or putting your kid through college, or fulfilling a lifelong desire to open and operate a restaurant, the issue becomes "muddier," so to speak.
We can't resolve these conflicts by dogmatically asserting that some X, e.g., "human life," is more valuable than Y, e.g., anything else. The value of human lives, like the values of everything else, varies with the agent doing the evaluation and the life at risk. Most people would make some sacrifices, even substantial ones, to save the life of the drowning child, but not to save the life of an escaped convict convicted of multiple homicides. Many, indeed, would forfeit their own lives to save the lives of certain others, or even to further some abstract goal.
The relativity of values to agents underlies the problem, intractable in economics and the other social sciences, of interpersonal comparisons of welfare (or utility), which must be possible for any proffered "social welfare function" which depends upon utilitarian reasoning, as virtually all do. The value of any X to Alfie can only be determined by observing Alfie's behavior with respect to X; it can't be assumed a priori for all agents in a moral field.
(There is a huge literature on this problem. Here is one summary:
https://homepages.warwick.ac.uk/~ecsgaj/icuSurvey.pdf)
For this reason the above duty to aid must be discretionary, as opposed to strict, in addition to being conditional. It cannot be made strict without violating the Equal Agency postulate, which holds all agents, and their interests and values, to have the same status. Only Alfie can compare the costs and risks of rendering aid with the benefits to be gained, and other agents may not override his judgment. However, agents who disagree with his judgment may thereafter justifiably withhold aid from him when he finds himself drowning in the pond.
Incidentally, a beneficiary of this duty himself acquires a duty --- to compensate the benefactor for any losses he sustained by rendering the aid, or at least offer to do so.