See last response to Fooloso.

Karpel Tunnel wrote: ↑November 1st, 2018, 11:05 am
Certainly other facets of humans besides emotions should go into that, but one cannot come up with morals without emotions. How do we decide what axioms we have? What we value? How to prioritize which consequences to avoid? These all need value judgments that must in part be emotional.
A sound moral theory, and a set of moral rules derived from that theory, does not assume or assert any values. It is value- and interest-neutral. What it does assume is that everyone values certain things; that there is a hierarchy of values attached to each agent, and that those hierarchies differ from agent to agent. So the aim of the rules is to enable each agent to secure as many of the things he values as possible, i.e., to maximize his own welfare, by means that do not thwart the ability of others to do likewise.
Values are subjective and relative to agents. Value is a pseudo-property assigned to things by agents, not a natural or intrinsic property of anything. To say that something has value is to say that someone, some valuer, desires that thing and would give up something to attain it or retain it. That pseudo-property can be assigned to anything, not just material things. Anything that motivates an agent to act, whether securing food or a Picasso original, reaching a personal goal, winning the hand of Maryanne, understanding quantum theory, raising a child, curing AIDS in Africa, can be assigned a value by an agent. The value assigned, its rank in the hierarchy, is given by what the valuer will give up to secure it.
Given that all values are relative to agents, and none have any objective existence, they cannot be assumed in the premises of a rational moral theory. But they are "built-in" to the theory because the rules of the theory are aimed at maximizing the welfare of all agents, and that welfare is a function of those agents' disparate values.
Sometimes the term "values" is used to denote someone's set of moral principles. That usage should be disparaged. A principle is a rule expressible in a proposition. Though it may express a value an agent places on something, it is not itself a value. And a moral principle that assumes a value of some particular thing is invalid prima facie, because it will certainly not be universal; it will not be true of or for all agents.
Karpel Tunnel wrote:
How do we even take the first step at coming up with a value without emotion?

We don't. Valuing something is itself an emotional response to that thing. But because emotional responses to things differ from person to person, they cannot be assumed in a moral theory, if it is to be universal (applicable to all agents in a given moral field). A moral theory that assumes particular values is biased at the outset.
Karpel Tunnel wrote:
We have no starting point without going on emotional evaluations.

Oh, but we do. We have the fact that people value various things, and the fact that each person's welfare, well-being, is a function of whatever he/she values. So we begin from those facts, not from anyone's idiosyncratic values.
Karpel Tunnel wrote:
Rational thinking has its place also, but without the emotions there is no reason to have goal X, or priority Y, or banning Z as part of our morals. And then to weigh the degree of the effect of some action or consequence our emotional reactions will again have to be involved.

Why? If, when faced with some moral choice, Alternative A will yield 10 units of welfare (per any scale you like) for Alfie, with no offsetting cost to anyone else, and Alternative B will yield 12 units, what role do our emotions have to play in making that decision?
Karpel Tunnel wrote:
OK, tell me an axiom of your moral theory that does not in any way come from you being a social mammal. One devoid of emotional interest.

Oh, morals and principles do indeed presume a social setting. They are only needed in settings where moral agents with diverse interests, values, goals, etc., are able to interact. But devising rules for those interactions requires no emotional involvement by the theorist, other than the desire to come up with a set of rules that does the job --- enabling all agents to maximize their welfare (the goal of the theory and its axiom).
Karpel Tunnel wrote:
But that caring is part of the axioms - we shall not harm someone without good cause, harm should be avoided.

No, caring is not part of the axioms, and cannot be, since it is not universal or self-evident. And the "do no harm" principle is a theorem derivable from the theory, not its axiom.
Karpel Tunnel wrote:
All moral stances will come out of emotionally influenced axioms.

If they do, they will be invalid.
Karpel Tunnel wrote:
And you will not be able to rationally convince people with different emotional reactions - to the suffering of strangers or outgroup people - that they are wrong. You can point to consequences, but they will happily live with them. Sure, some weak people will get killed not because they were evil. Who cares? Others will feel, yes feel, this is unfair, cruel, whatever, based in part on empathetic feelings.

That is all true. But that some people will not, for non-rational reasons, be convinced by it is not a relevant objection, much less a refutation, of a moral theory (or any other kind of theory).