Gertie wrote:The Ises, the state of affairs in the world, affect people's quality of life, including the actions of conscious agents who make choices which contribute to the state of affairs. And so that's where the Oughts come in, we Ought to act in ways which are overall beneficial to the well being of conscious creatures.

I agree. That is the gist of the Axiom I offered.
Gertie wrote:A crucial issue which tends to arise is balancing the 'common good' against individual freedom, which is where you and I have disagreed in the past. You treat individual freedom as one side of the equation, vs the good of others, whereas in my model it's one element of well-being, which will inevitably be in the mix along with other elements, in creating a society which inevitably involves compromises. I believe my model is more congruent with the foundational principle of the well-being of conscious creatures, which I think we share.

There is no conflict between individual freedom and the good of others. All agents in the field have the same freedom to pursue the good as they define it, and the theory is indifferent --- neutral --- with respect to those various goals and goods (subject to the exception in Note 2). An agent's freedom can cause no harm to others unless he violates a rule of the theory.
Gertie wrote:Fair point. There are two connected issues for me, the pragmatic one of what do we as a society do in the absence of a basis for Oughts to loosely but effectively cohere around. And can we come up with a philosophically valid new basis for Oughts in a world without Objective Morality.

A moral theory or system that is philosophically valid will necessarily be objective. Its premises will be self-evidently true and its theorems will follow from them. Since the truth conditions for the premises are public, they are objective, and hence so will be the theorems. Also, "effectively but loosely cohere around" sounds more like a political goal and strategy than a philosophical one. The history of vernacular morality is a litany of "oughts" some shaman or demagogue or tyrant has persuaded (or compelled) majorities to rally around. Those successes lend no philosophical credibility to the "oughts" being peddled.
There is a way to derive "ought" from "is," in a sense. It is done all the time with the instrumental sense (as opposed to the moral sense) of "ought." For example, "If you wish to drive a nail, you ought to get a hammer," or "If you want to get a good job, you ought to get a good education." In this sense "one ought to do X" means, "Doing X is an effective means of accomplishing Y." The latter is an "is" statement, and is objective and empirically testable.
Now . . .
Gertie wrote:And I believe the well-being of conscious creatures is a philosophically sound foundation for society overall, and one which resonates with people on an emotional and intellectual level, which makes it potentially workable.

So we seem to agree on the goal of a moral theory. What if the theorems of some such theory, each of which prescribes some moral rule, are effective (or even necessary) means of reaching that goal? Ought we obey those rules? Surely we ought to do so in the instrumental sense of "ought." And if the goal is a moral one (which I think we'll agree the above goal is), then do those instrumental "oughts" become moral "oughts"?
I believe your claim there is equivalent to the Axiom I offered in a previous post: "All agents in the moral field should adhere to rules per which goods can be maximized, and evils minimized, for all agents." Is it not?

Yup I think so.
Gertie wrote:Here's where I think we part company. It seems to me you tag on your foundational guiding principle at the end, as if it's derived from your definitions, roles and rules - your methodology, whereas I see it as the starting point for coming up with Oughts, which then require methodology for implementation. So I think your construction here has a problem - . . .

Oh, no. The Axiom is not derived from the postulates. It bears no relation to them whatsoever. It is a free-standing moral goal, one which most people (or at least most moral philosophers) would agree is a worthy one, and one implicit if not explicit in nearly all moral theories. The postulates, except for the Equal Agency postulate, have no moral content. They merely describe empirically verifiable features of the moral field and the moral agents who populate it --- features that constrain the choices of theorems (rules) that will be workable. They establish "boundary conditions."
Gertie wrote:Is this genuinely the formulation you'd come up with starting from the axiomatic guiding principle of 'the welfare of conscious creatures'? It seems like you might be deriving your foundational axiom from your constructed categories, rather than the other way round?

As I said above, the Axiom is independent of the postulates. But the means of achieving the goal of the Axiom are constrained by features of the agents in the field and of their social setting.
Gertie wrote:If I'm right, how do you justify this? If you think I'm wrong, could you reformulate it, beginning with the foundational principle and showing how your rules and categories meet the goal of optimising 'the well-being of conscious creatures'?

I'm not sure what you're counting as a "category." And no rules (theorems) are presented. The postulates do not create "categories;" they are simply descriptive statements about the agents and their setting, and are (I believe) empirically verifiable. But if you think one or more of them is false, please point out the error you see.
Take this -
1. Postulate of Liberty: There are no a priori moral duties or constraints. The only duties and constraints binding upon moral agents are those derived from a sound moral theory.
Corollary: Postulate of Free Agency: The agents in the moral field are not parties to nor bound a priori by any universal agreement or compact.
Corollary: Postulate of Autonomy: The agents in the moral field are not related as elements of an organic unity, and are not subject to any external imperatives or constraints other than those imposed by the laws of nature . . .
Out of time for today. Will pick this up in the AM.
PS: I haven't covered all your points. Too many! If there is one I've neglected to which you'd particularly like a response let me know.
-- Updated September 7th, 2017, 9:24 am to add the following --
Ranvier wrote:Before I offer my further thoughts on the subject of "Morality", it would be pertinent to define the meaning of the term from my subjective perspective. I perceive morality to be a "description" of the general "moral standard" within the society. Such moral standard stems from several cultural sources, which I mentioned before: religion, politics, economy, history...etc. The given "culture", in context of all these factors, contains certain "values" that are important to most members of the society: marriage, family, life in general, freedom, or even equality (gender, race, physical attributes such as disability). It's important to make a distinction between "values" and "value", which are two entirely different concepts . . . I personally reserve the right to view "Morality" as simply taking the "pulse" of a given culture, rather than a set of rules imposed on the people. Of course, the more uniform the culture is, the more social pressure there will be for any individual to conform to the "social norm". Therefore, discussing any "universal morality" must imply conscious "equalization" of the culture. However, as GEM points out in one of his posts, in the "society of strangers", it's virtually impossible, nor would it be wise.
Your last statement there is correct, if you define morality as you have done --- as the de facto set of norms and values (what I called the "vernacular morality") dominant in a given culture. Most philosophers and anthropologists, however, prefer the term "mores" to denote those culture-dependent norms. And of course, any attempt to universalize such norms, to apply them to other cultures or uniformly within a pluralistic culture, would be futile and likely have unpleasant consequences.
But that last statement of yours is not true if you define "morality" more generally and abstractly, with no specific normative content. A universal morality will be possible if it is constrained by features of human nature and human societies that are truly universal --- and there are some. I mentioned in a previous post that there are some moral rules that are universal, or nearly so --- nearly all cultures prohibit murder, mayhem, stealing, and cheating, for example. So a moral system that proscribed only those behaviors, and perhaps a few others, could be a universal morality.
One implication of this is that the rules of --- the duties and constraints imposed by --- a universal morality will be few. In fact, the postulates and axiom of the moral theory I sketched generate only one duty and one constraint, and the duty is conditional.
I'll let you deduce what those might be.
-- Updated September 7th, 2017, 10:00 pm to add the following --
Gertie wrote:1. Postulate of Liberty: There are no a priori moral duties or constraints. The only duties and constraints binding upon moral agents are those derived from a sound moral theory.
Corollary: Postulate of Free Agency: The agents in the moral field are not parties to nor bound a priori by any universal agreement or compact.
Corollary: Postulate of Autonomy: The agents in the moral field are not related as elements of an organic unity, and are not subject to any external imperatives or constraints other than those imposed by the laws of nature . . .
and
2. The rules generated by the theory govern only the acts of agents, and are indifferent to the ends of actions. However, if a certain end should entail a violation of the rules, i.e., it cannot be pursued without violating a rule of the theory, it is effectively ruled out as a permissible end (a malum in se).
It's not looking for ways to work towards an end goal, the well being of conscious creatures, rather you're using a formulation which picks out and prioritises the freedom of the individual, imo. And I would say that's only one part of optimising the wellbeing of conscious creatures.
As I mentioned in the last installment those postulates are not moral posits. Most of them are matters of fact, subject to empirical testing. The Postulate of Liberty is somewhat different; it aims to head off question-begging. The aim of the theory is to generate moral duties and constraints, and it aims to be a complete theory. Hence it posits that the agents are tabulae rasae, morally speaking, at the outset; they have no duties not generated by the theory. The two corollaries are straightforward empirical claims: there is no "social contract" to which everyone is a party, and civilized societies are not "organic unities."
The Postulate of Liberty can also be read with another sense --- it proposes that no one enters the world "naturally" burdened by any duties or constraints (we are "born free"). Whatever duties we think people have or should honor are acquired later, via maturation, learning and other social experience.
The postulates, taken together, are not steps toward the end goal. They're facts or logical requirements we must take into account in order to move effectively toward that goal.
That's why people like Ranvier and I see it as effectively the morality of psychopathy: it's structured around, rooted in, your construction of a moral agent who should be free to pursue her own desires, and only curtailed if these cause your system to fail.
The theory would only fail if the rules it generated, though consistently followed, did not maximize goods and minimize evils for all agents. No act of Annabelle's could cause the theory to fail, but if an act of hers thwarts that goal, should it not be curtailed? And if it does not thwart that goal, then it should not be curtailed, because it (presumably) advances her own welfare, and her welfare is one of the welfares we're seeking to maximize.
Do you think agent actions should be curtailed for some reason other than that they thwart the theory's goal? If so, what are those reasons?
But perhaps I'm missing the thrust of your complaint there. If so, please amplify.
Right, that needs clarification. In the sense of an objective 'God's eye pov', there is no reason to value my quality of life above yours, the objective logic suggests this quality of mattering belongs to you as much as me.
I agree in essence, but would state the matter somewhat differently. There are no reasons to value anything (I'm speaking of end goods only; there are reasons to value means goods). No one can explain, without circularity, why they value anything. Values (in the "economic sense" -- h/t to Ranvier), tastes, preferences, desires for any end good, are all spontaneous and inexplicable; we just have them. This fact manifests itself in the current controversies over "gay rights." Why do some people prefer sex partners of the same sex? It is not a "choice," any more than preferring partners of the opposite sex is a choice. A preference for chocolate ice cream over vanilla is not a choice either, nor a preference for Mozart over Beethoven.*
Alfie just does value his own life above Bruno's, who is a stranger to him, and Bruno likewise values his life above Alfie's. There is no logic or reason behind either of their valuations. And it is not logic that suggests to me that Alfie and Bruno value their lives (and other things). I need only observe their behavior to affirm that.
So if we add a postulate of Equal Agency to our theory (all agents subject to the theory have the same moral status and are equally bound by the rules), then we'll be obliged to devise rules which give equal weight to the goals and interests of all agents.
But if our theory is built from the foundation up, based on the axiom of optimising the wellbeing of conscious creatures, then we have to look at the context. In the context of a complex interacting society, it will of course be that people don't start from the same place, aren't equal in that sense, and to work towards optimising the welfare of all, some might have to help out others more, by taxation and redistribution for example. How does your formulation provide for this?
Ok, another break. This forum is becoming almost a full-time job! More tomorrow.
* There are, presumably, possible explanations for those interests and preferences involving the subtleties and idiosyncrasies of neural wiring and the effects of random environmental influences upon it. But no one knows what those are and they don't matter, morally speaking.