
Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate



#90129
Why would brain size be relevant? You assume creatures with many cells firing off pain signals must feel more pain than those with fewer cells doing so? What about a hypothetical creature with only one nerve cell: if its one cell was firing off a pain signal, the creature would be 100% consumed with pain, no?
#90151
As a probabilistic measure of the overall set of whatever it is that does determine the intensity and/or probability of sentience, I think intelligence is a better -- and easier -- measure than brain-size, evolutionary closeness to humans, complexity of brain architecture, or the sophistication or strength of reaction to destructive stimuli (e.g. being delimbed or poked repeatedly with a pin). Any one of these traits could serve as a probabilistic measure for all the others, since I think we can agree they would all correlate with each other. One argument for intelligence being the surest trait to measure is that I think part of the reason we are so intelligent, particularly in measurable, noticeable ways, is that we are so sophisticatedly conscious, with our conscious identity being like an imagined parental figure living in our head, constantly analyzing what happens to us and using its sophisticated prediction ability to interfere by taking over parts of our operation for our benefit. Of course, it's not a perfect connection; for instance, a sleep-walking person displays certain types of intelligence even though they are unconscious and, despite popular misconception, not even dreaming (if I remember correctly what I've read in various places, as I'm liable to be wrong). Yet the ways in which a sleepwalking person (in a way the closest real-life equivalent of a p-zombie) is noticeably less intelligent than a waking person are exactly the ways in which intelligence at least probabilistically signifies consciousness, both its presence and its intensity, much like those other factors (brain-size, evolutionary closeness to humans, complexity of brain architecture, sophistication or strength of reaction to destructive stimuli) but, I think, even better and more readily measurable.

Wowbagger, let me make up some fancy-sounding acronyms that you and I can use going forward to more simply refer to some mutually understood concepts we've been using:
  • PS - Probability of being Sentient (in the sense of motivated consciousness of stimuli as opposed to mere mechanical nociception, i.e. actually having a negative mental experience as a result of nociception)
  • EIS - Estimated Intensity of Sentience (if sentient/conscious at all)
  • PAIS - Probability-Adjusted Estimated Intensity of Sentience (PS * EIS)
Also, as a simple benchmark, let's assume, as I think we have been, that for a typical adult human PS = 100%, EIS = 100, and thus PAIS = 100.

Wowbagger, you say for mantis shrimp you estimate the EIS is 10 times that of spiders, so is it correct that your estimations are as follows:

Rabbits: PS of .97, EIS of .4, and thus PAIS of 0.388 which is a little over one third.
Mantis Shrimp: PS of .97, EIS of .04 and thus a PAIS of 0.0388
Spiders: PS of .01, EIS of .0001, and thus a PAIS of 1E-06 which is one millionth.
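Just to make the arithmetic behind those figures explicit, the PAIS computation can be sketched as follows. This is a minimal illustration only: the function and variable names are made up, and the human benchmark is normalized here to 1.0 rather than 100 so it lines up with the fractional figures listed above.

```python
# Minimal sketch of the PAIS arithmetic: PAIS = PS * EIS.
# The estimates are the hypothetical figures discussed above, with an
# adult human normalized to PS = 1.0 and EIS = 1.0 (so PAIS = 1.0).

def pais(ps: float, eis: float) -> float:
    """Probability-Adjusted Estimated Intensity of Sentience."""
    return ps * eis

estimates = {
    "human":         (1.00, 1.0),
    "rabbit":        (0.97, 0.4),
    "mantis shrimp": (0.97, 0.04),
    "spider":        (0.01, 0.0001),
}

for species, (ps, eis) in estimates.items():
    print(f"{species}: PAIS = {pais(ps, eis):.6g}")
```

This reproduces the figures above: rabbits at 0.388 (a little over one third of a human), mantis shrimp at 0.0388, and spiders at one millionth.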

After watching that video of the mantis shrimp and learning about the deep complexity of their behavior and their relatively powerful ability to communicate information, and interesting things like the fact that some species of mantis shrimp form monogamous long-term relationships lasting up to 20 years and coordinate their activities and childcare with their mate (which again I assume they do in part by utilizing their powerful ability to communicate information), I'm starting to wonder if mantis shrimp don't have a higher PAIS than rabbits. Also, mantis shrimp aren't that tiny, not that I think size is such a big deal. In fact, they can grow to be over a foot long, and a few exceptional cases have even been recorded as being as big as 15 inches (38 centimeters) long. But you are saying that the PAIS -- the experience, adjusted for the probability that they do not experience anything at all -- of rabbits is 10 times more intense than that of these smart, complexly behaved, communicating mantis shrimp? And that humans are only about 3 times as intense as rabbits? Even if mantis shrimp have a smaller PAIS than rabbits, I would think they are proportionally closer to rabbits than rabbits are to humans.

I'm not necessarily saying your estimation for spiders or mantis shrimp is too low, or -- if your estimations are made under certain assumptions that don't generally exist in the real world, such as those that discount the long-term effects, the existential angst ("why me?"), the lack of a magical restoration after a few hours, the possible lost potentiality, the unique role of happiness versus mere pain, etc. -- that your estimation for mantis shrimp or rabbits is much too high. Rather, what I can firmly say is that the proportions seem way off to me. How is this mantis shrimp's PAIS so low compared to either rabbits or humans, while rabbits are so close to humans? And how are spiders so intensely much lower than mantis shrimp and rabbits, while mantis shrimp and rabbits are so close to humans compared to spiders? It would be one thing if you were claiming all these species were roughly equal to humans in PAIS, or at least roughly equal to each other in being a certain small fraction (e.g. 1/3) of humans. I would disagree, but I'd have a hard time justifying my side any more than yours. But in this case you are presenting conflicting points without a reason, which to me makes your model unbelievable. You need a justification for this proportionally extreme distinction between mantis shrimp and rabbits versus rabbits and humans, and for the even more proportionally extreme distinction between spiders and mantis shrimp than between any combination of mantis shrimp, rabbits, and humans, a distinction that makes one tens of thousands of times more than the other. What is your justification for these extremely disproportionately low numbers? You need some arguably relevant factor that is so significantly different between spiders and mantis shrimp but not between mantis shrimp and rabbits, and then proportionally more different between mantis shrimp and rabbits than between rabbits and humans.
Remember, you are not just saying spiders have an experience half as intense as mantis shrimp, or a 10th, or even a 100th of mantis shrimp, but thousands if not tens of thousands of times less intense than mantis shrimp.
Wowbagger wrote:Or do you think insects, if sentient, can feel just as much pain as a blue whale getting harpooned?

No, but that's not simply because they are small. I think intelligence would be a better probabilistic measure than brain-size, let alone physical size, which includes all the whale blubber, for a variety of reasons. Brain-size and intelligence correlate with each other of course, but intelligence may reflect the proportions better, since we can imagine the issue of size becoming exponentially greater at smaller scales, where the efficiency of brain use becomes moot because size limits the power of even a super-efficiently used brain. Where a brain is so small that it has an exponentially limiting effect on consciousness, wouldn't it also have a proportionally limiting effect on intelligence? So where we can measure intelligence in some way more accurate than estimating it from brain-size, I think intelligence gives a more accurate probabilistic estimation of PAIS than brain-size, namely since intelligence gives us all the information of brain-size and then some. For instance, intelligence speaks not only to brain-size but to brain functioning, brain efficiency and, as you say, brain architecture. There's also the issue of the brain being assisted by information processing in other organs, such as occurs in the extremely sophisticated eye-system of the mantis shrimp, which frees up space in the brain that would otherwise have gone to that information processing.
Wowbagger wrote:And then add to this that mammals have a "fancier" brain architecture, for what that's worth.

What do you mean exactly by "fancier" brain architecture? Aren't significant, noticeable differences in intelligence more indicative of fancy brain architecture than merely happening to belong to a particular biological class, especially when, for the sake of argument, we are comparing animals with nearly identical brain-sizes?
Wowbagger wrote:Obviously I can't tell you for sure that these things are relevant, and to what extent they are, but doesn't it seem the most plausible estimate if we take these factors into consideration?

Sure, we can take into account brain-size, 'brain-architecture', and specifically 'degree of similarity in brain-architecture to humans', but more than any of those three factors I think intelligence is an even stronger probabilistic measure of whatever it is you are trying to probabilistically measure using those other factors. Of course, it's no surprise then that intelligence also correlates probabilistically with brain-size, 'brain-architecture', and specifically 'degree of similarity in brain-architecture to humans'.
Wowbagger wrote:Since we can't measure qualia, we have to use indirect evidence to see whether beings have them. Holding evolutionary relatedness important isn't phylumism, it's a valid probabilistic argument, given that WE certainly are sentient, and given that small changes to brain architecture are less likely to change that than large changes. That doesn't mean that beings in different phyla can't be sentient, of course.

In the absence of more direct evidence of consciousness and sentience, either at all or by degree or intensity, I agree that the degree of similarity to humans, particularly in issues involving the brain and would-be mind, gives us a probabilistic estimate of a species's, subspecies's, or even individual's PAIS. This is why from the get-go I have been arguing for using, among other things, human-like intelligence and human-like behavior seemingly associated with our sophisticated consciousness (which, for instance, make dolphins more like us than giraffes or cows, regardless of the recentness of a common ancestor). I think intelligence, and particularly human-like intelligence, is a more accurate probabilistic indicator of PAIS than brain-size and human-like brain architecture.
Wowbagger wrote:High intelligence or self-awareness might not increase how much it hurts you when you get hurt. Why would it? Brain size seems most relevant, and mammalian vs. arthropod brain architecture too seems relevant.

Big brain size or human-like brain architecture (or evolutionary history) "might not increase how much it hurts you when you get hurt; why would it?" Intelligence, especially human-like intelligence, seems to me most relevant, and human-like behavior seemingly associated with our consciousness seems relevant too.
Wowbagger wrote:Picture a spider and a rabbit, doesn't there seem to be an order of magnitude of difference?

Yes, at least for most species of spider. I think the order of that magnitude is roughly proportionally equal to or less than the order of magnitude of difference between rabbits and humans.
Wowbagger wrote:As for humans and rabbits, sure, in many regards the differences are huge as well. But are they so in regard to how intense suffering is experienced? That's not at all clear to me.

Why is it not clear to you that the huge differences between humans and rabbits are "in regard to how intense suffering is experienced", while it is clear to you that similarly huge differences between rabbits and mantis shrimp are in regard to how intense suffering is experienced, and that the similarly huge (but maybe even smaller overall, since they are both arthropods) differences between mantis shrimp and spiders are so extremely in regard to how intense suffering is experienced? It's this inconsistency that is the biggest hole I see in your model as I understand it. Again, it would be one thing if you were arguing all these animals have nearly identical PAIS, be it the same as humans or lower. I would disagree, but it would seem we were both just choosing a different way of measuring PAIS. But you have presented some specific proportional anomalies which to me seem to be a special pleading fallacy unless some specific explanation is given. As far as I can tell (and I can be wrong), you have not given criteria for measuring PAIS that would consistently lead to the extremely out-of-proportion estimations that you have given, i.e. some set of traits that is proportionally different between humans, rabbits, mantis shrimp, and spiders in anything like the way you have given them, such that the difference between rabbits and humans is proportionally less than that between mantis shrimp and rabbits, and the difference between spiders and mantis shrimp is very, very extremely greater than the difference between any combination of the other three. The few traits you have suggested seem to be ad hoc explanations of only one difference between a single one-on-one combination of those 4 creatures, but lead to contradictions if used to try to explain all four.
Ironically, my suggestion of intelligence seems to come closest to matching your model, except that, depending on one's exact measurement of the intelligence of each species, it might put mantis shrimp a little ahead of rabbits instead of behind, would probably slightly increase the difference between humans and both mantis shrimp and rabbits, and would greatly lessen the difference between spiders and both mantis shrimp and rabbits. Even with all those adjustments, the numbers given by my suggestion of intelligence seem to come closer to matching your numbers than the traits you have proposed: brain size and brain architecture inferred from biological taxonomy (e.g. phylum) or otherwise from the closeness of the most recent common ancestor.
Favorite Philosopher: Eckhart Aurelius Hughes Signature Addition: View official OnlineBookClub.org review of In It Together: The Beautiful Struggle Uniting Us All

View Bookshelves page for In It Together: The Beautiful Struggle Uniting Us All
#90156
Thanks for coming up with the cool acronyms. I don't have time to respond in detail, just some further thoughts:

You place way too much weight on my comments about evolutionary relatedness. I only used that to influence PS; it has nothing to do with EIS, or only indirectly. And I used it to influence PS because small changes to something that definitely is conscious are less likely to make it stop being conscious than large changes. It's really not important; you can ignore the part about evolutionary relatedness.

The Dawkins article I cited argues against there being a correlation between intelligence and suffering (at least after a certain level of intelligence). You still haven't addressed that.

If adult humans have PAIS 100 (as it makes a lot of sense to define it), note that this isn't the maximum. A blue whale might well have a PAIS of about 200, assuming a PS of 99.9% and an EIS of 200. You're most likely aware of this; I just wanted to highlight it. We don't know whether humans are at the top of that scale, and some reasons suggest they might not be.

You said I gave mantis shrimps a PS of 97%; that's too high for me. I only said that I'd give them a non-negligible probability of being sentient, 25% maybe. You say you read a lot of impressive stuff about mantis shrimps and now you think they might have a higher PAIS than rabbits... I doubt that; rabbits have an impressive range of behavior as well. (Spiders, by the way, do not.)

Difference between spiders and mantis shrimps: range of behavior, SIZE. The mass of a spider is tens of thousands of times less than that of a mantis shrimp! Does that not at all concern you?

You write: "I think intelligence would be a better probabilistic measure than brain-size, let alone physical size, which includes all the whale blubber, for a variety of reasons. Brain-size and intelligence correlate with each other of course, but intelligence may reflect the proportions better, since we can imagine the issue of size becoming exponentially greater at smaller scales, where the efficiency of brain use becomes moot because size limits the power of even a super-efficiently used brain. Where a brain is so small that it has an exponentially limiting effect on consciousness, wouldn't it also have a proportionally limiting effect on intelligence? So where we can measure intelligence in some way more accurate than estimating it from brain-size, I think intelligence gives a more accurate probabilistic estimation of PAIS than brain-size, namely since intelligence gives us all the information of brain-size and then some."

I can agree with that, actually. But I think the correlation only works until you hit an upper ceiling, and after that, brain size becomes more important. And that's because the correlation is due to indirect reasons. As Dawkins explains, there's just no reason why intelligent creatures would have to feel more pain!

EDIT: Scott, a two-month-old infant is hardly more intelligent than an adult rabbit, right? So would you be willing to let thousands of infants undergo a highly painful procedure in order to avert the same procedure (the same external suffering) for one adult human? Do you think the difference here is as big as the difference between rabbit and spider? I have really strong intuitions that this isn't the case.
Favorite Philosopher: Peter Singer _ David Pearce
#90187
Wowbagger, I assume you are talking about this article. I will critique it now.
Dawkins wrote:The great moral philosopher Jeremy Bentham, founder of utilitarianism, famously said, 'The question is not, "Can they reason?" nor, "Can they talk?" but rather, "Can they suffer?"' Most people get the point, but they treat human pain as especially worrying because they vaguely think it sort of obvious that a species' ability to suffer must be positively correlated with its intellectual capacity. Plants cannot think, and you'd have to be pretty eccentric to believe they can suffer. Plausibly the same might be true of earthworms. But what about cows?
I think that is an amazingly well-presented and beautifully made point by Bentham and a great, well-written way for Dawkins to start his article. However, already in the rest of the paragraph I fear there is an implication that sentience, and especially the capacity to feel (as in consciously experience) pain, is black-and-white, as opposed to something that comes in degrees and varying intensities, which would call for different levels of sympathy and different numerical representations in a utilitarian equation.
Dawkins wrote:Nevertheless, most of us seem to assume, without question, that the capacity to feel pain is positively correlated with mental dexterity - with the ability to reason, think, reflect and so on. My purpose here is to question that assumption. I see no reason at all why there should be a positive correlation. Pain feels primal, like the ability to see colour or hear sounds. It feels like the sort of sensation you don't need intellect to experience.
I have a few points here:

1) This argument at this point relies heavily on what exactly is meant by the vague concept of 'experiencing pain'. One presumably doesn't need much intellect, or even much if any consciousness at all, to have the relatively mechanical system of nociception, such as in the typical fruit fly. But mere nociception -- the mechanics and basic outward behaviors of pain and the tell-tale quasi-reflexive action of which it consists -- is different from suffering as the conscious experience of pain by a being with a mind. This is also shown by his example of seeing color. The camera on my cell phone can see color, and react differently depending on the color inputs. What Bentham presumably meant by suffering is not the ability to observe and react to would-be painful stimuli the way my camera sees color, but actually having a mental, conscious existence: feeling not in the mechanical sense in which even a camera phone takes in input data, but in the emotional, mental sense with which we are all familiar. That other sense does seem to require intellect to experience. A camera needn't be smarter than an insect to record or even react -- parallel to nociception -- to streams of light, but to have the actual mental experience of conscious displeasure or conscious pleasure does seem to require smarts, OR at least requires that which is also required for smarts, OR is itself the cause of sophisticated intelligence.

2) Dawkins said he was going to argue against a correlation, but he seems to be only arguing against causality.
Dawkins wrote:I can see a Darwinian reason why there might even be be a negative correlation between intellect and susceptibility to pain. I approach this by asking what, in the Darwinian sense, pain is for. It is a warning not to repeat actions that tend to cause bodily harm. Don't stub your toe again, don't tease a snake or sit on a hornet, don't pick up embers however prettily they glow, be careful not to bite your tongue. Plants have no nervous system capable of learning not to repeat damaging actions, which is why we cut live lettuces without compunction.
The idea here seems to be that mother nature would treat dumber animals with a more Pavlovian training mechanism while leaving smarter animals to figure things out for themselves. For instance, humans might not need to feel much pain when shoving their hand in a fire, because we are smart enough not to do it: we realize the long-term effects even if our hand is so filled up with anesthetic that we don't feel anything. There are a few problems with this idea. For one, while I've personified evolution as mother nature, evolution isn't so clever. Our pain mechanism might not be needed, but evolution isn't so quick to get rid of such a metaphorical appendix. Secondly, while the immediate pain reflex may not be as needed in smarter animals, that doesn't mean it is unhelpful, let alone detrimental. Thus, we would need some kind of evolutionary reason for backtracking from pain after we gained our intelligence. Moreover, Dawkins still seems to be talking mostly about nociception, which seems irrelevant to Bentham's question about [the conscious, emotional experience of] suffering. In that way, sure, it makes sense that dumber creatures would have a more reflexive, unconscious aversion to destructive stimuli, and more consciously intelligent creatures would have a more consciously emotional aversion to the occurrence of what they have, at least in large part, consciously determined not to want, at least if evolution were enough of an intelligent designer to backtrack itself into such a system. But even so, that would seem to still support the correlation between the degree of intensity of conscious suffering and intelligence, while less conscious, more reflexive nociception would be useful for dumb creatures like fruit flies.
Dawkins wrote:At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt.
This "conclusion" doesn't seem to fit with the rest of the article. In the article, Dawkins hasn't actually named and rebutted any reason why intelligence would correlate with the intensity of conscious suffering. Nor has he explained how he tried to find such a proposed reason and came up empty, which might let him induce that no such reason exists. He just concludes all of a sudden that none exists. Thus, his so-called conclusion seems more like a case of ipse dixit. But here are some possible reasons, the very existence of which seems to prove Dawkins's so-called conclusion wrong:

1) The degree, intensity and/or sophistication of consciousness may contribute to the formation of intelligence. When we observe or conclude intelligence, particularly by noticing intelligent, planned, and/or apparently thoughtful behavior, its actual source may be consciousness. One specific kind of evidence for this might be that we find the behavior of a sleep-walking person is not as intelligent as that of a waking person, even though it provides for basic actions, reflexes, interaction with the world via sensory input, and learned habits like cooking or driving a car. Of course, I think most of us are familiar with the feeling of getting lost in our thoughts while cooking or even driving while awake, while our subconscious or unconscious takes over, which doesn't happen when something is more intellectually demanding, like playing chess. It would be great if we could get a sleep-walking person to take an IQ test or try to play chess well.

2) Intelligence may set a limit on the degree, intensity and/or sophistication of consciousness, meaning the existence of intelligence at least enables and presumably contributes to the formation of consciousness. Indeed, it's hard to imagine someone with as active a mindful, mental existence as ours who lacks very basic reasoning skills. In animals, the limits of information processing that in part determine their intelligence may also limit or reduce their mind's capacity to create concepts, interact with a culture, and come up with a sense of self, a theory of mind, and all the other abstract workings of a mind that make one conscious at all and that, where conscious, would determine the degree, intensity and/or sophistication of consciousness. One way to illustrate the plausibility if not probability of this is to consider the question of when, during the embryonic, fetal, birthing and growing-up stages, a human first becomes conscious at all, and then how over time that consciousness increases in degree, intensity and/or sophistication. (This thought experiment is a lot more poignant if one doesn't believe in some supernatural dualism that magically and apparently unnoticeably injects a soul into a baby at some arbitrary point.) Presumably, a single sperm cell or even zygote, being the molecular equivalent of a plant in that it doesn't have even the intellectual capacity of an ant, would not be conscious. As it first develops a nervous system and appears to have reflexive actions to stimuli, we might think it is as conscious as a fruit fly, if that is conscious at all. We would seem hardly to think that a fruit fly or embryo is conscious in degree, intensity and/or sophistication even comparable to a mantis shrimp, because of the lack of sophistication in the power of the brain. In fact, without the mechanisms of intellect, it seems impossible to even suppose consciousness exists.
And where a sort of 'embryonic' semi-consciousness exists, even if not coming into play until after birth, any significant degree or intensity of it would seem impossible without the powers of intelligence and intellect to learn and analyze concepts, even if at a very rudimentary and non-verbal level.

Please note, 1 and 2 needn't be seen as mutually exclusive, as there could be a mutually causal connection between intelligence and consciousness, and between the degree, intensity and/or sophistication of consciousness and the degree and sophistication of intelligence.

3) Intelligence and the degree, intensity and/or sophistication of consciousness may have a mutual cause and thus be correlated. At the very least, I think we can see it is clear that sophisticated brain-functioning, even where more robotic than conscious, is a prerequisite -- proportionally -- to the capacity and degree to which one can have either intelligence or consciousness. On this token, many of Dawkins's points become especially moot. Dawkins seems to want to reject only the idea that intelligence is required to causally lead to the formation of suffering, but he doesn't seem to acknowledge that correlation needn't be directly causal. Intelligence, especially intelligent behavior, is perhaps more measurable than these other various ideas, so we use it to make probabilistic estimates of those other traits which presumably go hand-in-hand with intelligence. Despite Dawkins's implications to the contrary, it doesn't need to be the case (even though it may be) that intelligence creates the evolutionary reason to develop suffering; it could be that intelligence happens to be caused by or emerge from the same things that create consciousness. This could perhaps include things like more efficient brain-functioning even in a non-intelligent, more raw data-processing sense, like what basic man-made calculators do. This non-intelligent brain power itself could simultaneously be the catalyst of both intelligence and consciousness. Indeed, part of our evolutionary history seems to be that our intelligence spiked because we started eating more protein after learning to kill meaty animals with rocks and such.
Yet eating more protein and having a bigger brain may not have immediately made us much more intelligent, particularly in terms of genetic predisposition, but instead created the evolutionary stepping stone for both intelligence and consciousness, which, even if not directly causally related, could each make separate contributions to evolutionary fitness and could then be developed using this new potential created by bigger brains created by the influence of protein. In other words, in this one hypothetical example, the sudden existence of extra brain power, arrived at even through diet rather than genetic mutation and not yet being used for intelligence or consciousness, could cause both to develop evolutionarily over generations, as natural selection narrows down the use of this extra brain space/power, which could go towards anything, be it consciousness, intelligent behavior, or whatever else brains do that isn't one of those two things.
Dawkins wrote:Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.
This last sentence is a non-sequitur in regard to the article in which it is placed. It simply doesn't follow from Dawkins's other claims or arguments. It's also very disagreeable, and so dangerous as to adamantly contradict the self-described "moral" opinions and values of almost everyone. In fact, through argumentum ad absurdum, any valid argument that led to such a strongly seemingly false conclusion would be evidence that at least one of the premises of that argument must be false. It'd be one thing to encourage people to be nicer to animals and to consider their suffering more, but to describe these practices as completely equal would seem to provide warrant for the craziest of animal rights activists who would slaughter human farmers and human cops on the scene to free some chickens from a factory farm, as if they were fighting with lethal violence to free slaves from the South in the time leading up to the Civil War in the United States. Anti-abortionists used the same type of extremist, black-and-white, unreasonable rhetoric, which led to things like the murder of Dr. Tiller. To say that torturing or slaughtering an intelligent, thinking, church-going doctor is equivalent to abortion is dangerous. Dangerous doesn't mean false, but it does mean Dawkins needs a strong argument to support a claim that is dangerous, disagreeable, and against common sense, mainstream cultural values, common opinion, and current common factual beliefs, which he has not at all provided. It would be one thing if Dawkins had said, without argument, that harming cows is nearly as "morally" bad as doing the same to humans, or is bad for the same reasons but to a lesser degree, but to call it equivalent without argument is a dangerous way to unwittingly argue against himself by starting an argumentum ad absurdum.
I would now be making a fallacious appeal to popularity, except that Dawkins is speaking in "moral" language, which is itself apparently a vague appeal to some kind of common cultural, subjective, or emotional state of affairs. Seeing as he offers no argument to back up the claim, that's all I can say: this seeming conclusion is actually the starting point of an unfinished argumentum ad absurdum. I wish Dawkins would enlighten us as to the rest of it, so we could see which set of premises, assuming the argument is valid, he is collectively disproving with this would-be conclusion.
#475241
This article brings up a crucial yet often overlooked aspect of ethical living. While many of us focus on plant-based choices, we rarely question how wild ecosystems function without human intervention. It’s a good reminder that balance in nature is far more complex than our lifestyle labels suggest. Personally, I try to stay mindful in other areas too—whether it's sustainable eating or choosing eco-conscious travel options like a luxury yacht charter that minimises environmental impact.
#475371
Wowbagger wrote: May 1st, 2012, 7:49 am This thread is mainly addressed to people who have internalized that speciesism is wrong: people who believe that there's no justification for giving a being less ethical consideration simply because it looks different, has a different number of legs, or has different DNA. People who don't share this view are welcome to comment as well, but they might have a hard time accepting the arguments that follow, because they'll be quite counterintuitive.

In River out of Eden, Richard Dawkins wrote the following:

“The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, others are running for their lives, whimpering with fear, others are being slowly devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst and disease. It must be so.”
[My emphasis]

His "it must be so" is merely a factual observation regarding the nasty to truth of how evolution works. But what if we interpret "it must be so" as an ethical statement? Isn't it a preposterous thing to say? Why should the world have to be full of suffering? There is no plan in nature, no ultimate good. Nature is all about the successful copying of genes, not about the well-being of individuals. Gaia theory views, or group selectionism (by which I don't mean multilevel selection) are completely wrong and have been disproven. These views that romanticize nature have been exploiting the human bias for wishful thinking. As for why these views are wrong, the long answer can be found in books on evolutionary biology, i.e. The Selfish Gene.

Nature is full of suffering. I wasn't aware of this, but when people make nature documentaries for TV, they often cut out scenes because they're too cruel. The audience might enjoy the lion chasing the zebra (after all, the Romans also greatly enjoyed the cruel fights in the Colosseum), but once the chase is over, who really enjoys watching the zebra twitching, still alive, while being eaten, sometimes for twenty minutes or more? There is a video of a wildebeest being killed by hyenas; it's really disturbing. A woman filming says towards the end, "Oh, at least one's going for its neck now, thank god for that!" I don't think I've ever heard a more ironic statement.

So again, nature is full of suffering. And by the principle of anti-speciesism, that suffering *matters*. If you oppose factory farms, you should also oppose what happens to animals in the wild. Suffering doesn't become less bad just because it has natural causes. On some farms, animals have a life much better than similar species have in the wild. They still suffer unnecessarily from all the procedures that come with exploiting animals for profit, so this isn't an argument that can be used against veganism. But it can be used against the view that nature is all good. We've been indoctrinated to believe that, but it's not true.

Humans also suffer from status quo bias. We like things the way they are, no matter what way they are. For those interested, scroll down to the podcast by Nick Bostrom, which gives tips on how we can spot status quo bias and how it can be countered. I believe that status quo bias plays a big role in why people are extremely reluctant to approve of intervention in nature.

If human beings on the planet are dying from thirst, hunger, and disease, we want to help them. If a street dog attacks a group of toddlers, we'd instantly kill the dog in order to save the toddlers. Why should any of that change when the victims are non-human animals?

Forget all the technical difficulties for the moment. If there were a magic button that would instantly turn nature into a vegan paradise, where predators eat vegan food (or magically created artificial meat), and where overpopulation is not an issue, would you press the button?

My hope is that vegans and vegetarians, and even meat eaters, will answer "yes", even though the issue might seem counterintuitive at first.

We are already intervening in nature on a massive scale. Some of that intervention is destruction driven by human greed. That's not what we want, even though a case could be made that non-existence is better than a life of suffering. Vegans would accept that for farm animals; after all, if the world goes vegan, there will be far fewer cows, pigs, and chickens. So this argument could be brought in support of habitat destruction. However, it would be counterproductive to advocate something like that, because opposition would be huge. There are also empirical difficulties: rainforest destruction leads to more global warming, and global warming might well increase the overall amount of sentient life on earth (because there'll be more energy ready to be converted). Instead of getting rid of nature, we should focus on making nature more humane (a very ironic word in this context).

The other way in which we already massively intervene in nature is conservation biology. Conservation is a harmful ideology. If only two pandas were left in the world, and you had to choose between violently killing the pandas and violently killing hundreds of deer, would the pandas be worth more just because they belong to the species "panda"? A species doesn't have interests; only individuals have interests. Only individuals can suffer and be harmed. Conservation biology cares only about the abstract concept of a "species", not about the actual individuals. Yes, there is indirect value in biodiversity and "healthy" (which means cruel and full of suffering) ecosystems, and in the pleasure it gives us humans to know that there are cute pandas. But let's not confuse intrinsic value with indirect value, and let's understand that human aesthetic preferences in no way compare to the vital interests of animals, such as the interest in not being eaten alive.

We have now explored the main aspects of the problem. What can be done about it? It seems important to replace conservation biology with compassionate biology. In the following text, David Pearce gives an outline for this project. I have already discussed some of what he wrote there, but he mentions many more details and additional arguments; the text is strongly recommended. He outlines how populations could be micro-managed through immunocontraception and even more advanced technology, and he also talks about reprogramming predators.

As of now, our knowledge of ecosystems isn't extensive enough, and our technologies aren't powerful enough, to enable us to compassionately intervene in nature on a large scale. No one is proposing to rush ahead with something that is going to mess things up. Whenever I talk about this with people, they bring up all kinds of practical objections. Practical objections are there to be taken care of. Let's influence science and politics to give more funding to studies in compassionate ecology. We need an international research project.

Technology grows exponentially, and on some not implausible estimates, we're only decades away from the point where we could make it happen. In the meantime, the most important thing that can be done is to spread awareness, mainly among vegans, vegetarians, and rationalists who read Dawkins's books on evolutionary biology. Here, by the way, is a video of Dawkins interviewing Peter Singer, the moral philosopher who popularized the term "speciesism".
Maybe the time isn't yet right to also mention this to people who don't care about animals, as they're only looking for (more) reasons to consider vegans insane. But note how it is often meat eaters who bring up wild animals, as a reductio ad absurdum. By that, I don't mean the idiotic "but lions eat meat too" (ducks "rape", and chimps do all kinds of nasty stuff); I mean the: "Should we feed foxes tofu? Should we save gazelles from cheetahs?"

Some people object on the grounds that large-scale intervention in nature is "playing god", and that that's somehow bad. They say it is "arrogant to press our human standards on nature". But the argument doesn't work. Once we have the technological means, we will be "playing god" either way, whether we do something or not. With power and knowledge comes responsibility. If we decide not to do anything, we'll be implicitly judging nature "ideal". We'd be forcing our moral standards on all the animals in nature in the sense that we'd let their suffering go on forever even though we could change it. The idea that "pressing our morality on them" is bad can only work if nature is somehow good. As I argued earlier, this view is simply mistaken, but unfortunately very common. The "arrogance objection" is also common because humans are indeed arrogant, or rather selfish, in that they're destroying the planet. Many people who care about animals and the environment (see the "and" here? Isn't it incompatible to care about both, at least if the idea is to leave the "environment" untouched?) have a low regard for their fellow human beings. But even if you hate humans because you love animals, if the arguments I put forward here are sound, humanity happens to be the only hope for wild animals. (Except maybe a life-ending asteroid.)

I recently saw a sticker saying "Veganism: 51 billion animals like this" (with a Facebook thumb-up symbol). If vegans care about wild animals too, the number of animals liking it will go up into the trillions! The scale of the issue is huge, beyond imagination.

If you agree with the main arguments here, please consider spreading the meme to philosophical-minded people. Comments and criticism are very welcome, even though I fear that the length of this post might scare people away...
Vegetarian here - nice post.

It's heartening to know there are people trying to grasp not just the morality but also practical implications of how we do and could manipulate animal welfare in nature.

For me the practicalities and implications of messing with such a vastly intricate ecosystem are waaay beyond my ken, and so we should tread carefully. But seeing as you give me a Magic Button, I'd press it for the reasons you lay out so thoroughly.
