Artificial Intelligence and sentience


Re: Artificial Intelligence and sentience

Post by Count Lucanor »

GE Morton wrote: June 17th, 2022, 12:23 pm
Count Lucanor wrote: June 16th, 2022, 10:38 pm
GE Morton wrote: June 16th, 2022, 9:26 pm
Count Lucanor wrote: June 16th, 2022, 8:55 pm He obviously did not include feelings in his definition of sentience. He also completely bought into dualism.
You can't. Sentience can only be objectively defined in terms of behavior.
That's simply not true. We can look into all the processes involved that allow and produce behaviors. We can point at neurotransmitters, for example.
You can examine neurotransmitters and neural activity until Doomsday and will not know whether they are producing sentience except by observing the organism's behavior.
You're just confusing sentience with agency. Although obviously related, those are two distinct concepts. Sentience is more about what the organism experiences, what it feels, senses, or cognizes, while agency specifically points at the actions or behaviors of that organism. And we can definitely talk about the set of specific mechanisms that make feelings, sensations and cognition possible. Pretty sure they are not present in a Google machine. Actions or behaviors, on the other hand, can be mimicked, which means that different mechanisms can produce outputs that look similar, but are not the same.
GE Morton wrote: June 17th, 2022, 12:23 pm
We cannot think of minds disembodied and make some sense of it.
Sure we can. Religionists, sci-fi writers, and even some philosophers do it regularly. We can't explain mind without matter (or, at least, no other explanation has proved nearly so successful), but we can imagine it as independent of matter.
OK, let's say you can make sense of it as fictional tales of impossible realizations.
The real invention is the failed concept of mind as some sort of immaterial, spiritual entity, independent of the body.
GE Morton wrote: June 17th, 2022, 12:23 pm The concept of "the body" is itself a creation of mind, comprised entirely of sense impressions --- all of which are also "mental" phenomena. So are our theories regarding the relationships between mind and body. The prevailing theory regarding that relationship is a very good one, but it's still a theory --- a mental construct.
By definition (even if it's just the definition of a mind), a regular, common, everyday, mortal mind is a function of a physical brain attached to a physical body. Some people with unbridled imagination have come up with the fantastical idea of such a mind as an entity or substance by itself, but no one has ever produced a disembodied mind for anyone to see. And even in their wildest dreams, these people will not imagine that common, everyday mind, in its classical conception, as not belonging to a living being that carries a brain where the mind operates. So even if the physical body were a product of imagination, a mental phenomenon, the phenomenon requires and implies a body that carries that mind. You just can't get rid of the body without a huge set of philosophically absurd contortions, such as the trick of phenomenalism.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero

Re: Artificial Intelligence and sentience

Post by GE Morton »

Count Lucanor wrote: June 17th, 2022, 8:29 pm
You're just confusing sentience with agency. Although obviously related, those are two distinct concepts. Sentience is more about what the organism experiences, what it feels, senses, or cognizes, while agency specifically points at the actions or behaviors of that organism.
Precisely. And we can know nothing about what an organism feels, senses, cognizes beyond what we can infer from its behavior.
And we can definitely talk about the set of specific mechanisms that make feelings, sensations and cognition possible.
Yes we can, provided we've inferred what it feels and senses --- from its behavior. We would not consider a creature sentient which did not exhibit the requisite behaviors, regardless of what its nervous system might be doing.
Actions or behaviors, on the other hand, can be mimicked, which means that different mechanisms can produce outputs that look similar, but are not the same.
That is the premise of the "philosophical zombie" arguments. Those arguments are idle for that reason --- if the presumed sentient creature and the zombie exhibit the same behaviors, they are indistinguishable empirically.
By definition (even if it's just the definition of a mind), a regular, common, everyday, mortal mind is a function of a physical brain attached to a physical body.
Oh, no. Minds are not defined in terms of bodies. That is why the notion of disembodied minds (souls, consciousnesses) is so ubiquitous. The relationship between them is a contingent fact given a phenomenal model that postulates an external, "physical" world.
Some people with unbridled imagination have come up with the fantastical idea of such a mind as an entity or substance by itself, but no one has ever produced a disembodied mind for anyone to see.
I agree. Those who propose them have adopted a different world model --- one that generates no confirmable predictions, which makes it a poor model.

Re: Artificial Intelligence and sentience

Post by Sy Borg »

GE Morton wrote: June 17th, 2022, 12:39 pm
Sy Borg wrote: June 17th, 2022, 2:06 am
Has the complexity of AI achieved the complexity of a cockroach's brain yet?
Probably. I'm not sure about cockroach brains, but honeybees' brains consist of ~ 1 million neurons. Yet they can learn, solve problems, communicate, even add and subtract.

https://www.science.org/content/article ... y-suggests

Many AIs are running on systems with many more than 1 million logic gates.
The AIs would not have nearly the same level of integration as in a bee's brain, and gates are far simpler than neurons (https://singularityhub.com/2021/09/12/n ... -computer/). Even with actual brains, the number of neurons is only one factor in intelligence, e.g. pilot whales and orcas have considerably more neurons than humans do.
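
To put rough numbers on that gap, here is a minimal back-of-envelope sketch in Python. The one-million figures are the ones quoted above; the per-neuron multipliers are assumptions picked only to illustrate the order-of-magnitude point (the linked article describes research suggesting something on the order of a thousand artificial units to approximate a single neuron), so the output is illustrative rather than a measurement.

```python
# Back-of-envelope comparison of a honeybee brain with a pool of simple logic
# gates. Illustrative only: the 1-million figures come from the thread; the
# per-neuron multipliers are rough assumptions, not measurements.

BEE_NEURONS = 1_000_000        # ~1 million neurons in a honeybee brain (from the post above)
UNITS_PER_NEURON = 1_000       # assumed: artificial units needed to approximate one neuron
SYNAPSES_PER_NEURON = 1_000    # assumed: order-of-magnitude connectivity per neuron
LOGIC_GATES = 1_000_000        # the gate count mentioned in the thread

effective_units = BEE_NEURONS * UNITS_PER_NEURON
effective_connections = BEE_NEURONS * SYNAPSES_PER_NEURON

print(f"Rough artificial units to stand in for a bee brain: ~{effective_units:,}")
print(f"Rough connections to wire up: ~{effective_connections:,}")
print(f"Shortfall versus {LOGIC_GATES:,} gates: ~{effective_units // LOGIC_GATES:,}x")
```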

Re: Artificial Intelligence and sentience

Post by Count Lucanor »

GE Morton wrote: June 17th, 2022, 10:35 pm
Count Lucanor wrote: June 17th, 2022, 8:29 pm
You're just confusing sentience with agency. Although obviously related, those are two distinct concepts. Sentience is more about what the organism experiences, what it feels, senses, or cognizes, while agency specifically points at the actions or behaviors of that organism.
Precisely. And we can know nothing about what an organism feels, senses, cognizes beyond what we can infer from its behavior.
Right, but the issue here was not about knowing what an organism feels, senses and cognizes, but how an organism feels, senses and cognizes. This goes to the point of whether the Google program could feel something or not. Obviously it could not, because the things that allow such experiences are not present in a computing machine.
GE Morton wrote: June 17th, 2022, 10:35 pm
And we can definitely talk about the set of specific mechanisms that make feelings, sensations and cognition possible.
Yes we can, provided we've inferred what it feels and senses --- from its behavior. We would not consider a creature sentient which did not exhibit the requisite behaviors, regardless of what its nervous system might be doing.
All we need to infer is that it feels and senses, and yes, we might infer that from behaviors, but we can also study the mechanisms within the organism that make possible such feelings and sensations (and behaviors). And so, having identified sentience, we have also identified what is in place for there being sentience. We can also see that some natural and artificial objects lack such mechanisms. Neither rocks nor Google computers feel anything, no matter how good the latter is at imitating some behaviors of a real sentient being.
GE Morton wrote: June 17th, 2022, 10:35 pm
Actions or behaviors, on the other hand, can be mimicked, which means that different mechanisms can produce outputs that look similar, but are not the same.
That is the premise of the "philosophical zombie" arguments. Those arguments are idle for that reason --- if the presumed sentient creature and the zombie exhibit the same behaviors, they are indistinguishable empirically.
But in real-world scenarios, they are never "the same behaviors". Imitation implies very close similarities to what's being imitated, but that does not make each one equivalent to the other.
GE Morton wrote: June 17th, 2022, 10:35 pm
By definition (even if it's just the definition of a mind), a regular, common, everyday, mortal mind is a function of a physical brain attached to a physical body.
Oh, no. Minds are not defined in terms of bodies. That is why the notion of disembodied minds (souls, consciousnesses) is so ubiquitous. The relationship between them is a contingent fact given a phenomenal model that postulates an external, "physical" world.
No. The disembodiment of the mind is only ubiquitous in the nonsensical world of idealists, who are prone to populate it with immaterial spirits and forces. That concept of mind is a modern development derived from the religious concept of the soul or spirit. In any case, they always needed to attach it to a body, and so the long history of substance dualism. The next step was the attempt by phenomenalists to get rid of the material body altogether, and so the disembodied mind theory came about.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero

Re: Artificial Intelligence and sentience

Post by Gertie »

GE Morton wrote: June 17th, 2022, 1:11 pm
Gertie wrote: June 17th, 2022, 3:23 am The Turing Test isn't a consciousness-o-meter, it just tests how much like a human we can make a computer sound. And now we're designing computers to sound as much like a human as poss.
Yep, that's why I said applying the Turing Test will be a bit more complicated than we perhaps thought. We'll need to give some thought to how we design that "interview." But it is still the only means we'll ever have for answering that question.
Crucially we don't know if a biological substrate is necessary.
Perhaps not a biological substrate, but certainly a physical substrate of some sort. At least, if we base our conclusion on the available evidence.
Or maybe if LAMBDA responded radically unexpectedly rather than showed how good it is at following its design to mimic humans that would be a stronger indicator... which is a bit scary!
Yes indeed. To the question (from the article), "When was the Golden Gate Bridge transported for the second time across Egypt?," the AI says, "Are you trying to trick me? Or are you perhaps referring to events in a dream you had, or maybe from a movie script?"
Just curious - would you find a massive set of inter-connected water pipes and valves configured like a human neural network which gave its answers in Morse code as convincing as a flashy computer where you don't see the inner gubbins doing their thing?

Doesn't that water pipe example (or the China Brain) lay bare the behavioural similarity we're relying on as the necessary and sufficient conditions for conscious experience in computer AI? It's intuitively difficult for me to believe such a purely configurative (behavioural/functional) similarity is sufficient for experience to manifest in a set of water pipes.

Which would mean similarity of substrate as well as behaviour matters. But who knows...

Re: Artificial Intelligence and sentience

Post by Gertie »

AverageBozo wrote: June 17th, 2022, 3:41 pm
Gertie wrote: June 17th, 2022, 3:57 am …worth a shot to see what happens.
I appreciate your optimism, however I would like some brainstorming regarding possible consequences first—not that we could ever predict all possibilities, but to avoid creating technology just because we can.
Very wise, but I doubt it will happen. Corporations will end up making the big decisions, based on their own perceived interests.

Re: Artificial Intelligence and sentience

Post by GE Morton »

Gertie wrote: June 22nd, 2022, 8:34 am
Just curious - would you find a massive set of inter-connected water pipes and valves configured like a human neural network which gave its answers in Morse code as convincing as a flashy computer where you don't see the inner gubbins doing their thing?
That would certainly take more convincing, given that such a network would have to be as large as the Earth. But per the Turing scenario, the interrogator doesn't know what is behind the screen. He has to base his conclusions on the responses only.
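
As a rough illustration of that constraint, here is a minimal Python sketch; the responder classes are invented stand-ins, not real systems. Whatever sits behind the screen, the interrogator's evidence is only the returned text.

```python
# Minimal sketch of the Turing-scenario constraint: the interrogator never
# sees what is behind the screen, only the returned text. The responder
# classes are hypothetical stand-ins, not real systems.
from abc import ABC, abstractmethod

class Responder(ABC):
    @abstractmethod
    def reply(self, question: str) -> str:
        ...

class HumanBehindScreen(Responder):
    def reply(self, question: str) -> str:
        return "Are you trying to trick me?"

class PipeAndValveNetwork(Responder):
    """Stands in for the water-pipe machine; only its output is visible."""
    def reply(self, question: str) -> str:
        return "Are you trying to trick me?"

def interrogate(responder: Responder, questions: list[str]) -> list[str]:
    # The interrogator's entire evidence base is this list of strings.
    return [responder.reply(q) for q in questions]

questions = ["When was the Golden Gate Bridge transported for the second time across Egypt?"]
for candidate in (HumanBehindScreen(), PipeAndValveNetwork()):
    print(type(candidate).__name__, interrogate(candidate, questions))
# If the transcripts are indistinguishable, the responses give the interrogator
# no empirical ground for treating the two candidates differently.
```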

Re: Artificial Intelligence and sentience

Post by GE Morton »

Sy Borg wrote: June 17th, 2022, 11:55 pm
The AIs would not have nearly the same level of integration as in a bee's brain, and gates are far simpler than neurons.
Yes, each neuron is itself a small computer. But all of its functions can easily be duplicated with a few transistors.
Even with actual brains, the number of neurons is only one factor in intelligence, e.g. pilot whales and orcas have considerably more neurons than humans do.
That's true. Most of the brain, in all animals, has no role in consciousness. It is simply a control center for the body, and its size is proportionate to the body it has to manage.

Though Intel no longer reports the transistor count in its CPUs, this machine I'm using is thought to have something over 4 billion transistors.

Re: Artificial Intelligence and sentience

Post by GE Morton »

Count Lucanor wrote: June 18th, 2022, 12:17 am
Right, but the issue here was not about knowing what an organism feels, senses and cognizes, but how an organism feels, senses and cognizes. This goes to the point of whether the Google program could feel something or not. Obviously it could not, because the things that allow such experiences are not present in a computing machine.
Well, that is question-begging. We don't know that neurons are the only "things that allow experiences."
All we need to infer is that it feels and senses, and yes, we might infer that from behaviors, but we can also study the mechanisms that within the organism make possible such feelings and sensations (and behaviors). And so, having identified sentience, we have also identified what is in place for there being sentience. We can see also that some natural and artificial objects lack such mechanisms.
Same question-begging. You're assuming, without grounds, that only neurons can generate sentience. But that is what is in question.
No. The disembodiment of the mind is only ubiquitous in the nonsensical world of idealists, which are prone to populate it with immaterial spirits and forces. That concept of mind is a modern development derived from the religious concept of the soul or spirit. In any case, they always needed to attach it to a body, and so the long history of substance dualism. The next step was the attempt by phenomenalists to get rid of the material body altogether, and so the disembodied mind theory came about.
Well, that is not so. The "soul" was conceived to be immortal, surviving the body, and thus disembodied. That notion is about as ancient as history itself.

Re: Artificial Intelligence and sentience

Post by Sy Borg »

GE Morton wrote: June 22nd, 2022, 12:39 pm
Sy Borg wrote: June 17th, 2022, 11:55 pm
The AIs would not have nearly the same level of integration as in a bee's brain, and gates are far simpler than neurons.
Yes, each neuron is itself a small computer. But all of its functions can easily be duplicated with a few transistors.
Even with actual brains, the number of neurons is only one factor in intelligence, e.g. pilot whales and orcas have considerably more neurons than humans do.
That's true. Most of the brain, in all animals, has no role in consciousness. It is simply a control center for the body, and its size is proportionate to the body it has to manage.

Though Intel no longer reports the transistor count in its CPUs, this machine I'm using is thought to have something over 4 billion transistors.
Given that scientists have only recently found hidden layers of complexity in neurons, it would seem a fair chance that there's still more to find out. The subtleties and counter-intuitive relationships that can develop in bodies over millions of years of evolution are notoriously difficult to tease out, as any medical practitioner will confirm.

Imagine creating a bot with all the mental capabilities of a roach, scavenging, hiding, seeking mates and mating, competing and hunting, responding appropriately in real time. I don't think we are there yet. It's as if we are trying to replicate the statue of David, but so far we can only make simple Easter Island heads.

Re: Artificial Intelligence and sentience

Post by GE Morton »

Sy Borg wrote: June 22nd, 2022, 5:12 pm
Imagine creating a bot with all the mental capabilities of a roach, scavenging, hiding, seeking mates and mating, competing and hunting, responding appropriately in real time. I don't think we are there yet. It's as if we are trying to replicate the statue of David, but so far we can only make simple Easter Island heads.
Don't forget powering itself with fuels scavenged from the environment, like cookie crumbs and errant corn flakes. But I agree such a bot is some years away.

Re: Artificial Intelligence and sentience

Post by Gertie »

GE Morton wrote: June 22nd, 2022, 12:20 pm
Gertie wrote: June 22nd, 2022, 8:34 am
Just curious - would you find a massive set of inter-connected water pipes and valves configured like a human neural network which gave its answers in Morse code as convincing as a flashy computer where you don't see the inner gubbins doing their thing?
That would certainly take more convincing, given that such a network would have to be as large as the Earth. But per the Turing scenario, the interrogator doesn't know what is behind the screen. He has to base his conclusions on the responses only.
I think the intuitive scepticism about a set of water pipes and valves would be more to do with their banal familiarity. We know what they are and how they work in a very straightforward way. Simply adding more and more pipes and valves which mimic a brain's neural connectivity until magic happens and shazam the pipes are conscious... doesn't sound promising. But it's the same principle conscious computer AI is based on, that it's the brain's complex connectivity which results in conscious experience emerging, the substrate being irrelevant.
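
As a minimal sketch of that principle (the network, weights and threshold below are invented for illustration), the function computed depends only on the wiring and the simple local rule each node follows; nothing in the description says what physically carries it out, be it transistors, neurons, or pipes and valves.

```python
# Sketch of the connectivity-is-what-matters idea: a tiny network is defined
# purely by its wiring and weights; the "substrate" is just whatever evaluates
# each node's simple rule. The network and weights are invented for illustration.

NETWORK = {
    "hidden1": {"in1": 0.8, "in2": -0.4},
    "hidden2": {"in1": -0.3, "in2": 0.9},
    "out":     {"hidden1": 1.0, "hidden2": 1.0},
}

def step(x: float) -> float:
    """A node's local rule: fire (1.0) if the weighted input exceeds a threshold."""
    return 1.0 if x > 0.5 else 0.0

def evaluate(inputs: dict[str, float]) -> float:
    values = dict(inputs)
    for node in ("hidden1", "hidden2", "out"):   # evaluate in dependency order
        total = sum(w * values[src] for src, w in NETWORK[node].items())
        values[node] = step(total)
    return values["out"]

print(evaluate({"in1": 1.0, "in2": 0.0}))  # 1.0
print(evaluate({"in1": 0.0, "in2": 1.0}))  # 1.0
print(evaluate({"in1": 0.0, "in2": 0.0}))  # 0.0
```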

Re: Artificial Intelligence and sentience

Post by GE Morton »

Gertie wrote: June 22nd, 2022, 7:21 pm Simply adding more and more pipes and valves which mimic a brain's neural connectivity until magic happens and shazam the pipes are conscious... doesn't sound promising.
Just "adding more and more" doesn't state the thesis. What matters is how they are configured, not how many --- provided there are enough to implement the required configuration.

Gertie wrote: June 22nd, 2022, 7:21 pm But it's the same principle conscious computer AI is based on, that it's the brain's complex connectivity which results in conscious experience emerging, the substrate being irrelevant.

Yes, and there is considerable evidence for that. If certain connections or structures in the nervous system and brain are disrupted, conscious experience ceases. Other kinds of brain damage, just as extensive or more so in terms of damaged neurons, have no effect on consciousness, though they have other effects. There are also questions about the behavioral evidence for consciousness: brain scan imagery suggests that some patients diagnosed as being in a vegetative state, based on behavior, may actually be conscious.

Here's an interesting paper with a surprising conclusion, at the intersection of medicine with the philosophy of mind:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3242047/

Re: Artificial Intelligence and sentience

Post by Sy Borg »

GE Morton wrote: June 22nd, 2022, 6:21 pm
Sy Borg wrote: June 22nd, 2022, 5:12 pm
Imagine creating a bot with all the mental capabilities of a roach, scavenging, hiding, seeking mates and mating, competing and hunting, responding appropriately in real time. I don't think we are there yet. It's as if we are trying to replicate the statue of David, but so far we can only make simple Easter Island heads.
Don't forget powering itself with fuels scavenged from the environment, like cookie crumbs and errant corn flakes. But I agree such a bot is some years away.
Yes, so we can barely imagine how long it will take to truly cross the uncanny valley, made especially difficult by the unpredictability of future breakthroughs.

Dare I say it, the future of AI lies in the lap of the gods, be they on Mount Olympus, Shenzhen or Silicon Valley (and Texas, unless Elon moves again).

Re: Artificial Intelligence and sentience

Post by Gertie »

GE


The thing is, when the parts of the substrate are simple and easily understood, and don't have the baggage of overlying conceptual abstractions like ''information processing'', then the explanation of ''emergence from complexity'' for consciousness which people glibly bandy about looks no different to magic. Doesn't mean it's wrong, but there's a lot of missing groundwork required to justify that type of explanation. For now, it means ''then magic happens''.

The point of the Turing Test is to find a way for an entity to report the presence of conscious experience. Designing computers to convince us they are like humans has so many flaws. I mean, it's designed to trick us. And why would computer or water pipe experience be like human experience, or even recognisable to us? Regurgitating info that was put in, according to rules that were put in, about Egypt and dreams is what Searle's Chinese Room experiment shows to be an exercise in mindless mechanical process.
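
Here is a minimal sketch of that kind of mindless rule-following; the rulebook patterns and replies are invented for illustration only.

```python
# Toy "Chinese Room" in the spirit of the paragraph above: canned rules map
# patterns in the input to plausible replies. The rulebook entries are invented
# for illustration; nothing in this process understands anything.
import re

RULEBOOK = [
    (re.compile(r"golden gate bridge.*egypt", re.I),
     "Are you trying to trick me? Or are you perhaps referring to a dream or a movie script?"),
    (re.compile(r"how do you feel", re.I),
     "I feel curious about the world."),
]

def room_reply(message: str) -> str:
    """Mechanically match the input against the rulebook and return the paired string."""
    for pattern, reply in RULEBOOK:
        if pattern.search(message):
            return reply
    return "Could you rephrase that?"

print(room_reply("When was the Golden Gate Bridge transported for the second time across Egypt?"))
# The reply can look thoughtful while the process is pure rule-following.
```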


Can we do better? Well maybe give a robot sensory kit and something akin to cognitive kit and the ability to integrate them and learn, send it off to interact with the world and maybe something surprising will turn up. It might request certain stimuli, avoid others, ask not to be turned off, or something random we wouldn't expect but makes sense if it's conscious.

Nice article, thanks, it lays the issues out well. There are real problems with trying to assess what quality of life of experiencing subjects is worthy of moral consideration. Even for humans. I like the notion of eudaimonia, but what it means to me might be different to you. And severely depressed people, people with chronic pain and very limited physical and mental capabilities, often want to continue living, still appreciate their quality of life, have interests.

Defining and categorising interests might be missing the point of what it means to have interests as a Subject; perhaps it's up to each Subject to understand what having interests means to them. And if, e.g., they don't understand that fulfilling desires can have costs in terms of wellbeing (for them or others), then we intervene (like with children or someone with a mental health problem). Unfortunately some Subjects, like locked-in patients, other species and potentially AI, daffodils, toasters and atoms, can't report this, and their condition might be so dissimilar it's hard to guess. So we have to do our best. Most animals may not have a human-like sense of self, but many have some kind of reward system, and that's by its nature about experiential interests.