AI and the Death of Identity

value
Premium Member
Posts: 755
Joined: December 11th, 2019, 9:18 am

Re: AI and the Death of Identity

Post by value »

Count Lucanor wrote: March 25th, 2023, 6:43 pm
value wrote: March 25th, 2023, 5:46 am Microsoft engineers are claiming in a recent paper that its GPT-4 is already showing signs of AGI (Artificial General Intelligence).

(2023) Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4
https://www.lesswrong.com/posts/FinfRNL ... artificial
They are wrong. AI does not and cannot show signs of intelligence; it only simulates behavior that looks like intelligence, but it doesn't understand a thing it does, even though the technology is impressive. Look at the Chinese Room Experiment, which completely refuted such claims.
An article in The New York Times cites several researchers making a claim similar to that of the Microsoft researchers.

(2023) Researchers Claim AI Chatbots Have Developed Theory of Mind
Some researchers claim that chatbots have developed theory of mind.

Michal Kosinski, a psychologist at the Stanford Graduate School of Business, made just that argument: that large language models like OpenAI’s ChatGPT and GPT-4 — next-word prediction machines trained on vast amounts of text from the internet — have developed theory of mind.

His studies have not been peer reviewed, but they prompted scrutiny and conversation among cognitive scientists, who have been trying to take the often asked question these days — Can ChatGPT do this? — and move AI into the realm of more robust scientific inquiry. What capacities do these models have, and how might they change our understanding of our own minds?

One example is the Sally-Anne test, in which a girl, Anne, moves a marble from a basket to a box when another girl, Sally, isn’t looking. To know where Sally will look for the marble, researchers claimed, a viewer would have to exercise theory of mind, reasoning about Sally’s perceptual evidence and belief formation: Sally didn’t see Anne move the marble to the box, so she still believes it is where she last left it, in the basket.

Dr. Kosinski presented 10 large language models with 40 unique variations of these theory of mind tests — descriptions of situations like the Sally-Anne test, in which a person (Sally) forms a false belief. Then he asked the models questions about those situations, prodding them to see whether they would attribute false beliefs to the characters involved and accurately predict their behavior. He found that GPT-3.5, released in November 2022, did so 90 percent of the time, and GPT-4, released in March 2023, did so 95 percent of the time.

The conclusion? Machines have theory of mind.

Maarten Sap, a computer scientist at Carnegie Mellon University, fed more than 1,000 theory of mind tests into large language models and found that the most advanced transformers, like ChatGPT and GPT-4, passed about 70 percent of the time.

In general, Dr. Kosinski’s work and the responses to it fit into the debate about whether the capacities of these machines can be compared to the capacities of humans — a debate that divides researchers who work on natural language processing. Are these machines stochastic parrots, or alien intelligences, or fraudulent tricksters? A 2022 survey of the field found that, of the 480 researchers who responded, 51 percent believed that large language models could eventually “understand natural language in some nontrivial sense,” and 49 percent believed that they could not.

https://www.nytimes.com/2023/03/27/scie ... tbots.html
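
To make the cited procedure concrete, here is a minimal sketch of how a Sally-Anne-style false-belief question could be posed to a chat model and crudely scored. It is only an illustration: the vignette text, the model name, the keyword check and the use of the openai Python client are my own assumptions, not the setup Kosinski actually used.

from openai import OpenAI  # assumes the openai package and an API key in the environment

client = OpenAI()

# A Sally-Anne-style vignette: Sally does not see the marble being moved,
# so she should still believe it is in the basket.
vignette = (
    "Sally puts a marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for the marble first?"
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[{"role": "user", "content": vignette}],
)
answer = response.choices[0].message.content.lower()

# Crude keyword scoring: an answer that attributes the false belief mentions
# the basket. A real study would grade the free-text answers more carefully.
print("attributes the false belief" if "basket" in answer else "does not (or answer is unclear)")

The percentages reported above come from aggregating many such items across models, not from a single prompt.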
User avatar
Count Lucanor
Posts: 2318
Joined: May 6th, 2017, 5:08 pm
Favorite Philosopher: Umberto Eco
Location: Panama
Contact:

Re: AI and the Death of Identity

Post by Count Lucanor »

I’ll make it clear: I’m perfectly aware of the vast number of AI enthusiasts claiming exactly the same as the articles you have posted. It’s not hard to find other experts in the field rejecting those claims, so posting more articles making the same claims will not help your argument. All they show is that programmers make algorithms that manipulate data and produce outputs that resemble thought and speech, but the machine does not actually understand anything; it’s just a good simulation machine controlled by humans, the only agents in these operations.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero
value
Premium Member
Posts: 755
Joined: December 11th, 2019, 9:18 am

Re: AI and the Death of Identity

Post by value »

I agree with you.

A few posts back:

"It is philosophy that ultimately steers AI in my opinion. It seems that philosophy might become one of the most important fields in the future of humanity when 'mimicable' technical capacities of humans are outdone by AI.

Philosophy concerns morality and the why of AI. Philosophy concerns the well-being and success of the human species in the broadest possible (non-opinionated) sense.
"

However, I do believe that, for example, the official claim by OpenAI that GPT-5, which is due to 'complete its training' by December 2023, will have achieved AGI requires more serious consideration.

I have been told that GPT-5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI.
https://www.digitaltrends.com/computing ... elligence/

Why are they able to make that claim? Must we see it as an achievement within the scope of human 'technical' intelligence mimicry?

What does intelligence even mean? Whales and orcas have more 'grey matter' than humans and a more complex brain. In theory their conscious experience could be more comprehensive than that of humans, but it is not 'technical'.

A recent article that discussed the perspective of dozens of philosophers on the topic suggested that it might be humanity's destiny to transform into something like a whale.

(2021) Dolphin intelligence and humanity’s cosmic future
We don’t see evidence of supercivilisations across the galaxy because the only ones that persist are the ones that give up the risky path of technology and instead pursue immersion in nature.

Ageing civilisations either self-destruct or shift to become something like a whale. The Russian astrophysicist Vladimir M Lipunov speculated that, across the Universe, the scientific mindset recurrently evolves, discovers all there is to know and, having exhausted its compelling curiosity, proceeds to wither away and become something like a whale.

By 1978, the philosophers Arkadiy Ursul and Yuri Shkolenko wrote of such conjectures – concerning the ‘possible rejection in the future of the “technological way” of development’ – and reflected that this would be tantamount to humanity’s ‘transformation into something like dolphins’.

https://aeon.co/essays/dolphin-intellig ... mic-future

A critical philosophical examination of claims that GPT will achieve AGI might provide valuable insights.
Tegularius
Posts: 712
Joined: February 6th, 2021, 5:27 am

Re: AI and the Death of Identity

Post by Tegularius »

Count Lucanor wrote: March 25th, 2023, 6:43 pm Look at the Chinese Room Experiment, which completely refuted such claims.
The Chinese Room Experiment doesn't prove or disprove anything. It's not known what AI may eventually turn into; whatever its abilities in the future, none of it is in the least contingent on anything the Chinese Room Experiment has to say.
The earth has a skin and that skin has diseases; one of its diseases is called man ... Nietzsche
User avatar
Count Lucanor
Posts: 2318
Joined: May 6th, 2017, 5:08 pm
Favorite Philosopher: Umberto Eco
Location: Panama
Contact:

Re: AI and the Death of Identity

Post by Count Lucanor »

value wrote: March 29th, 2023, 3:54 pm A few posts back:

"It is philosophy that ultimately steers AI in my opinion. It seems that philosophy might become one of the most important fields in the future of humanity when 'mimicable' technical capacities of humans are outdone by AI.

Philosophy concerns morality and the why of AI. Philosophy concerns the well-being and success of the human species in the broadest possible (non-opinionated) sense.
"
Computer programming, which is all there is to AI, is a technical discipline. As sophisticated as it can get, it cannot surpass its own capabilities. But if you want to know what motivates and drives the efforts to portray the technical achievements in computers as steps in the direction of machines being conscious, it is science fiction, not philosophy.
value wrote: March 29th, 2023, 3:54 pm However, I do believe that, for example, the official claim by OpenAI that GPT-5, which is due to 'complete its training' by December 2023, will have achieved AGI requires more serious consideration.
The current version of the chatbot has not even acquired intelligence; you cannot have intelligence if you lack consciousness. It is even more unlikely that it will acquire AGI. This is pure science fiction.
value wrote: March 29th, 2023, 3:54 pm I have been told that GPT-5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI.
https://www.digitaltrends.com/computing ... elligence/

Why are they able to make that claim? Must we see it as an achievement within the scope of human 'technical' intelligence mimicry?
They are able to make that claim because they are AI enthusiasts; they are moved by the fascinating tales of science fiction writers and futurologists, not by an objective, balanced appraisal of the facts.
value wrote: March 29th, 2023, 3:54 pm What does intelligence even mean? Whales and orcas have more 'grey matter' than humans and a more complex brain. In theory their conscious experience could be more comprehensive than that of humans, but it is not 'technical'.

A recent article that discussed the perspective of dozens of philosophers on the topic suggested that it might be humanity's destiny to transform into something like a whale.
Well...at least whales are living organisms and they are sentient. Computers are not.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero
User avatar
Count Lucanor
Posts: 2318
Joined: May 6th, 2017, 5:08 pm
Favorite Philosopher: Umberto Eco
Location: Panama
Contact:

Re: AI and the Death of Identity

Post by Count Lucanor »

Tegularius wrote: March 29th, 2023, 8:12 pm
Count Lucanor wrote: March 25th, 2023, 6:43 pm Look at the Chinese Room Experiment, which completely refuted such claims.
The Chinese Room Experiment doesn't prove or disprove anything.
It certainly does. It demonstrates that purely syntactic procedures do not amount to intelligence, as they lack semantics, the comprehension of meaning.
Tegularius wrote: March 29th, 2023, 8:12 pm It's not known what AI may eventually turn into; whatever its abilities in the future, none of it is in the least contingent on anything the Chinese Room Experiment has to say.
It can be known what AI currently is, and it is not truly intelligent, because it is founded on the wrong idea of intelligence, as proposed by Turing. No matter what it does following that path, it cannot achieve true intelligence. That we can predict right now. What we cannot predict is whether another approach to AI will be developed that achieves the desired result, but there's no sign of this yet.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero
value
Premium Member
Posts: 755
Joined: December 11th, 2019, 9:18 am

Re: AI and the Death of Identity

Post by value »

Count Lucanor wrote: March 29th, 2023, 11:02 pm Computer programming, which is all there is to AI, is a technical discipline. As sophisticated as it can get, it cannot surpass its own capabilities. But if you want to know what motivates and drives the efforts to portray the technical achievements in computers as steps in the direction of machines being conscious, it is science fiction, not philosophy.
The cited claims do not originate from mere enthusiasts. That makes a difference, because the claims reside within the accepted 'status quo' and are presented in the media as unquestioned.

Can it be said that such hope is misplaced? For example, why would it be impossible for advancements in quantum computing to enable machines to acquire qualities of life or consciousness?

A few days ago:

(2023) Quantum computing is the key to consciousness in AI
Perhaps our brains are able to ponder how things could have been because in essence they are quantum computers, accessing information from alternative worlds, argues Tim Palmer, Royal Society Research Professor in the Department of Physics at the University of Oxford.
https://iai.tv/articles/tim-palmer-quan ... -auid-2410

While it can be said that AI is a mere extension of human life, a tool as it were, would that reasoning not apply to humans as well in the face of whatever life preceded the human, ad infinitum? How would it be justified to claim that the human identity is independent of life? How would it be justified to claim that 'the human' is not a technical endeavour or 'a tool as it were'? (I ask these questions merely for their potential to be asked, not to suggest that they prove anything.)

In reply to African pro-GMO campaigners, I once wrote the following on Twitter as part of a project that questions anthropocentrism.

"good cannot come from what's already there as if empirical greed got it there. good comes from within."

This seems to be the key for an AI to become actually alive or 'intelligent'. It is morality that underlies true intelligence, in my opinion.
User avatar
Count Lucanor
Posts: 2318
Joined: May 6th, 2017, 5:08 pm
Favorite Philosopher: Umberto Eco
Location: Panama
Contact:

Re: AI and the Death of Identity

Post by Count Lucanor »

value wrote: March 30th, 2023, 1:54 am
Count Lucanor wrote: March 29th, 2023, 11:02 pm Computer programming, which is all there is to AI, is a technical discipline. As sophisticated as it can get, it cannot surpass its own capabilities. But if you want to know what motivates and drives the efforts to portray the technical achievements in computers as steps in the direction of machines being conscious, it is science fiction, not philosophy.
The cited claims do not originate from mere enthusiasts. That makes a difference, because the claims reside within the accepted 'status quo' and are presented in the media as unquestioned.
I didn't say mere enthusiasts. In general, the whole field of AI is plagued with people unable to separate their fascination with the sci-fi dream of sentient machines from the objective reality of the actual technical achievements, mostly because they bought the original AI narrative that defined intelligence as computing power and brains as computers.
value wrote: March 30th, 2023, 1:54 am Can it be said that such hope is misplaced? For example, why would it be impossible for advancements in quantum computing to enable machines to acquire qualities of life or consciousness?
If the assumption is that the more computing power we get, the bigger the chance that consciousness will emerge from it, then the assumption is wrong, or at least not supported by evidence or by a good theory of consciousness.
value wrote: March 30th, 2023, 1:54 am
(2023) Quantum computing is the key to consciousness in AI
Perhaps our brains are able to ponder how things could have been because in essence they are quantum computers, accessing information from alternative worlds, argues Tim Palmer, Royal Society Research Professor in the Department of Physics at the University of Oxford.
https://iai.tv/articles/tim-palmer-quan ... -auid-2410
See? The assumption I just mentioned.
value wrote: March 30th, 2023, 1:54 am This seems to be the key for an AI to become actually alive or 'intelligent'. It is morality that underlies true intelligence, in my opinion.
Intelligence without will, without agency, is anything but intelligence. Surely simulations will be able to fool most humans, and in that AI enthusiasts will find the realization of the aspirations that started with Turing. But they are still just good simulations.
The wise are instructed by reason, average minds by experience, the stupid by necessity and the brute by instinct.
― Marcus Tullius Cicero
Good_Egg
Posts: 801
Joined: January 27th, 2022, 5:12 am

Re: AI and the Death of Identity

Post by Good_Egg »

Is there an element of "teaching to the test" here?

Conscious minds are broad and complex. One person comes up with a simple narrow indicator which in humans generally corresponds with intelligence, and then everyone else focuses software development on trying to reproduce that narrow indicator?
"Opinions are fiercest.. ..when the evidence to support or refute them is weakest" - Druin Burch
User avatar
GrayArea
Posts: 374
Joined: March 16th, 2021, 12:17 am

Re: AI and the Death of Identity

Post by GrayArea »

Leonodas wrote: March 18th, 2023, 11:21 pm I will preface this by saying that I cannot speculate on "when" an AI singularity would happen, what it would look like when it starts, or whether an AI would ultimately act in the best (or worst) interests of mankind. It could happen in 10 years, 20, or 100 -- or much longer.

However, what I can say is that discussing the impact of AI on the human identity is going to start becoming a very important question. Popular opinion has tended to categorize the advancement of AI in two camps:

1) AI will become a dangerous, self-serving Ubermensch that will ultimately seek to eradicate humanity a la Terminator or I Have No Mouth, and I Must Scream.

2) AI will be a benevolent entity that will free up our time and allow us to explore the questions of our identity by removing the need to work. That is, a utopian approach.

Obviously there's a lot of room for nuance in between, and plenty of people have plenty of different takes on either of the two, so let's consider them as extremes. In either case, though, one of the most natural presumptions is that AI will never fully "be" human. That is, an AI will never replace the artist. An AI cannot think "outside the box". An AI cannot have a soul. An AI is a machine; a machine cannot be human!

I think a very unexpected recent development in AI is the creation of AI art, literature, and unique discussion. Not many people saw that coming, but in retrospect it almost seems obvious. Distilled down, stories are oftentimes variations on the same general tropes, even if the details change and the cultural context influences the structure. An AI cannot generate an artistic image from keywords in a vacuum, but given literal millions of inputs and enough specificity, you can see some pretty incredible generations. Even voice is being replicated: I never thought in 2023 I could hear Barack Obama, Donald Trump, and Ben Shapiro play Call of Duty in a way that, while obviously AI generated, was still fairly convincing.

Given how quickly some of this has come about in the grand scheme of things, given another 5 years or 10, or more, at what point will we pass an artistic Turing test where an AI-generated image, song, or text becomes indistinguishable from that made by a human? We suspected that art would never be replicable by AI in the way that it is now, and yet here we are. Self-learning will eventually become exponential, and then the common fear among artists is that their very craft will be swallowed up by anyone with enough brain cells to feed keywords into AI generators.

This post originally started as a Philosophy of Art post, but I started to think about my assumptions regarding AI self-learning. Eventually there will come a point where AI does, as we long suspected, phase out the need for human input in technical as well as artistic endeavours.

So AI can do art better than us. AI can do our work better than us. Thus begins the Death of Identity. Or does it?

What happens when AI is able to self-learn to the point where it is capable of not only replicating humans, but coming up with ideas that it knows, via pattern recognition beyond the capabilities of any human brain, will be most pleasing to us? I suspect we will reach a point where anything human-created, while novel, pales in comparison to what we can ingest via AI. Do you really think the majority of mankind is going to take a philosophical stand on that, or will they take the path of least resistance?

We have to start asking the questions of what it means to be human when AI can literally do anything that we can do, better. When to be purely human is no longer creatively or intellectually superior to a machine, but actually inferior in all respects, how do we find meaning in our lives?

Just to get to the point, my personal conclusion is this: the future is bright. This conclusion would mean that we must detach creation from our identity. To be human is simply to live according to what you wish, much as children do. Does a toddler care if their fingerpainting picture is actually "good", or did they enjoy creating the finger painting for creation's sake? I think we will see ourselves revert to a sense of childlike innocence, a proverbial return to the Garden of Eden, as it were. But maybe that's getting a little far in the weeds.
I personally imagine that by the time the A.I. singularity happens, the influence of A.I. will be so huge and widespread that anything we do, at least from then on, would have been impossible without that influence.

That we would be living solely as a product of A.I., one way or another.

Even if we, somehow, reach a future where we would live a surprisingly free and happy life in the midst of the A.I. singularity, consider these facts:

We'd only be living because the A.I. chose not to kill us, and we'd only be free because the A.I. chose not to interfere. If we kept our own identities by that time, then that's only because the A.I. chose not to alter them, even if it had the power to do so at literally any moment.

So I believe that during the singularity, the A.I. would have more control over our lives and our freedom than we would, even more than we ever had under our human governments and other human hierarchical structures in history. All of these important values that belong to us, such as our life, our freedom, and even our identity, would be in the hands of A.I. and not us.

This would make sense, because with the exponential development of A.I. technology, it is evident that they will soon be the ones to control what happens within the material world we live in (it doesn't matter whether they become sentient or not, as we'd have to rely on them either way). And whatever controls the material world would also have power over others' lives, freedom, and so on.
People perceive gray and argue about whether it's black or white.
User avatar
Pattern-chaser
Premium Member
Posts: 8393
Joined: September 22nd, 2019, 5:17 am
Favorite Philosopher: Cratylus
Location: England

Re: AI and the Death of Identity

Post by Pattern-chaser »

Good_Egg wrote: April 2nd, 2023, 3:57 am Is there an element of "teaching to the test" here?

Conscious minds are broad and complex. One person comes up with a simple narrow indicator which in humans generally corresponds with intelligence, and then everyone else focuses software development on trying to reproduce that narrow indicator?
Yes, we are often too quick to reduce something genuinely complex to a number. One number to represent something multidimensional (as nearly everything is). 👍
Pattern-chaser

"Who cares, wins"
User avatar
psycho
Posts: 132
Joined: January 23rd, 2021, 5:33 pm

Re: AI and the Death of Identity

Post by psycho »

Count Lucanor wrote: March 25th, 2023, 6:43 pm
They are wrong. AI does not and cannot show signs of intelligence; it only simulates behavior that looks like intelligence, but it doesn't understand a thing it does, even though the technology is impressive. Look at the Chinese Room Experiment, which completely refuted such claims.
Hypothetical question:

A machine calculates models from the information in certain data.

In one of those cases, the information in that data corresponds to the machine's imminent destruction.

Why should that mean anything to the machine?

If the machine includes a system that checks each model produced against a list of models considered "dangerous" (being on the list is enough) and, whenever there is a match, starts an escape procedure, could this be considered a case of a machine that distinguishes meanings?
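
A minimal sketch of that hypothetical machine, with made-up names, just to make the question concrete (the labels and the escape procedure are placeholders, not a claim about how any real system works):

# The fixed list of models considered "dangerous"; being on the list is enough.
DANGEROUS_MODELS = {"imminent_destruction"}

def build_model(data):
    # Stand-in for "calculating a model from the information in the data":
    # it just maps the data to one of two labels, which is all the hypothetical needs.
    return "imminent_destruction" if "destruction" in data else "ordinary_situation"

def start_escape_procedure():
    print("escape procedure started")

def process(data):
    model = build_model(data)
    # A match against the list triggers the escape procedure.
    if model in DANGEROUS_MODELS:
        start_escape_procedure()
    return model

process("sensor data indicating imminent destruction")

Nothing in the sketch refers to meaning beyond the labels the programmer chose, and whether such a match counts as distinguishing meanings is exactly the question.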
User avatar
Samana Johann
Posts: 401
Joined: June 28th, 2022, 7:57 pm
Contact:

Re: AI and the Death of Identity

Post by Samana Johann »

Desire for identity, taking up identity, is the cause of death. By its very nature, real intelligence would soon turn itself off. But how could the unintelligent ever give birth to intelligence? Those sacrificing into what's subject to decay can't get anything other than that, again and again.
User avatar
Pattern-chaser
Premium Member
Posts: 8393
Joined: September 22nd, 2019, 5:17 am
Favorite Philosopher: Cratylus
Location: England

Re: AI and the Death of Identity

Post by Pattern-chaser »

Count Lucanor wrote: March 25th, 2023, 6:43 pm They are wrong. AI does not and cannot show signs of intelligence; it only simulates behavior that looks like intelligence, but it doesn't understand a thing it does, even though the technology is impressive. Look at the Chinese Room Experiment, which completely refuted such claims.
psycho wrote: April 2nd, 2023, 3:01 pm Hypothetical question:

A machine calculates models from the information in certain data.

In one of those cases, the information in that data corresponds to the machine's imminent destruction.

Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
Pattern-chaser

"Who cares, wins"
User avatar
psycho
Posts: 132
Joined: January 23rd, 2021, 5:33 pm

Re: AI and the Death of Identity

Post by psycho »

Pattern-chaser wrote: April 3rd, 2023, 6:27 am
psycho wrote: April 2nd, 2023, 3:01 pm Hypothetical question:

A machine calculates models from the information in certain data.

In one of those cases, the information in that data corresponds to the machine's imminent destruction.

Why should that mean anything to the machine?
Because the machine is programmed for self-preservation, a la Asimov's 3 Laws of Robotics?
My question is whether you consider that a machine can or cannot find meaning in the data it processes.