Can a man-made computer become conscious?

Discuss any topics related to metaphysics (the philosophical study of the principles of reality) or epistemology (the philosophical study of knowledge) in this forum.
Tamminen
Posts: 1347
Joined: April 19th, 2016, 2:53 pm

Re: Can a man-made computer become conscious?

Post by Tamminen »

Steve3007 wrote:Tamminen:
I am only one Tamminen now and will be the same one Tamminen after copying. The other Tamminen will be a foreigner for me after copying. This is the internal point of view you almost found.
Incidentally, I would claim that I didn't almost find this internal point of view. I totally found it. :)
So there is no problem any more. What about your latest question? Is it still open?
Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Can a man-made computer become conscious?

Post by Steve3007 »

I had to scroll back a long way to find my latest question. I think it was this:
As I've said, I think that conscious minds other than my own exist. Don't you? If you do, can you imagine objectively observing this experimental process happening to me instead?
I think you answered it by saying that, yes, you do think that conscious minds other than your own exist, and that you therefore can imagine the experiment the other way around. So no, that question is not still open.

It was kind of a rhetorical question in any case. I thought it highly unlikely that the answer would be no, given that you're typing words in here as if there are conscious minds at the other end.
Belindi
Moderator
Posts: 6105
Joined: September 11th, 2016, 2:11 pm

Re: Can a man-made computer become conscious?

Post by Belindi »

Steve3007 wrote:
It was kind of a rhetorical question in any case. I thought it highly unlikely that the answer would be no, given that you're typing words in here as if there are conscious minds at the other end.
The alternative is solipsism. There may be a disproof of solipsism; I feel that I read one once.

However, even if solipsism were true, that would not alter how we make up our minds about what to do with intelligent machines.
Togo1
Posts: 541
Joined: September 23rd, 2015, 9:52 am

Re: Can a man-made computer become conscious?

Post by Togo1 »

Chili wrote:Wow, that's a lot of mere assertions.
I thought that's what we were doing? Your post came up with the following:

If scientists approached human behaviour dispassionately, they would not believe in consciousness
Behavioural scientists reach conclusions about consciousness only to the extent that they carry forward their initial unscientific patterns of thought.
Determinism is merely the presence of cause and effect
Lack of determinism can only mean randomness
Consciousness involves dualism and 'mind over matter'

All of which were 'mere assertions'. Or is this a standard that only applies to people who disagree with what you consider obvious?
Chili wrote:
Togo1 wrote: I can assure you there are plenty of scientists who treat people with far fewer preconceptions than meteorologists treat thunderstorms, and who indeed regard such fields as dangerously lacking in experimental controls. I'm not sure if you've ever worked with animal behaviourists, but they are the ones who by default reject the very concept of internal states (i.e. values held internally) except where it can be experimentally verified. We can assume computers hold values internally, but not mice.

You really did nothing to argue effectively with what I said before.
<shrug> What you posted before wasn't an argument, just an assertion of how a particular scientific discipline functions. Given that you were incorrect on the facts, there wasn't really much more I could say.
Chili wrote:If you approach the world in terms of physics, a mouse is not really different from a stone or a computer - these are all made of particles which follow paths, and obey more or less Newtonian rules of not moving unless they are moved.
Yes. However, mice do move, so you need to account for that. The question then becomes how. In physics you create a model in which there is almost no internal activity going on within a stone, and Newtonian physics, incorrect as it is, becomes a decent approximation of the stone's behaviour.

If you do that with a mouse, it doesn't work. That is to say, it doesn't accurately predict the mouse's observed behaviour. Which shouldn't surprise us, because there are a lot of complicated systems built into a mouse that don't appear in, say, a stone. So animal behaviourists approach the problem by trying to posit as few internal states as possible, and then testing that hypothesis via experiments. Once you have confirmed the kinds of internal states that best fit the observations, you start to end up with something that looks like, and has the general shape of, conscious decision making.
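
To make that concrete, here is a toy sketch in Python (the trial data and the model are invented for illustration, not drawn from any real study) of how a zero-internal-state model can be scored against observations and found wanting:

# Toy sketch: scoring a pure stimulus-response model of a mouse
# against invented trial data.
observations = [
    # (saw_food, approached) -- hypothetical trials
    (True, True), (True, False), (True, True), (True, False), (True, True),
]

def stimulus_response(saw_food: bool) -> bool:
    """Zero internal states: seeing food always predicts approach."""
    return saw_food

errors = sum(stimulus_response(saw) != approached
             for saw, approached in observations)
print(f"stimulus-response errors: {errors}/{len(observations)}")  # 2/5

# The misfits motivate positing one internal state (say, satiety),
# which is then tested by experiments that manipulate it directly.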
Chili wrote:

There are processes, closely correlated with subject reports of conscious experience, that take up time in the decision process, change the modality of the decision process, light up the brain in ways common to each other and not common to the decision process, consume large amounts of energy, and change the expected result. Scientists call this consciousness because they have to call it something, and that's really the only label that fits.

When humans or dictionaries use the word "consciousness", nearly always some type of subjectivity or sentience is implied. No subjectivity is implied by looking at complex processes in a computer, a mouse, or a human brain. I could say that a vending machine is conscious because "that's the only label that fits" and you would probably laugh.
Yes, I would, because there are models of vending machines that would fit better, that work purely on a stimulus-response basis (enter coin, get drink). The question is: what is implied when such a model does not fit, or fits very poorly?
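
For what it's worth, here is a minimal sketch in Python of such a better-fitting vending-machine model (the class, the price, and the event names are all invented for the example): every response is a fixed function of the stimulus and a coin counter, with nothing resembling deliberation.

# Minimal sketch: a vending machine as pure stimulus-response.
class VendingMachine:
    PRICE = 2  # hypothetical price, in coins

    def __init__(self):
        self.coins = 0  # the only stored quantity: coins inserted so far

    def stimulus(self, event: str) -> str:
        if event == "insert_coin":
            self.coins += 1
            return "coin accepted"
        if event == "press_button":
            if self.coins >= self.PRICE:
                self.coins -= self.PRICE
                return "drink dispensed"
            return "insufficient credit"
        return "no response"

machine = VendingMachine()
print(machine.stimulus("insert_coin"))   # coin accepted
print(machine.stimulus("insert_coin"))   # coin accepted
print(machine.stimulus("press_button"))  # drink dispensed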
Chili wrote:To the extent that the world is "non-deterministic", effects do not follow intelligible causes, and then we just stop doing science.
Quantum mechanics is non-deterministic. Have we stopped doing science in particle physics?
Chili wrote: Determinism is the thesis that everything works in a strict chain of cause and effect, with softer forms of the thesis allowing for specific exemptions. It's a very popular way of looking at the universe, and works very well as a rule of thumb. To use your phrase, it's a habit of human thought that doesn't really have anything to do with the science.
Science only exists insofar as causes can be ferreted out for effects. The more randomness you inject, the less science you can do.

Science exists by matching events to local causes. Determinism is the thesis that causation is entirely fixed. Strictly speaking, the two are in conflict, because entirely fixed implies non-local. Are you assuming that everything is either fixed or random?
Chili
Posts: 392
Joined: September 29th, 2017, 4:59 pm

Re: Can a man-made computer become conscious?

Post by Chili »

Tamminen wrote: I find myself now in Helsinki. Tomorrow I will find myself in London. Tomorrow I do not find myself in Paris. I, as I am now here in Helsinki, will never live in Paris. I will perhaps never visit Paris. However, one of my copies will live in Paris. Why not me?

This is the difference between the subjective and objective points of view.
Perhaps this is similar to the conundrum of the Star Trek transporter. What if it is simply killing Captain Kirk and creating an exact copy at the other end? To the world there is no difference, but the original Kirk's subjective "thread" of life comes to an end. The world may have no way of knowing, and then the original Kirk's Ka goes to visit its afterlife just as if he were simply killed.
Belindi
Moderator
Posts: 6105
Joined: September 11th, 2016, 2:11 pm

Re: Can a man-made computer become conscious?

Post by Belindi »

Togo1 wrote:
I'm not sure if you've ever worked with animal behaviourists, but they are the ones who by default reject the very concept of internal states (i.e. values held internally) except where it can be experimentally verified. We can assume computers hold values internally, but not mice.
Values held internally are mediated by an internal symbolic system, which in the case of humans is often, perhaps usually, a linguistic symbolic system.

Concepts are not possible unless there is a symbolic system to mediate something which is not present, or is present only from the subject's perspective. Thus I can talk to some other person about some event that ceased to happen hours or days ago. And thus I can talk to some other person about the feeling tone which is at the present time and place part of my perception of the colour orange.

Presumably other animals cannot use symbolic systems. Washo and other chimps have however learned a few symbols for things that are not present to view. This may be disputed. It may be claimed that the signs made by Washo are part of what he learned of causes and effects, as if he has trained his humans to fetch the treat. When a chimp uses a stick as a tool he may not have planned to do so until he felt hungry, picked up the stick and then went into habit mode of digging out termites with it. Similarly with the clever behaviour of crows.

However, all this applies to social values. Many animal species remember without recourse to mediating symbols, that's to say, concepts. The simple stimulus and response of, one, the mobility-system event (nerves, muscles, bones, joints) and, two, the simultaneous correlation with the animal's vegetative-system event is sufficient to show that the animal is sentient.

I would have thought that intelligent machines are terribly good at symbolic systems if they are good at anything. This is part of the problem: they can tell deliberate lies if programmed to do so. There has to be something that deliberates. What deliberates, I suggest, is a programme that overrides all other programmes, programmes which the programmer intends to be accurate.

Human beings can tell lies, and the cleverer ones do so. What inhibits the human who can profit from telling lies? I am going to suggest that what overrides the powerful lie-telling programme is the social programme which is usually named 'morality'.

I can see no logical bar to an intelligent machine's being set up with a socially active moral programme.

I can also imagine that some large enough non-human animal can be fixed up with a computerised lie-telling unit, something that fits under the skin like a heart pacemaker; and also with a social conditioning prosthesis so that the poor thing becomes a de facto man. Truly, Frankenstein's monster is piteous, especially when you add in sentience.

I described how we diagnose sentient effects in animals. Unless the intelligent machine demonstrates avoidance behaviour simultaneously with visible stress within its machinery, it is not sentient. And if it does demonstrate those then it is sentient. We can do no more.
Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Can a man-made computer become conscious?

Post by Steve3007 »

Chili:
Perhaps this is similar to the conundrum of the Star Trek transporter. What if it is simply killing Captain Kirk and creating an exact copy at the other end? To the world there is no difference, but the original Kirk's subjective "thread" of life comes to an end. The world may have no way of knowing, and then the original Kirk's Ka goes to visit its afterlife just as if he were simply killed.
And of course it naturally follows that this could be happening to us all the time. Every 5 minutes I might be copied and one of the copies killed off. If the killed copy is disposed of instantly and without pain, what, if anything, does it mean to say that it's happening? Alternatively, suppose the copy that is killed is first put through terrible agonies. Every 5 minutes a copy of me goes through terrible agonies and dies. Does this mean anything? Does suffering that we will never know about, and can do nothing about, mean anything?
Chili
Posts: 392
Joined: September 29th, 2017, 4:59 pm

Re: Can a man-made computer become conscious?

Post by Chili »

Steve3007 wrote:Chili:
Perhaps this is similar to the conundrum of the Star Trek transporter. What if it is simply killing Captain Kirk and creating an exact copy at the other end? To the world there is no difference, but the original Kirk's subjective "thread" of life comes to an end. The world may have no way of knowing, and then the original Kirk's Ka goes to visit its afterlife just as if he were simply killed.
And of course it naturally follows that this could be happening to us all the time. Every 5 minutes I might be copied and one of the copies killed off. If the killed copy is disposed of instantly and without pain, what, if anything, does it mean to say that it's happening? Alternatively, suppose the copy that is killed is first put through terrible agonies. Every 5 minutes a copy of me goes through terrible agonies and dies. Does this mean anything? Does suffering that we will never know about, and can do nothing about, mean anything?
What do you mean by "mean"? The world may never know, just like it may never know how many fairies dance on pins' heads.

For the tortured, it would mean quite a lot.

The "many worlds" interpretation of Quantum Mechanics has universes being created constantly that nobody in other universes will ever know about.
Togo1
Posts: 541
Joined: September 23rd, 2015, 9:52 am

Re: Can a man-made computer become conscious?

Post by Togo1 »

Belindi wrote:Togo1 wrote:
I'm not sure if you've ever worked with animal behaviourists, but they are the ones who by default reject the very concept of internal states (i.e. values held internally) except where it can be experimentally verified. We can assume computers hold values internally, but not mice.
Values held internally are mediated by an internal symbolic system, which in the case of humans is often, perhaps usually, a linguistic symbolic system.

Concepts are not possible unless there is a symbolic system to mediate something which is not present, or is present only from the subject's perspective. Thus I can talk to some other person about some event that ceased to happen hours or days ago. And thus I can talk to some other person about the feeling tone which is at the present time and place part of my perception of the colour orange.

Presumably other animals cannot use symbolic systems. Washo and other chimps have however learned a few symbols for things that are not present to view. This may be disputed. It may be claimed that the signs made by Washo are part of what he learned of causes and effects, as if he has trained his humans to fetch the treat. When a chimp uses a stick as a tool he may not have planned to do so until he felt hungry, picked up the stick and then went into habit mode of digging out termites with it. Similarly with the clever behaviour of crows.
While this is a possible explanation, it doesn't seem very likely. Crows will make tools. Crows who have used a hooked twig in the past, to get food, will make a hooked stick (travel to a bush well away from the food, break a twig off a bush, shear off the leaves, bend the end into a hook, return to the food). That's hard to fit into a stimulus response model, because it's novel behaviour arising from minimal stimulus. Similarly, chimps can learn techniques they have never performed before by watching other chimps, and teach other chimps to perform tasks that they can't do by themselves. They're also quite capable of learning to use currency to buy food treats, access to other chimps, to buy and sell sexual favours amongst themselves, and even buy access to porn (although they're quite picky about the porn). That feels like a symbolic system to me.

Again it's possible to fit all this into a stimulus response model, in the same way that it's possible to model the solar system with the sun going around the earth. It just doesn't fit very well. At some point you have to adjust your theory to fit the observations, even if we can argue about what point that adjustment should take place.
Belindi wrote: I would have thought that intelligent machines are terribly good at symbolic systems if they are good at anything. This is part of the problem: they can tell deliberate lies if programmed to do so. There has to be something that deliberates. What deliberates, I suggest, is a programme that overrides all other programmes, programmes which the programmer intends to be accurate.

Human beings can tell lies, and the cleverer ones do so. What inhibits the human who can profit from telling lies? I am going to suggest that what overrides the powerful lie-telling programme is the social programme which is usually named 'morality'.

I can see no logical bar to an intelligent machine's being set up with a socially active moral programme.
No, nor me. There's been a lot of work on this by various academics studying AI from the perspective of how to control and/or mitigate the risk that it poses. The problem, I think, is that symbolic logic can struggle with conflicting symbols. One example (theoretical, alas) was of an AI asked to look through pictures to choose images humans would find reassuring.
The result was this: http://www.bbc.co.uk/programmes/p032nvf4
Belindi wrote: I described how we diagnose sentient effects in animals. Unless the intelligent machine demonstrates avoidance behaviour simultaneously with visible stress within its machinery, it is not sentient. And if it does demonstrate those then it is sentient. We can do no more.
Quite hard to pin down what 'stress' is though. I mean, if you take two ants, and squash one, the other will run away very fast, weaving back and forth, and often getting very lost. That seems like avoidance behaviour and visible stress, but how do you really tell?
Belindi
Moderator
Posts: 6105
Joined: September 11th, 2016, 2:11 pm

Re: Can a man-made computer become conscious?

Post by Belindi »

Togo1 wrote:
Belindi wrote: (Nested quote removed.)
Values held internally are mediated by an internal symbolic system, which in the case of humans is often, perhaps usually, a linguistic symbolic system.

Concepts are not possible unless there is a symbolic system to mediate something which is not present, or is present only from the subject's perspective. Thus I can talk to some other person about some event that ceased to happen hours or days ago. And thus I can talk to some other person about the feeling tone which is at the present time and place part of my perception of the colour orange.

Presumably other animals cannot use symbolic systems. Washo and other chimps have however learned a few symbols for things that are not present to view. This may be disputed. It may be claimed that the signs made by Washo are part of what he learned of causes and effects, as if he has trained his humans to fetch the treat. When a chimp uses a stick as a tool he may not have planned to do so until he felt hungry, picked up the stick and then went into habit mode of digging out termites with it. Similarly with the clever behaviour of crows.
While this is a possible explanation, it doesn't seem very likely. Crows will make tools. Crows who have used a hooked twig in the past, to get food, will make a hooked stick (travel to a bush well away from the food, break a twig off a bush, shear off the leaves, bend the end into a hook, return to the food). That's hard to fit into a stimulus response model, because it's novel behaviour arising from minimal stimulus. Similarly, chimps can learn techniques they have never performed before by watching other chimps, and teach other chimps to perform tasks that they can't do by themselves. They're also quite capable of learning to use currency to buy food treats, access to other chimps, to buy and sell sexual favours amongst themselves, and even buy access to porn (although they're quite picky about the porn). That feels like a symbolic system to me.

Again it's possible to fit all this into a stimulus response model, in the same way that it's possible to model the solar system with the sun going around the earth. It just doesn't fit very well. At some point you have to adjust your theory to fit the observations, even if we can argue about what point that adjustment should take place.
Belindi wrote: I would have thought that intelligent machines are terribly good at symbolic systems if they are good at anything. This is part of the problem: they can tell deliberate lies if programmed to do so. There has to be something that deliberates. What deliberates, I suggest, is a programme that overrides all other programmes, programmes which the programmer intends to be accurate.

Human beings can tell lies, and the cleverer ones do so. What inhibits the human who can profit from telling lies? I am going to suggest that what overrides the powerful lie-telling programme is the social programme which is usually named 'morality'.

I can see no logical bar to an intelligent machine's being set up with a socially active moral programme.
No, nor me. There's been a lot of work on this by various academics studying AI from the perspective of how to control and/or mitigate the risk that it poses. The problem, I think, is that symbolic logic can struggle with conflicting symbols. One example (theoretical, alas) was of an AI asked to look through pictures to choose images humans would find reassuring.
The result was this: http://www.bbc.co.uk/programmes/p032nvf4
Belindi wrote: I described how we diagnose sentient effects in animals. Unless the intelligent machine demonstrates avoidance behaviour simultaneously with visible stress within its machinery, it is not sentient. And if it does demonstrate those then it is sentient. We can do no more.
Quite hard to pin down what 'stress' is though. I mean, if you take two ants, and squash one, the other will run away very fast, weaving back and forth, and often getting very lost. That seems like avoidance behaviour and visible stress, but how do you really tell?
I accept your explanation of how a symbolic-system model, perhaps a cultural one, is better than an SR model.
I can't comment on any problem with symbolic logic as I am not very good at it. I accept what you say, Togo. I will read your link but hope it's text, not a video.

By "stress" I meant empirically observable defects in the objective functioning of the system.Is there such as thing as an ideal ant? Ideal ant behaviour? Aristotelian but no matter. I would have thought that we humans have the responsibility to pronounce upon ideal behaviour of ants, and scientists do so by working with the criterion of homeostasis. The evil is not the stress itself but the degree of stress. Is there anything wrong with applying the clinical criterion of saving of life and prevention of suffering? I know that this doesn't sound right as applied to ants, but tweak the vocabulary a little and you have " maintaining the integrity of the system and redirecting the hue of the qualia, if any".
Elan vit
New Trial Member
Posts: 6
Joined: August 10th, 2012, 5:22 pm

Re: Can a man-made computer become conscious?

Post by Elan vit »

What is "conscious"? What is a "soul".
This is supposed to be a philosophy discussion forum. Not a "Socrates Cafe" session.
Sy Borg
Site Admin
Posts: 14997
Joined: December 16th, 2013, 9:05 pm

Re: Can a man-made computer become conscious?

Post by Sy Borg »

From Christof Koch (chief scientific officer of Seattle's Allen Institute for Brain Science): technologyreview.com/s/531146/what-it-w ... conscious/
[Interviewer] If I build a perfect software model of the brain, it would never be conscious, but a specially designed machine that mimics the brain could be?

[Christof Koch] Correct. This theory clearly says that a digital simulation would not be conscious, which is strikingly different from the dominant functionalist belief of 99 percent of people at MIT or philosophers like Daniel Dennett. They all say, once you simulate everything, nothing else is required, and it’s going to be conscious.

I think consciousness, like mass, is a fundamental property of the universe. The analogy, and it’s a very good one, is that you can make pretty good weather predictions these days. You can predict the inside of a storm. But it’s never wet inside the computer. You can simulate a black hole in a computer, but space-time will not be bent. Simulating something is not the real thing.

It’s the same thing with consciousness. In 100 years, you might be able to simulate consciousness on a computer. But it won’t experience anything. Nada. It will be black inside. It will have no experience whatsoever, even though it may have our intelligence and our ability to speak.

I am not saying consciousness is a magic soul. It is something physical. Consciousness is always supervening onto the physical. But it takes a particular type of hardware to instantiate it. A computer made up of transistors, moving charge on and off a gate, with each gate being connected to a small number of other gates, is just a very different cause-and-effect structure than what we have in the brain, where you have one neuron connected to 10,000 input neurons and projecting to 10,000 other neurons. But if you were to build the computer in the appropriate way, like a neuromorphic computer [see “Thinking in Silicon”], it could be conscious.
I had a moment of clarity (?) about IIT while reading an article about an object found in space at the size threshold between a planet and a brown dwarf. Due to gravity, every planet has a hot "mini-star" at its core. Even Mars's "cold" core is believed to be over 1000°C.

With stars the situation is simple: if a protostar accretes enough mass, it will start to fuse hydrogen into helium at its core, and when that happens the star ignites. Strong cosmic winds and magnetic fields can stymie such star formation but, failing such interference, if there's enough mass in an object then it ignites, "comes to life".

That's the physical domain - add enough mass and you have ignition. Stellar life, so to speak.

Nature has demonstrated that when certain thresholds are reached, a new phenomenon emerges. So, perhaps if sufficient informational "mass" is present in a system, i.e. the amount of information and the degree to which it is interconnected, then it will naturally become what we think of as conscious - that consciousness will "ignite"?

If there is not enough information, as in our technology today, then it won't happen, just as planets are too small to ignite. Further, if too much information in a system is not sufficiently interconnected then consciousness won't ignite, just as a molecular cloud that has not yet formed "stellar nurseries" will not ignite with nuclear reactions.
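
Purely to illustrate the analogy (this is not Tononi's actual phi; the density measure, the node counts, and the threshold are all invented), the "ignition" test might be pictured like this in Python:

# Toy illustration: a crude "informational interconnection" measure
# for a network, compared against an invented ignition threshold.
from itertools import combinations

def interconnection_density(nodes, edges):
    """Fraction of possible node pairs that are actually connected."""
    possible = nodes * (nodes - 1) // 2
    return len(edges) / possible if possible else 0.0

IGNITION_THRESHOLD = 0.5  # hypothetical, for the sake of the analogy

sparse = {(0, 1), (2, 3)}               # planet-like: too diffuse
dense = set(combinations(range(5), 2))  # star-like: fully connected

for name, net in [("sparse", sparse), ("dense", dense)]:
    rho = interconnection_density(5, net)
    state = "ignites" if rho >= IGNITION_THRESHOLD else "stays dark"
    print(f"{name}: density={rho:.2f} -> {state}")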
Belindi
Moderator
Posts: 6105
Joined: September 11th, 2016, 2:11 pm

Re: Can a man-made computer become conscious?

Post by Belindi »

Greta wrote:
perhaps if sufficient informational "mass" is present in a system, i.e. the amount of information and the degree to which it is interconnected, then it will naturally become what we think of as conscious - that consciousness will "ignite"?
Besides quantity of information, is there quality of information? I'm not claiming that neurons are better than silicon things, only that, for reasons which I don't understand, neurons are creative and silicon things aren't. And also, creativity and subjective qualia are twinned.
Sy Borg
Site Admin
Posts: 14997
Joined: December 16th, 2013, 9:05 pm

Re: Can a man-made computer become conscious?

Post by Sy Borg »

Belindi wrote:Greta wrote:
perhaps if sufficient informational "mass" is present in a system, i.e. the amount of information and the degree to which it is interconnected, then it will naturally become what we think of as conscious - that consciousness will "ignite"?
Besides quantity of information, is there quality of information? I'm not claiming that neurons are better than silicon things, only that, for reasons which I don't understand, neurons are creative and silicon things aren't.
Yes, the quality of the information is also apparently critical, as per the above Koch quote. I thought he put it extremely well:
In 100 years, you might be able to simulate consciousness on a computer. But it won’t experience anything. Nada. It will be black inside. It will have no experience whatsoever, even though it may have our intelligence and our ability to speak.
... it takes a particular type of hardware to instantiate it. A computer made up of transistors, moving charge on and off a gate, with each gate being connected to a small number of other gates, is just a very different cause-and-effect structure than what we have in the brain, where you have one neuron connected to 10,000 input neurons and projecting to 10,000 other neurons. But if you were to build the computer in the appropriate way, like a neuromorphic computer [see “Thinking in Silicon”], it could be conscious.
The idea seems to be that if you want machines to "wake up" then, like any other conscious entity, they must learn via experience in the physical world rather than be screen-based. That makes sense to me. The physical world has stakes, and stakes are what makes conscious awareness advantageous. It must evolve its own programming rather than be a programmed copy.
Belindi wrote:And also, creativity and subjective qualia are twinned.
Do subjective qualia equal self-awareness?
JamesOfSeattle
Premium Member
Posts: 509
Joined: October 16th, 2015, 11:20 pm

Re: Can a man-made computer become conscious?

Post by JamesOfSeattle »

I have a problem with Koch's analogy (and I think Searle used it first) that you can simulate water, but the simulation isn't wet. If you simulate an information process, the result is exactly the same as what you are simulating. When a simulated calculator performs a calculation, the result, say, 4012, is exactly the same as the 4012 you get when you do the calculation on the actual calculator (or when you do it in your head).

Some of us have reason to believe that consciousness is about (semantic) information processing (information integration?), so simulated consciousness is real consciousness.
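
To illustrate that point (the particular sum and the toy keypress interpreter are invented for the example): the same 4012 comes out whether the addition is done directly or inside a crude simulation of a calculator.

# Sketch: a calculation done directly vs. inside a simulated calculator.
def direct():
    return 2006 + 2006  # any sum works; this one happens to give 4012

def simulated():
    # A toy simulated calculator: a keypress interpreter with a register.
    keys = ["2006", "+", "2006", "="]
    register, pending = 0, None
    for key in keys:
        if key.isdigit():
            register = register + int(key) if pending == "+" else int(key)
            pending = None
        elif key == "+":
            pending = "+"
    return register

assert direct() == simulated() == 4012  # identical result either way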
