Philosophy Discussion Forums | A Humans-Only Club for Open-Minded Discussion & Debate

A one-of-a-kind oasis of intelligent, in-depth, productive, civil debate.

Topics are uncensored, meaning even extremely controversial viewpoints can be presented and argued for, but our Forum Rules strictly require all posters to stay on-topic and never engage in ad hominems or personal attacks.


Discuss any topics related to metaphysics (the philosophical study of the principles of reality) or epistemology (the philosophical study of knowledge) in this forum.
#474007
Sy Borg wrote: April 22nd, 2025, 7:16 pm I used to program with Javascript, and I also learned machine language and BASIC many years ago.
A language is just a tool you use because it's appropriate to the task at hand; you (or a senior designer) choose what language to use for a particular project. As far as software design is concerned, the language used is a minor descriptive detail. This is what I spent my whole working life doing. I honestly do have a reasonable idea of what I'm talking about. And here, that is software design.

Please can we leave this one alone? It doesn't contribute to the topic subject, that I can see.
Favorite Philosopher: Cratylus Location: England
#474015
Pattern-chaser wrote: April 23rd, 2025, 7:14 am [...] This is what I spent my whole working life doing. I honestly do have a reasonable idea of what I'm talking about. And here, that is software design.

Please can we leave this one alone? It doesn't contribute to the topic subject, that I can see.
Why should I let it go? To allow The Great Software Authority to take the last word on AI intelligence, even though his experience presumably concerns only much simpler systems? (If you had designed AI, you would have said so straight away to gain authority in the discussion.)

The best engineers in the world have no idea what's in the black box of a trained AI's neural networks. Thus, you cannot claim to have "a reasonable idea" about the question of AI's (obvious) intelligence. No one knows how it works. Unless you are claiming to be the world's best engineer (hitherto unrecognised for his genius). Is that your claim?

Perhaps your argument is more suitable for arguing that humans are not actually conscious, because consciousness can be seen as just a synergistic collection of unconscious reflexes. However, that reductive argument, like yours, ignores the very real phenomenon of emergence.

Meanwhile ...
David Chalmers says AI systems could be conscious because the brain itself is a machine that produces consciousness, so we know this is possible in principle
https://www.reddit.com/r/OpenAI/comment ... hatgpt.com

Given that, I'm pretty sure that if DC was pressed to give a Yes/No answer to the question of AI consciousness, and he'd lose his house if he was wrong, then he would go with most experts' opinions and say "no".
#474022
Sy Borg wrote: April 23rd, 2025, 3:50 pm Why should I let it go?
Because your knowledge of software design is insufficient for your current needs, in this topic. I do not claim to know all there is to know of current AIs, but I do know enough to comment on your misunderstandings of how electronic kit — i.e. hardware + software — works, or doesn't work. I have worked on small systems and large, across many application areas. And some of the knowledge I so acquired is useful and relevant here.

This topic asks "Can a man-made computer become conscious?", and the smallest details of software design are not helpful to most readers, I suspect. So let's let it go.
Favorite Philosopher: Cratylus Location: England
#474030
Pattern-chaser wrote: April 24th, 2025, 8:13 am Because your knowledge of software design is insufficient for your current needs, in this topic. [...] So let's let it go.
Trouble is, your knowledge is also insufficient - you know no more about the black box nature of neural networks than I do. If you did, you'd be a billionaire.

Your only argument is, "Trust me, I'm an expert" when you obviously are not an expert in AI design. Your appeal to authority is noted, as is the complete lack of substance of your claims.

The fact is that AI is very intelligent in limited fields. It's not sentient, but it's intelligent.
#474041
Sy Borg wrote: April 24th, 2025, 7:50 pm ...you know no more about the black box nature of neural networks than I do.
I'm not sure that's true.

The reason you have read that no human can understand the workings of a neural net is that the writer has misunderstood what they are and how they work. There is no need for humans to understand their inner workings. They are quite different from other computer hardware in the way that they are 'programmed'.

A conventional microprocessor is usually programmed in some language or other, and that code is translated by a compiler into what we call "object code", the executable code that the microprocessor understands as instructions. This has mostly been the case since programmable computers were invented.
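
To make the contrast concrete, here is a minimal sketch of that conventional route, using Python's own built-in compiler and disassembler purely as an analogy (nothing AI-specific, and not any particular product's toolchain):

import dis

# Human-readable source code...
source = "def add(a, b):\n    return a + b\n"

# ...is translated by a compiler into executable instructions (the rough
# analogue of the 'object code' a microprocessor runs).
code_object = compile(source, "<example>", "exec")

# The translated instructions can be listed and read back deterministically,
# which is what makes conventionally programmed software inspectable.
dis.dis(code_object)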

A neural net, on the other hand, is not "programmed" in that way; it is "trained", as you have already mentioned. It begins in a default state, and the AI software adjusts that state by changing the connections between the simulated neurons. The training proceeds by running the net and giving the final result a figure of merit. Then the AI software changes some aspect(s) of the net's configuration, and the net is executed once more. If its performance exceeds that of the previous configuration, the change is retained; if not, it is discarded. This process is repeated many, many times, until the net has been optimised to perform its particular assigned task. "Training", as you say.

The training substitutes for 'programming'. It still ends up getting executable code to where it belongs, but in a very different way. So of course no human can understand its 'programming', as it really doesn't have any. It has instead its own post-training configuration, derived as described.
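
In very crude outline, the trial-and-error loop I am describing looks something like this. It is a minimal sketch of my description above, with made-up names and numbers; no real training system is this simple:

import random

def run_net(weights, x):
    # Stand-in for executing the net on one input.
    return sum(w * x for w in weights)

def figure_of_merit(weights, test_cases):
    # Hypothetical scoring: the smaller the total error, the higher the score.
    return -sum(abs(run_net(weights, x) - target) for x, target in test_cases)

def train(test_cases, n_weights=8, iterations=20_000):
    weights = [0.0] * n_weights                    # the default starting state
    best = figure_of_merit(weights, test_cases)

    for _ in range(iterations):
        candidate = list(weights)
        i = random.randrange(n_weights)
        candidate[i] += random.uniform(-0.1, 0.1)  # change some aspect of the configuration

        score = figure_of_merit(candidate, test_cases)
        if score > best:                           # keep the change only if it performs better
            weights, best = candidate, score

    return weights                                 # a trained configuration, not a program anyone wrote

print(train(test_cases=[(1.0, 3.0), (2.0, 6.0)]))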

We could talk of all manner of other details, like hardware neuron simulation versus neurons simulated wholly in software. [I don't think they use the latter much, as it is heavily processing-intensive.] We could compare the number and nature/function of connections to these neurons to those of human neurons, and so on.

But I have already suggested, several times, that if you really want to delve into this stuff, you need to find someone who knows more than I do, and (stating the obvious 😉) more than you do too. I know more than you, thanks to a lifetime of experience of designing and building electronic hardware and software, but I don't know enough to offer AI-specific knowledge here in this topic.

I have opinions, and so do you. Mine are informed opinions, to the extent that I know what I don't know. I'm not sure that you have yet reached even that stage/state of ignorance. If you want to discuss the detailed operation of AI hardware and software, I suggest a new topic would be more appropriate?
Favorite Philosopher: Cratylus Location: England
#474044
The problem for you is that you only know bits and pieces about the original design of AI, but you have NO IDEA what's happening after an AI is trained. That's no indictment on you, because nobody knows how a trained AI works, what the trained configuration is, or the details of its operation.

In short, no one knows enough to reverse engineer a trained AI. You are in no better a situation than I am in that regard, so you can put away your superior attitude. Your knowledge is not important in context. If I want to know how a programmed thing works, I'll ask you, but your thinking is too locked in to handle this topic.

The fact is that AI is intelligent. That's its job - to be intelligent. It's not conscious, but it has a limited kind of intelligence. We have limited intelligence too, but we have different limitations.
#474055
Sy Borg wrote: April 25th, 2025, 2:55 pm The problem for you is that you only know bits and pieces about the original design of AI, but you have NO IDEA what's happening after an AI is trained. [...] You are in no better a situation than I am in that regard, so you can put away your superior attitude. [...]
You seem determined that everyone share your ignorance. Fair enough; I can't make you do otherwise, and don't want to. But you are mistaken if you think that the internals of AI are magic. They aren't.

As I explained in my previous note, reverse-engineering an AI is trivial. But what you get when you do is not a "program", but a connection-map (of how the neurons are inter-connected). And because this map was derived more or less randomly, it 'makes no sense'. There is no planned structure to it, as a 'program' would have. All we know is that it is tested (trained) and found to work. But that limit applies only to its core 'programming', which isn't really programming at all, not to everything we know about the AIs we design and build.
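
To illustrate, with placeholder values only (a real connection-map would be vastly larger): reading the map out is the easy part; nothing in it tells you why it works.

import random

# Placeholder connection-map: just a table of numbers, with no designed structure.
connection_map = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]

for i, row in enumerate(connection_map):
    for j, w in enumerate(row):
        print(f"neuron {i} -> neuron {j}: weight {w:+.3f}")
# Every value is readable; none of them is labelled with a purpose.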

I'm sorry that you see a "superior attitude" in what I write. Oddly enough, I see that in your responses, quite strongly. Perhaps I am mistaken, as you are? Anyway, I meant/intended no offence, and I apologise if I failed in that.
Favorite Philosopher: Cratylus Location: England
#474059
Pattern-chaser wrote: April 26th, 2025, 6:30 am [...] But you are mistaken if you think that the internals of AI are magic. They aren't.

[...] I'm sorry that you see a "superior attitude" in what I write. Oddly enough, I see that in your responses, quite strongly. [...]
Keep your insincere apologies to yourself. You now resort to an ad hom. Also you indulge in misrepresentation - I have not claimed that trained AI operations were "magic", just that they are not known.

You do not know what's happening in a trained AI, despite your programming expertise. If you understood how a trained AI works, you could reverse engineer it. But you can't, just as no one can fully reverse engineer life or consciousness.

The simple fact that you cannot reverse engineer trained AI renders moot your argument that they are just programmed things (and you know the programming, so you know best). QED.

Anyone who regularly uses AI knows that it is very obviously intelligent, far beyond the non-trainable AI we have used in the past. An old-school chatbot that is entirely programmed is a very different beast to today's LLMs. The former were not intelligent, although they could provide responses that were ostensibly intelligent. Modern LLMs are not just upscaled chatbots - they are trained, and they learn, becoming more flexible and adaptive.
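
For contrast, the old "entirely programmed" style of chatbot amounts to something like this (a made-up, minimal sketch; every response path is written by hand and nothing is learned from data):

RULES = {
    "hello": "Hello! How can I help?",
    "weather": "I'm afraid I can't check the weather.",
}

def old_school_chatbot(message):
    text = message.lower()
    for trigger, reply in RULES.items():
        if trigger in text:          # every response path is hand-written
            return reply
    return "I don't understand."     # anything outside the rules simply fails

print(old_school_chatbot("Hello there"))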
#474063
Sy Borg wrote: April 26th, 2025, 6:44 pm If you understood how a trained AI works, you could reverse engineer it. But you can't, just as no one can fully reverse engineer life or consciousness.
Can you "reverse-engineer" the recorded combinations that come up on a one-armed bandit? Not even if you understand how the one-armed bandit works? No. And doing the same to an AI? Also no, and for similar reasons.

The neuron configuration (i.e. interconnections) of a neural net is decided by 'rolling a die', and then its operation is tested to see if it works. The configuration contains no 'sense' to be understood by reverse engineering, because there is no 'sense' there. We can easily read the configuration, but puzzling out how it works is impossible once the net has exceeded a minimum number of neurons. [That minimum number, beyond which understanding is practically impossible, is much smaller than you'd think, as permutations increase.]
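
As a rough back-of-envelope illustration of how quickly those permutations grow (my own made-up arithmetic, counting only whether each possible connection is present or absent):

for n in (3, 5, 10, 20):
    links = n * (n - 1)        # possible directed connections between n neurons
    patterns = 2 ** links      # each connection either present or absent
    print(f"{n:>2} neurons: {links:>3} links, about {patterns:.2e} possible connection patterns")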




This property of neural nets was/is not new to software designers. I first encountered it years ago, in reverse, in so-called "Monte Carlo testing". Test parameters are generated pseudo-randomly, to give coverage of a large operational area without needing a very large number of tests. But this means that what is tested will be different every time, and there's no telling what combination of properties will be verified by the randomly-generated tests. It also means that a test failure is not repeatable, again because of the random aspect(s) of the tests. Many designers eschew Monte Carlo testing because of this indeterminacy.
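
In crude outline, it looks something like this (a minimal sketch; the function under test and the parameter ranges are made-up examples):

import random

def saturating_add(a, b, limit=255):
    # Hypothetical function under test.
    return min(a + b, limit)

def monte_carlo_test(n_tests=1000):
    for _ in range(n_tests):
        a = random.randint(0, 255)    # test parameters generated pseudo-randomly,
        b = random.randint(0, 255)    # so each run exercises a different set of cases
        result = saturating_add(a, b)
        assert 0 <= result <= 255, f"failed for a={a}, b={b}: got {result}"

monte_carlo_test()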

This isn't identical to neural nets, but it is the same problem showing up in a different circumstance. And it introduces the same design compromises: if it works, we don't know why or how. And so, if (say) the design needs extending or enhancing, as is almost always the case in our commercial world, this isn't possible without just starting again.
Favorite Philosopher: Cratylus Location: England
#474067
Pattern-chaser wrote: April 27th, 2025, 6:49 am [...] We can easily read the configuration, but puzzling out how it works is impossible once the net has exceeded a minimum number of neurons. [...]
" puzzling out how it works is impossible once the net has exceeded a minimum number of neurons"

There's the rub. It's too complex for us to understand.

There is very little debate about this. The vast majority of people dealing with AI see it as intelligent - not human intelligence, but intelligence nonetheless. Likewise, the vast majority don't see it as conscious.

AI's intelligence goes far beyond a poker machine's random number generation. Poker machines rely on pre-programmed odds and simple algorithms. LLMs can learn from vast datasets, understand context, generate creative responses, and adapt to new tasks. They can write essays, translate languages, and play chess at a superhuman level, which requires reasoning and pattern recognition far beyond a poker machine's capabilities.
#474076
Sy Borg wrote: April 27th, 2025, 4:18 pm [...] There's the rub. It's too complex for us to understand. [...] AI's intelligence goes far beyond a poker machine's random number generation. [...]
I'm sorry. This exchange has far exceeded my abilities to explain. There's so much that you don't know. We can't all know everything!

There is no comparison between an AI and a "poker machine". It is in the AI's training that the randomness plays its part, while the poker machine uses randomness in its *operation*. And so on.

We can't "understand" the connection-map of a neural net because there's nothing there to 'understand'. To understand something like this, we need some structure there, else what is it that we will try to understand? And a pseudo-randomly generated connection map has no structure; that's why we can't understand it. And so on.

I think we're done here, in this exchange. We're not getting anywhere...
Favorite Philosopher: Cratylus Location: England
#474082
Search for an article published on PhysOrg (phys.org) called "Increased AI use linked to eroding critical thinking skills". It's worth reading, I think...?
Justin Jackson (on PhysOrg) wrote: A study by Michael Gerlich at SBS Swiss Business School has found that increased reliance on artificial intelligence (AI) tools is linked to diminished critical thinking abilities. It points to cognitive offloading as a primary driver of the decline.

AI's influence is growing fast. A quick search of AI-related science stories reveals how fundamental a tool it has become. Thousands of AI-assisted, AI-supported and AI-driven analyses and decision-making tools help scientists improve their research.

[...]

In the study "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," published in Societies, Gerlich investigates whether AI tool usage correlates with critical thinking scores and explores how cognitive offloading mediates this relationship.
Favorite Philosopher: Cratylus Location: England
#474090
Pattern-chaser wrote: April 28th, 2025, 7:47 am [...] We can't "understand" the connection-map of a neural net because there's nothing there to 'understand'. [...] And a pseudo-randomly generated connection map has no structure; that's why we can't understand it. [...]
I'd already made clear that intelligence is not about the processes, it's about the results - not that I should have even had to say it because it's pretty obvious. Alas, you are far too superior to pay attention to a mere mortal like me.

Again, reductionism is invalid as an analysis tool in this context. You could just as easily claim that neurons are not intelligent and that the cerebral cortex, white matter and limbic system are lacking in structure - and that's in a system that produces not only intelligence (and far more of it) but also consciousness.

You know you're wrong. Why not just admit it?
#474091
Pattern-chaser wrote: April 28th, 2025, 10:32 am Search for an article published on PhysOrg (phys.org) called "Increased AI use linked to eroding critical thinking skills". [...]
This is off topic and should be a new thread, not that it's anything new. Humans have long been delegating their intelligence to machines (and society), just as they delegated physical tasks. Thus, humans have smaller brains and muscles than their ancestors. Humans were once generalists without machines. Now humans are increasingly specialised in their abilities, and machines cover the generalities.