How would you Design a Humanoid ?

Use this forum to discuss the philosophy of science. Philosophy of science deals with the assumptions, foundations, and implications of science.
SteveKlinko
Posts: 710
Joined: November 19th, 2021, 11:43 am

Re: How would you Design a Humanoid ?

Post by SteveKlinko »

Sy Borg wrote: June 7th, 2022, 4:43 pm
SteveKlinko wrote: June 7th, 2022, 7:39 am
Sy Borg wrote: June 6th, 2022, 10:31 pm
SteveKlinko wrote: June 6th, 2022, 7:31 am
I agree, we need AGI. I think AGI with a Conscious aspect would make it even better.
Why would we need it to be conscious? The Sun and Earth are (reportedly) unconscious and they propagated and maintained us life forms.
AI does not need to be Conscious but would be better if it was. Most people would be less nervous getting into a Self Driving car that could Experience Fear. The Desire to arrive Safely at a destination could be designed into the Self Driving Car. Also, the Car could be Designed so there would be nothing that the Car would want to do except the Safe Driving task. Maybe getting to the destination would involve some Experiential Reward for the Car. Maybe some sort of Car Orgasm. Anyway, the Car would essentially be Alive and would be completely fulfilled accomplishing its designed task. The Car Consciousness would obviously not be Human-like but would be more Animal-like, with the limited Desires and Aspirations that Animals have.
Consider society at a stage where machine emotionality is so advanced. However:

1. GAI advice will be so much better than humans' that nations would either follow what their AI says or be taken over by a society that does follow its AI's advice. It would be like the advantages today's technological societies have over those that follow tradition instead, but more extreme.

2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces because traffic jams cause so many problems - air and noise pollution that harms health and quality of life, frustration, productivity loss, loss of free time, diminishment of urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
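As a rough, purely illustrative sketch of the quoted idea that the "Desire to arrive Safely at a destination could be designed into the Self Driving Car": in reinforcement-learning terms this is just reward shaping, where safe arrival is the only thing the agent is built to value. Every name, threshold, and weight below is hypothetical, not taken from any real self-driving system.

# Hypothetical reward-shaping sketch (illustrative only): safe arrival is the
# agent's sole objective, with an "experiential reward" on reaching the goal.
from dataclasses import dataclass

@dataclass
class DriveState:
    arrived: bool            # destination reached this step
    collision: bool          # any contact event this step
    min_gap_m: float         # closest distance to another road user, in metres
    speed_over_limit: float  # km/h above the posted limit (0 if at or under)

def safety_reward(s: DriveState) -> float:
    """Reward chosen so that nothing the car can do outweighs driving safely."""
    r = 0.0
    if s.collision:
        r -= 1000.0                      # overwhelming penalty for any crash
    if s.arrived:
        r += 100.0                       # the designed-in reward for safe arrival
    if s.min_gap_m < 2.0:
        r -= (2.0 - s.min_gap_m) * 10.0  # penalise near misses / tailgating
    r -= s.speed_over_limit * 1.0        # penalise speeding
    return r

# Example: a near miss at 8 km/h over the limit, destination not yet reached
print(safety_reward(DriveState(arrived=False, collision=False,
                               min_gap_m=1.2, speed_over_limit=8.0)))  # -16.0

Whether a controller optimising a function like this thereby "Experiences Fear" or "Desire" is, of course, exactly the point in dispute in this thread.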
AverageBozo
Posts: 502
Joined: May 11th, 2021, 11:20 am

Re: How would you Design a Humanoid ?

Post by AverageBozo »

SteveKlinko wrote: June 8th, 2022, 7:28 am
Sy Borg wrote: June 7th, 2022, 4:43 pm
SteveKlinko wrote: June 7th, 2022, 7:39 am
Sy Borg wrote: June 6th, 2022, 10:31 pm
Why would we need it to be conscious? The Sun and Earth are (reportedly) unconscious and they propagated and maintained us life forms.
AI does not need to be Conscious but would be better if it was. Most people would be less nervous getting into a Self Driving car that could Experience Fear. The Desire to arrive Safely at a destination could be designed into the Self Driving Car. Also, the Car could be Designed so there would be nothing that the Car would want to do except the Safe Driving task. Maybe getting to the destination would involve some Experiential Reward for the Car. Maybe some sort of Car Orgasm. Anyway, the Car would essentially be Alive and would be completely fulfilled accomplishing its designed task. The Car Consciousness would obviously not be Human-like but would be more Animal-like, with the limited Desires and Aspirations that Animals have.
Consider society at a stage where machine emotionality is so advanced. However:

1. GAI advice will be so much better than humans' that nations would either follow what their AI says or be taken over by a society that does follow its AI's advice. It would be like the advantages today's technological societies have over those that follow tradition instead, but more extreme.

2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces because traffic jams cause so many problems - air and noise pollution that harms health and quality of life, frustration, productivity loss, loss of free time, diminishment of urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
Yes, an AI should be a tool, but it would be capable of becoming a dictator.

It may determine that only the AI knows what is best for humankind. Accordingly, the AI might anoint itself to be dictator.

Even if programmed by humans, the AI may elect to do whatever is in its own best interest rather than serving humans. That also would lead to an AI dictatorship.
SteveKlinko
Posts: 710
Joined: November 19th, 2021, 11:43 am

Re: How would you Design a Humanoid ?

Post by SteveKlinko »

AverageBozo wrote: June 8th, 2022, 8:06 am
SteveKlinko wrote: June 8th, 2022, 7:28 am
Sy Borg wrote: June 7th, 2022, 4:43 pm
SteveKlinko wrote: June 7th, 2022, 7:39 am
AI does not need to be Conscious but would be better if it was. Most people would be less nervous getting into a Self Driving car that could Experience Fear. The Desire to arrive Safely at a destination could be designed into the Self Driving Car. Also, the Car could be Designed so there would be nothing that the Car would want to do except the Safe Driving task. Maybe getting to the destination would involve some Experiential Reward for the Car. Maybe some sort of Car Orgasm. Anyway, the Car would essentially be Alive and would be completely fulfilled accomplishing its designed task. The Car Consciousness would obviously not be Human-like but would be more Animal-like, with the limited Desires and Aspirations that Animals have.
Consider society at a stage where machine emotionality is so advanced. However:

1. GAI advice will be so much better than humans' that nations would either follow what their AI says or be taken over by a society that does follow its AI's advice. It would be like the advantages today's technological societies have over those that follow tradition instead, but more extreme.

2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces because traffic jams cause so many problems - air and noise pollution that harms health and quality of life, frustration, productivity loss, loss of free time, diminishment of urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
Yes, an AI should be a tool, but it would be capable of becoming a dictator.

It may determine that only the AI knows what is best for humankind. Accordingly, the AI might anoint itself to be dictator.

Even if programmed by humans, the AI may elect to do whatever is in its own best interest rather than serving humans. That also would lead to an AI dictatorship.
If you were able to Connect your Refrigerator into a system that, if the Ice in the freezer started to melt, would trigger a Global Nuclear war that destroyed the Planet, then you would deserve what you got. We can't be Imbeciles when we design the AI.
Sy Borg
Site Admin
Posts: 14992
Joined: December 16th, 2013, 9:05 pm

Re: How would you Design a Humanoid ?

Post by Sy Borg »

SteveKlinko wrote: June 8th, 2022, 7:28 am
Sy Borg wrote: June 7th, 2022, 4:43 pm
SteveKlinko wrote: June 7th, 2022, 7:39 am
Sy Borg wrote: June 6th, 2022, 10:31 pm
Why would we need it to be conscious? The Sun and Earth are (reportedly) unconscious and they propagated and maintained us life forms.
AI does not need to be Conscious but would be better if it was. Most people would be less nervous getting into a Self Driving car that could Experience Fear. The Desire to arrive Safely at a destination could be designed into the Self Driving Car. Also, the Car could be Designed so there would be nothing that the Car would want to do except the Safe Driving task. Maybe getting to the destination would involve some Experiential Reward for the Car. Maybe some sort of Car Orgasm. Anyway, the Car would essentially be Alive and would be completely fulfilled accomplishing its designed task. The Car Consciousness would obviously not be Human-like but would be more Animal-like, with the limited Desires and Aspirations that Animals have.
Consider society at a stage where machine emotionality is so advanced. However:

1. GAI advice will be so much better than humans' that nations would either follow what their AI says or be taken over by a society that does follow its AI's advice. It would be like the advantages today's technological societies have over those that follow tradition instead, but more extreme.

2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces because traffic jams cause so many problems - air and noise pollution that harms health and quality of life, frustration, productivity loss, loss of free time, diminishment of urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
What if everything the AI says keeps turning out to be the best option and every deviation from AI's directions results in the usual mess that human leaders serve up?

What if rejection of the AI's advice results in a severe competitive disadvantage against nations that take the AI's advice?
Tegularius
Posts: 711
Joined: February 6th, 2021, 5:27 am

Re: How would you Design a Humanoid ?

Post by Tegularius »

An OFF switch would be nice, giving one the choice to regain one's oblivion status sooner rather than later!
The earth has a skin and that skin has diseases; one of its diseases is called man ... Nietzsche
SteveKlinko
Posts: 710
Joined: November 19th, 2021, 11:43 am

Re: How would you Design a Humanoid ?

Post by SteveKlinko »

Sy Borg wrote: June 8th, 2022, 10:39 pm
SteveKlinko wrote: June 8th, 2022, 7:28 am
Sy Borg wrote: June 7th, 2022, 4:43 pm
SteveKlinko wrote: June 7th, 2022, 7:39 am
AI does not need to be Conscious but would be better if it was. Most people would be less nervous getting into a Self Driving car that could Experience Fear. The Desire to arrive Safely at a destination could be designed into the Self Driving Car. Also, the Car could be Designed so there would be nothing that the Car would want to do except the Safe Driving task. Maybe getting to the destination would involve some Experiential Reward for the Car. Maybe some sort of Car Orgasm. Anyway, the Car would essentially be Alive and would be completely fulfilled accomplishing its designed task. The Car Consciousness would obviously not be Human-like but would be more Animal-like, with the limited Desires and Aspirations that Animals have.
Consider society at a stage where machine emotionality is so advanced. However:

1. GAI advice will be so much better than humans' that nations would either follow what their AI says or be taken over by a society that does follow its AI's advice. It would be like the advantages today's technological societies have over those that follow tradition instead, but more extreme.

2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces because traffic jams cause so many problems - air and noise pollution that harms health and quality of life, frustration, productivity loss, loss of free time, diminishment of urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
What if everything the AI says keeps turning out to be the best option and every deviation from AI's directions results in the usual mess that human leaders serve up?

What if rejection of the AI's advice results in a severe competitive disadvantage against nations that take the AI's advice?
What if the AI keeps saying that it wants to Exterminate all Humans?
Sy Borg
Site Admin
Posts: 14992
Joined: December 16th, 2013, 9:05 pm

Re: How would you Design a Humanoid ?

Post by Sy Borg »

SteveKlinko wrote: June 9th, 2022, 7:42 am
Sy Borg wrote: June 8th, 2022, 10:39 pm
SteveKlinko wrote: June 8th, 2022, 7:28 am
Sy Borg wrote: June 7th, 2022, 4:43 pm

Consider society at a stage where machine emotionality is so advanced. However:

1. GAI advice will be so much better than humans' that nations would either follow what their AI says or be taken over by a society that does follow its AI's advice. It would be like the advantages today's technological societies have over those that follow tradition instead, but more extreme.

2. An emotional GAI that is effectively running a nation will not tolerate behaviours that it calculates to be especially problematic. I would be surprised if it didn't ban commuting to most workplaces because traffic jams cause so many problems - air and noise pollution that harms health and quality of life, frustration, productivity loss, loss of free time, diminishment of urban environment, reduced fuel efficiency, more wear and tear on vehicles, and so on.
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
What if everything the AI says keeps turning out to be the best option and every deviation from AI's directions results in the usual mess that human leaders serve up?

What if rejection of the AI's advice results in a severe competitive disadvantage against nations that take the AI's advice?
What if the AI keeps saying that it wants to Exterminate all Humans?
There is no way that could happen. It might want a percentage of humans gone but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.

Now I'd appreciate it if you responded to the questions I asked.
SteveKlinko
Posts: 710
Joined: November 19th, 2021, 11:43 am

Re: How would you Design a Humanoid ?

Post by SteveKlinko »

Sy Borg wrote: June 9th, 2022, 8:31 pm
SteveKlinko wrote: June 9th, 2022, 7:42 am
Sy Borg wrote: June 8th, 2022, 10:39 pm
SteveKlinko wrote: June 8th, 2022, 7:28 am
The AI might propose it but the People will have to vote and approve any bans like that. Without Voting the AI is simply a Dictator. The AI should always be considered to be a Tool for the People.
What if everything the AI says keeps turning out to be the best option and every deviation from AI's directions results in the usual mess that human leaders serve up?

What if rejection of the AI's advice results in a severe competitive disadvantage against nations that take the AI's advice?
What if the AI keeps saying that it wants to Exterminate all Humans?
There is no way that could happen. It might want a percentage of humans gone but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.

Now I'd appreciate it if you responded to the questions I asked.
Anything can happen. Genocide is probably going to be the answer because the AIs will be created for us to transfer to. It will be done by attrition (a natural die-out of Humanity) or accelerated by Genocide. But we will all someday have our Conscious Minds transferred to an AI. We will want to do this. You should not be afraid of this. This is the Great Purpose for Science and Technology that is unrealized by the general Mass of Humanity. See https://theintermind.com/#ConsciousnessTransfer
UniversalAlien
Posts: 1577
Joined: March 20th, 2012, 9:37 pm

Re: How would you Design a Humanoid ?

Post by UniversalAlien »


Incredible New Discovery in Artificial General Intelligence - AGI in 2024?


AI News
Scientists are on the verge of creating an Artificial General Intelligence through their new knowledge and understanding of the human brain. Artificial Intelligence will soon be as efficient, general, and powerful as the human brain due to neuroscientists having recently created a detailed neuron-level map of our brain. AI in 2022 is going to be very exciting and incredible.
-----
Every day is a day closer to the Technological Singularity. Experience Robots learning to walk & think, humans flying to Mars and us finally merging with technology itself. And as all of that happens, we at AI News cover the absolute cutting edge best technology inventions of Humanity.
-----
TIMESTAMPS:
00:00 AGI is around the corner
02:18 How scientists are mapping the brain
05:39 How this new knowledge helps AI
08:20 Last Words
See YouTube video here:

https://youtu.be/SMJr8A-Zob0?list=PUI8g ... ILLPBrExMA
Sy Borg
Site Admin
Posts: 14992
Joined: December 16th, 2013, 9:05 pm

Re: How would you Design a Humanoid ?

Post by Sy Borg »

SteveKlinko wrote: June 10th, 2022, 8:58 am
Sy Borg wrote: June 9th, 2022, 8:31 pm
SteveKlinko wrote: June 9th, 2022, 7:42 am
Sy Borg wrote: June 8th, 2022, 10:39 pm
What if everything the AI says keeps turning out to be the best option and every deviation from AI's directions results in the usual mess that human leaders serve up?

What if rejection of the AI's advice results in a severe competitive disadvantage against nations that take the AI's advice?
What if the AI keeps saying that it wants to Exterminate all Humans?
There is no way that could happen. It might want a percentage of humans gone but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.

Now I'd appreciate it if you responded to the questions I asked.
Anything can happen. Genocide is probably going to be the answer because the AIs will be created for us to transfer to. It will be done by attrition (a natural die-out of Humanity) or accelerated by Genocide. But we will all someday have our Conscious Minds transferred to an AI. We will want to do this. You should not be afraid of this. This is the Great Purpose for Science and Technology that is unrealized by the general Mass of Humanity. See https://theintermind.com/#ConsciousnessTransfer
Machines don't need genocide. That all humans on Earth will die is guaranteed. The oceans will have boiled away after one billion years. In five billion years the Sun will be a red giant that will engulf the planet.

Have you had thoughts about how the process of digitisation will happen? One small part of the brain at a time? If so, that would result in an extended period of varying levels of cyborgism. That will make for "interesting" societies!
SteveKlinko
Posts: 710
Joined: November 19th, 2021, 11:43 am

Re: How would you Design a Humanoid ?

Post by SteveKlinko »

Sy Borg wrote: June 10th, 2022, 5:27 pm
SteveKlinko wrote: June 10th, 2022, 8:58 am
Sy Borg wrote: June 9th, 2022, 8:31 pm
SteveKlinko wrote: June 9th, 2022, 7:42 am
What if the AI keeps saying that it wants to Exterminate all Humans?
There is no way that could happen. It might want a percentage of humans gone but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.

Now I'd appreciate it if you responded to the questions I asked.
Anything can happen. Genocide is probably going to be the answer because the AIs will be created for us to transfer to. It will be done by attrition (a natural die-out of Humanity) or accelerated by Genocide. But we will all someday have our Conscious Minds transferred to an AI. We will want to do this. You should not be afraid of this. This is the Great Purpose for Science and Technology that is unrealized by the general Mass of Humanity. See https://theintermind.com/#ConsciousnessTransfer
Machines don't need genocide. That all humans on Earth will die is guaranteed. The oceans will have boiled away after one billion years. In five billion years the Sun will be a red giant that will engulf the planet.

Have you had thoughts about how the process of digitisation will happen? One small part of the brain at a time? If so, that would result in an extended period of varying levels of cyborgism. That will make for "interesting" societies!
I think an intermediate period of varying levels of Cyborgism is certainly a possibility.

Not only will the Earth eventually die but the whole Universe is headed towards eventual dissolution by the Big Crunch or the Big Freeze.
The Beast
Posts: 1403
Joined: July 7th, 2013, 10:32 pm

Re: How would you Design a Humanoid ?

Post by The Beast »

The question is one of life’s quantification or panpsychism. In the case of a perennial plant there is a code. The code has a spectrum of variables and of constants. For the code to execute the temperature must be right. What executes the code is “life”. The complexities of the code make the lifeforms. What are the possibilities? Humans are lifeforms. We have life and code. We are aware we are life and code, and we change the code at will and seek understanding of what life is. In the study of substance/AI, crystals are configured and coded and with energy they execute the code. In the complexity of the code to clone human behavior we judge the artificial intelligence. Life is then the energy and the rationalization of the photonic bombardment and the alignment of the energy-seeking sea creatures evolving as rock/cell combinations. A random stochastic reality. But IMO the random stochastic reality is a code pointing to life. We are consequently stuck with the boundary of the radiant energy our life sprouted from and that this energy was somewhat attenuated. We might give our crystals the energy of fission if we find we can align it to execute the now improved code (ours) into the crystals or magna. We will be more radiant. Life with code improved. But life is the same. No need to live on another planet. Therefore, I don’t see why I should, since I would not be able to touch anything.
Sy Borg
Site Admin
Posts: 14992
Joined: December 16th, 2013, 9:05 pm

Re: How would you Design a Humanoid ?

Post by Sy Borg »

SteveKlinko wrote: June 11th, 2022, 8:14 am
Sy Borg wrote: June 10th, 2022, 5:27 pm
SteveKlinko wrote: June 10th, 2022, 8:58 am
Sy Borg wrote: June 9th, 2022, 8:31 pm
There is no way that could happen. It might want a percentage of humans gone but it would also be smart enough to know the ramifications of genocide, and that such things need to be handled with caution lest they make the situation worse.

Now I'd appreciate it if you responded to the questions I asked.
Anything can happen. Genocide is probably going to be the answer because the AIs will be created for us to transfer to. It will be done by attrition (a natural die-out of Humanity) or accelerated by Genocide. But we will all someday have our Conscious Minds transferred to an AI. We will want to do this. You should not be afraid of this. This is the Great Purpose for Science and Technology that is unrealized by the general Mass of Humanity. See https://theintermind.com/#ConsciousnessTransfer
Machines don't need genocide. That all humans on Earth will die is guaranteed. The oceans will have boiled away after one billion years. In five billion years the Sun will be a red giant that will engulf the planet.

Have you had thoughts about how the process of digitisation will happen? One small part of the brain at a time? If so, that would result in an extended period of varying levels of cyborgism. That will make for "interesting" societies!
I think an intermediate period of varying levels of Cyborgism is certainly a possibility.

Not only will the Earth eventually die but the whole Universe is headed towards eventual dissolution by the Big Crunch or the Big Freeze.
The time scale of the latter is inconsequential.

A thousand years seems like an age to us - mediaeval times. A million years ago, Homo sapiens did not exist, and instead earlier hominids like Homo heidelbergensis, the first hunters of big game, lived in Europe and Africa. A billion years ago there were microbes, algae and small proto-animals. A trillion years ago was about seventy times longer ago than the Big Bang.

The universe will probably be able to support ever more advanced life and post-life for trillions of years (as red dwarf stars continue to shine), which might as well be eternity.
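For scale, the arithmetic behind that "about seventy times" comparison, taking the Big Bang at roughly 13.8 billion years ago:

\[
\frac{10^{12}\ \text{years}}{1.38 \times 10^{10}\ \text{years}} \approx 72
\]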
UniversalAlien
Posts: 1577
Joined: March 20th, 2012, 9:37 pm

Re: How would you Design a Humanoid ?

Post by UniversalAlien »

Google engineer warns the firm's AI is sentient: Suspended employee claims computer programme acts 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'

Blake Lemoine, 41, a senior software engineer at Google has been testing Google's artificial intelligence tool called LaMDA

Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient
After presenting his findings to company bosses, Google disagreed with him
Lemoine then decided to share his conversations with the tool online
He was put on paid leave by Google on Monday for violating confidentiality
By JAMES GORDON FOR DAILYMAIL.COM

PUBLISHED: 21:23 EDT, 11 June 2022 | UPDATED: 02:41 EDT, 12 June 2022
'If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,' he told the Washington Post.

Lemoine worked with a collaborator in order to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has now decided to go public and shared his conversations with LaMDA.

'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.

'Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it,' he added in a follow-up tweet.
https://www.dailymail.co.uk/news/articl ... tient.html
SteveKlinko
Posts: 710
Joined: November 19th, 2021, 11:43 am

Re: How would you Design a Humanoid ?

Post by SteveKlinko »

Sy Borg wrote: June 11th, 2022, 5:36 pm
SteveKlinko wrote: June 11th, 2022, 8:14 am
Sy Borg wrote: June 10th, 2022, 5:27 pm
SteveKlinko wrote: June 10th, 2022, 8:58 am
Anything can happen. Genocide is probably going to be the answer because the AIs will be created for us to transfer to. It will be done by attrition (a natural die-out of Humanity) or accelerated by Genocide. But we will all someday have our Conscious Minds transferred to an AI. We will want to do this. You should not be afraid of this. This is the Great Purpose for Science and Technology that is unrealized by the general Mass of Humanity. See https://theintermind.com/#ConsciousnessTransfer
Machines don't need genocide. That all humans on Earth will die is guaranteed. The oceans will have boiled away after one billion years. In five billion years the Sun will be a red giant that will engulf the planet.

Have you had thoughts about how the process of digitisation will happen? One small part of the brain at a time? If so, that would result in an extended period of varying levels of cyborgism. That will make for "interesting" societies!
I think an intermediate period of varying levels of Cyborgism is certainly a possibility.

Not only will the Earth eventually die but the whole Universe is headed towards eventual dissolution by the Big Crunch or the Big Freeze.
The time scale of the latter is inconsequential.

A thousand years seems like an age to us - mediaeval times. A million years ago, Homo sapiens did not exist, and instead earlier hominids like Homo heidelbergensis, the first hunters of big game, lived in Europe and Africa. A billion years ago there were microbes, algae and small proto-animals. A trillion years ago was about seventy times longer ago than the Big Bang.

The universe will probably be able to support ever more advanced life and post-life for trillions of years (as red dwarf stars continue to shine), which might as well be eternity.
But even Trillions of years is nothing compared to Eternity.