Skip Russell wrote: ↑
February 29th, 2020, 6:17 pm
No. The definition of consciousness is synonymous with the experience of having a nervous system. The senses and our thoughts, feelings and subconscious primitive programming are consciousness. I don't think we could ever make a CNS-like thing from scratch outside of natural or perhaps laboratory-involved reproduction. I don't think we'll ever achieve the technology possible to do it - not that it would be impossible.
Is there a theory today that makes the idea plausible?
And if you do not believe it to be impossible, why would it be justified to state that humans will never achieve the technology to do it?
The simple question "what is the purpose of life?" remains unanswered. It appears that it will be important to be able to answer such questions before an AI is let loose. How can an AI determine what is valuable beyond the rules designed by humans of the past (i.e. a purpose of the past)? If an AI were to evolve beyond the human, it would need to be able to stand alone in that regard. As it appears, even humans themselves haven't figured it out yet and are bound by what has been given to them, without an ability to understand it.
Letting an AI loose to see what happens would be a strategic choice that could have serious implications. What purpose would be served? If an AI managed to escape the solar system, or even the Milky Way, to fulfill a purpose envisioned by humans on Earth (e.g. colonizing millions of planets), would that actually be "good"?
It may be wise to look closer to home and consider how humans relate to animals. As it appears, many humans today view animals as meaningless beyond the value that they can see in them. (A ground for) respect for nature is hard to find.
An example can be seen in my topic about the ability to reverse the aging process so that humans could live for 10,000+ years. The question of why humans should respect nature remains unanswered. It seems inevitable that humans in the near future will choose a life span of 10,000+ years.
In a topic about a recent warning by 200 top scientists that millions of insect and animal species will be driven to extinction in the coming decades, a user on this forum (a philosopher) replied that it would be no problem to eradicate the mosquito from Earth. In that same topic, and on a science forum of a major UK university, people with a science or philosophy background appeared to find it difficult to formulate a reason why it would matter if millions of animal species went extinct.
If humans see no reason to respect animals, not even to save them from extinction, why would an AI respect humans, or achieve an ability to serve Nature's bigger whole?
What is "good" beyond the human? How could an AI reach or pursue such a concept when such simple questions cannot be answered today, not even for humans?
If life were to be good as it was, there would be no reason to exist.