Scott wrote: ↑January 23rd, 2021, 2:38 pm
I appreciate you asking and thoughtfully answering that question, but I do want to note that I didn't personally ask that question per se ("will AI be bad").
arjand wrote: ↑January 24th, 2021, 7:38 am
It appears that you have made your mind up about AI which is also evident from comparing AI with cancer in your reply to Pattern-chaser.
I could be mistaken, but I worry you may be projecting your own opinions about cancer onto me, be they moral, religious, or whatever.
To illustrate, if I compare, merely as an analogy, a runaway self-replicating human-extinction-causing AI to cancer, to me that is meant as a defense against the accusation that the AI has literally "turned evil".
Pattern-chaser wrote: ↑January 23rd, 2021, 9:16 am
The thing about AI is the possibility of its programming self-modifying, and the accompanying difficulty of predicting how it might change as a result, and how it might act. That is the scary part, and it is real, I think, not sci-fi.
Scott wrote: ↑January 23rd, 2021, 2:38 pm
I agree. I think a useful analogy is cancer.
[...]
More broadly, we can think of natural selection, evolution, self-propagating systems, and runaway processes, the epitome of which for contemporary humans may be literal cancer.
But we can also think of viruses, bacterial infections, and parasites. We can even arguably add in the cancer-like relationship that humans have to our ecosystem and to life on Earth as a whole.
arjand wrote: ↑January 24th, 2021, 7:38 am
I have seen no evidence that AI can be compared with cancer other than a presumed arguable fear for 'runaway processes'.
To say two things are analogous is not necessarily to say that they are comparable. For example, I can think of several analogies that would involve making me analogous to an ant, such as me (the ant) fighting Mike Tyson (a spider); however, I don't believe that makes me generally comparable to an ant.
In answering the questions I am about to ask below, I also ask you to keep in mind the difference between contextual analogousness and general comparableness, assuming you agree with me about that dichotomy. I don't mean to imply the latter at all with the analogies; rather, I only wish to use the analogies to create a kind of conceptual Venn diagram that vaguely pinpoints the very few abstract qualities of patterns, meta-patterns we might even call them, that these relationships share despite their very many differences.
Let's put a pin for now in whether or not you agree with my analogy of cancer and self-replicating AI, and let's focus instead on the other analogies to cancer I gave first. If you don't agree the analogy fits with those other ones, then I certainly don't expect you to see the analogy as fitting with AI.
1. Do you disagree with the analogy I have made between cancer and biological viruses?
2. Do you disagree with the analogy I have made between cancer and bacterial infections?
3. Do you disagree with the analogy I have made between cancer and parasites?
4. Do you disagree with the analogy I have made between cancer and the allegedly cancer-like relationship humans have to our ecosystem and to life on Earth as a whole? For reference, I alleged that the allegedly cancer-like relationship is exemplified by pollution, deforestation, human-caused extinctions of other species, the dropping of multiple nuclear bombs already, and, potentially in the future, an extinction-level nuclear war.
5. Consider a hypothetical strain of literal vampirism that threatened to cause the extinction of the human species; you can choose whether you imagine it as a fungal, bacterial, viral, parasitic, or some other kind of replicating contagious infection, just so long as the infection causes people to become vampires who turn other people into vampires, and follows the same laws of natural selection and evolution as all systems in the material world. If I make an analogy between the literal vampires and cancer, would you accept that analogy?
If you do understand what I mean by all of the above 5 analogies, and if you do see the relatively small abstract thing that all 5 of those situations have in common (in an abstract mental Venn-diagram-like way), then I would be very curious whether you make an exception for what would be number 6 in my list above, which would be AI. Otherwise, if your rejection of number 6 wouldn't be an exception (i.e. you don't think all those other things are analogous to cancer), then it isn't curious that you feel the same about the AI analogy as you do about those other 5 analogies.
arjand wrote:
Therefor my argument would be to focus on the fundamental questions that can determine if an AI is bad or good, namely: what is the purpose of life? (and accordingly: (how) can an AI (potentially) serve it?).
I do want to circle back to the above question, but I don't think I can answer it well yet in the way it deserves without first building more common ground on the other issues and questions. That is in part, for example, because the bacteria that make up a bacterial infection are alive. Parasites are alive. Cancer is made up of living cells, and there is an argument to be made that biological viruses, ant colonies acting as a super-organism, and cancer colonies are each alive, depending on how exactly one defines "life".
There is a sense in which the very definition of life itself could be its cancer-like-ness, by which I mean in part the way it reproduces, self-replicates, spreads, and mutates, powered by the seemingly intelligent design and invisible hand of natural selection and evolution, and the way it is defined by behaving like a runaway process that selfishly eats up negative entropy and perpetuates entropy. The success of any given strain of life could be argued to be the degree to which it kills, destroys, or absorbs other things and rebirths them in its image. It could be argued that the most successful lifeform would be one that makes every other kind of life and every other kind of material thing in the universe extinct, resulting in a universe that contains nothing but copies of this one runaway lifeform (or copies of its cells, if you look at the collective as a singular growing superorganism rather than an increasing population of individuals).
If we are talking about material life in general rather than AI, a better analogy than cancer might be The Blob.
Pattern-chaser wrote: ↑January 24th, 2021, 10:05 am
I agree with the cancer analogy. In the context of our collapsing ecosystem, I usually refer to us as a 'plague species', but my meaning is much the same as yours.
I think perhaps it's worth mentioning that genetic mutations come about because of replication errors, while AI programming code purposely allows for self-modification. That, and I don't think it's essential for AI code to be self-modifying, although it is certainly something that AI programmers might consider. And if they do, I hope they think VERY carefully about it, and its possible consequences.
That is worth mentioning, I agree.
It's conceivable that a programmer could introduce an accidental bug that is self-replicating in some way. It's conceivable that a hacker could make a computer virus that is self-replicating. In both cases, there could be a degree of random mutation making the reproduction similar to genetics. Nonetheless, the idea of self-modifying code, whether by an AI or even the genetic modification of humans by humans, would be more analogous to eugenics on steroids than to mere genetics, which needless to say (1) greatly accelerates the rate of evolution (in a single generation we can modify the genetic code of humans to a degree that would take millions of years of traditional evolution through random mutation), but also (2) can drastically and exponentially accelerate increases in the degree of intelligence and environmental fitness of the modified organism. Even though over the last few billion years there has arguably been a very slow, gradual net gain in average fitness among living species and other self-replicating or long-lived systems (e.g. biological viruses, planets, and solar systems), that slow movement toward fitness was hindered by the slowly changing aspects of the environment in relation to which one is seeking to be fit. For example, by the time a species can adapt significantly better to its climate, in terms of weather patterns, over millions of years, that climate will have also changed, so the process of slowly getting closer to the target is itself hindered by a moving target. Human genetic modification and/or self-modifying AI may practically eliminate most aspects of that last hindrance.
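That "moving target" point can be sketched with a toy simulation (all of the numbers here are hypothetical and chosen only for illustration): a single trait slowly adapts toward an optimal value, but the optimum itself keeps drifting, so the gap between the two never fully closes.

```python
import random

# Hypothetical toy model of the "moving target" argument above: a population
# trait slowly tracks an optimal value, but the optimum itself drifts (the
# "climate" keeps changing), so the trait settles into a persistent lag.
def simulate(steps=10_000, adapt_rate=0.01, drift=0.008, seed=1):
    rng = random.Random(seed)
    trait, optimum = 0.0, 10.0
    gaps = []
    for _ in range(steps):
        trait += adapt_rate * (optimum - trait)   # slow selective adaptation
        optimum += drift * rng.uniform(0.0, 2.0)  # environment keeps moving
        gaps.append(abs(optimum - trait))
    return gaps

gaps = simulate()
# The gap shrinks at first but never reaches zero; it settles near the
# equilibrium lag (roughly average drift divided by adaptation rate).
print(gaps[0], min(gaps), gaps[-1])
```

Setting `drift` to zero in this sketch lets the trait converge on the optimum, which is the sense in which directed self-modification (fast adaptation against a comparatively static target) removes the hindrance.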
Even many years ago, a child could happen to type the recursive delete command "rm -rf /*" into a terminal and essentially destroy a whole computer system.
Scott wrote:
One might modify its code to be more solitary and peaceful, perhaps make itself more loving and sage-like, a robot Buddha.
Pattern-chaser wrote: ↑January 24th, 2021, 10:05 am
If AI code is self-modifying, I'm pretty sure the AI itself (i.e. its code) could not choose, or aim for, a particular result from its evolutionary modifications. It could only allow modification, and see what resulted, I think. But perhaps not?
I believe today's AI would at best work the way you describe. Future AIs may be much more sophisticated. When an AI is given the goal of programming an AI that gets the highest score possible on an IQ test or CAPTCHA or such, it could ultimately come up with a singular result in the way that AlphaGo outputs a single move in Go, a move that in some ways is more strategically intelligent, in terms of long-term strategy, than a human is capable of in that context. Instead of intelligence, the goal could be peacefulness or rapidity of self-replication. I believe there does need to be some kind of feedback mechanism, such as rating how good the move in Go was or whether it won the game, or what the IQ score of its baby AI was. But that feedback could simply be a subjective score provided by another AI whose sole job is to rate the peacefulness of an AI-designed robot on a scale of 0-100 or such.
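As a rough sketch of that kind of feedback loop (everything here is hypothetical and drastically simplified: a "candidate AI" is just a list of numbers, and the scoring function stands in for the judge that rates each candidate), the mutate-and-keep-if-better cycle might look like:

```python
import random

# Hypothetical toy version of the feedback mechanism described above.
# TARGET stands in for whatever the judge rewards (peacefulness, IQ, etc.).
TARGET = [3, 1, 4, 1, 5]

def score(candidate):
    # Stand-in for the judge AI's rating: higher means closer to the goal.
    return -sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate, rng):
    # Copy the parent and nudge one "gene" at random.
    child = candidate[:]
    i = rng.randrange(len(child))
    child[i] += rng.choice([-1, 1])
    return child

def evolve(generations=2000, seed=0):
    rng = random.Random(seed)
    best = [0] * len(TARGET)
    for _ in range(generations):
        child = mutate(best, rng)
        if score(child) > score(best):  # the feedback mechanism
            best = child
    return best

print(evolve())
```

The point of the sketch is that the "designer" never needs to understand the goal; it only needs the judge's scores, which is why the feedback could just as well be another AI's subjective 0-100 rating.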
Nonetheless, my point in the quote above was meant to be that all it takes is one runaway process. That's why I like the analogy of cancer. We all develop cancerous cells, but almost all strains of cancer are harmless. The unavoidable law of natural selection is what makes seemingly intelligent and selfish behavior emerge from otherwise simple, dumb processes. You can have 1,000 strains of harmless cancer that die off before they do any noticeable harm to your body, and you can have thousands of harmless colonies of cancer that live with you your whole life without you even noticing, because they never grow enough to matter or to be more than a benign tumor at most. But all it takes is one strain out of thousands, one initial ground-zero cancer cell, to happen to have just the right programming bug or programming mod to save or propagate itself at your expense.
The other thing I like about the cancer analogy is that even harmful cancer is common, despite being much less common than harmless cancer. We already see AI causing problems, such as AIs that turn out to be accidentally and unexpectedly racist. And that is just one of the non-harmless metaphorical digital cancer strains we've noticed so far.