
Tuesday, March 3, 2020

Would you become a robot for the chance at a longer life?

It’s a tough question to answer. There are so many angles to consider—so many existential issues to ponder, like “What does it mean to be human?”

After considering this question for a little while, I’ve come to a few important points of contemplation. Hopefully my musings on the subject will help you in your own pondering, for if (or maybe even when) you have to make this decision for yourself.

In Mark Alpert’s The Six, a number of terminally ill teenagers have their minds copied and transferred into robots. The book is quite useful for thinking through this issue because the author crafts a contemplative tone and offers perspectives from several characters, showing both the positive and the negative sides.

The main positive is the most obvious one: performing this procedure could be seen as saving people’s lives. This is how Adam’s father feels (51, 75). In theory, the brain could be copied completely, including every aspect that makes us who we are, so we could be uploaded into robots and still remain essentially ourselves. I suppose such science could eventually be possible, since machine learning is often compared to the way the human brain develops.

These human-robot hybrids would be internal hybrids rather than external ones, but I would still consider them cyborgs.
Another positive side to this has more to do with humanity than with the individual. Some people believe we would be more successful at making safe AI if it had a human core, because such an AI would already have a sense of ethics and morals, plus an understanding of humans. This idea could be taken even further, considering these cyborgs (so to speak) as bridges between humans and AI. That becomes especially important as we approach the Singularity point, the theoretical time when artificial machine intelligence will surpass our own (49).

We would, however, have to choose very carefully who was transferred. Certain people would be no better for humanity in terms of ethics and morals than AIs would. No one wants someone like the evil Dr. Zola from the Marvel Cinematic Universe to have the superhuman capabilities of an AI (see Captain America: The Winter Soldier).

Although Adam’s dad raises an important point about the potential to save lives, Adam’s mom brings in the opposite perspective, seeing the procedure as destroying people rather than saving them. While she argues with Adam, she raises a heart-stopping question: what about the soul? (76). If your body dies but your mind lives on in a metal shell, what happens to your soul? It is a somewhat terrifying concept to consider, especially when you have no idea what the answer might be.

As I said before, The Six helpfully offers a number of different perspectives on the issue of transferring humans to AI. The juxtaposition of Adam’s parents’ responses is very important for readers who are attempting to consider all possible sides of this issue.

The soul is typically considered a vital part of who we are as humans. Along with the idea of potentially losing that part of us during the transfer comes the issue of the body. I think a lot of people would feel they had lost so much by losing their bodies. What about physical contact (of any kind)? You couldn’t have human-feeling contact in a robot body. Even if our technology did advance far enough to include detailed sensors, would it really be the same?

Much of Adam’s response relates to his body. Before and after the procedure, his body is what his mind is absorbed by.

All my attention is focused on my right hand, which now rests on my thigh.
I grasp the meager flesh there, the stiff band of dead muscle, and squeeze it as hard as I can.
Though it’s broken and dying, this is my body. How could I exist without it? (67)

Adam’s before-procedure musings bring up an important point. Our bodies are a fundamental part of who we are. Would we be human without them?

And wouldn’t most of us miss our bodies, even if we could survive without them?

I’ve been a machine for less than fifteen minutes,
but already I want to be human again. (126)

After Adam undergoes the procedure, he sees his new form and feels that it can’t truly be him (121). When Adam goes to find his body, he laments its loss. He doesn’t feel whole anymore.

I’ve lost the best part of me. I’ve lost it forever. (125)

Could we truly be human as just data in a computer? With no human body and potentially no soul?

And one more ethical issue for your contemplation: Given all we’ve learned so far, would it be right to do such things to a human—take away their body and potentially their soul, strip them down to coding—even if it were voluntary? Could such action be considered adulterating or desecrating human life?

Personally, I don’t think I would agree to have myself uploaded to a computer. I like being human in the fullest way; I wouldn’t want to lose my body or soul. And if it were a loved one...I suppose if they really wanted to do it, I would support their decision, but I certainly wouldn’t push if they were not willing themselves.

So now, after all this, I return you to my title question. Would you?



Tuesday, February 18, 2020

Classified: Sensitive Information Regarding AI


 This report has been created by someone who closely follows the studies of the Illuminae Group, writing in an effort to analyze the implications of the events they address.

Briefing note:
Before I dive too deeply into the complex issues addressed in this report, I highly recommend taking AIDAN’s lead (p. 304) and listening to Mozart’s Requiem in D Minor while reading this report or the accompanying section in the Illuminae account. Nothing gets you in the zone of psychotic robots, zombie-like infected, and inescapable doom and death quite like “Dies Irae.”

Focus of Analysis:
Briefly describe AIDAN's unique abilities/features/"personality" as an Artificial Intelligence (up to pg 344). What is significant about AIDAN? What critical, ethical problems arise for the characters (and us as readers) as a result of AIDAN and AIDAN's actions?


At the beginning, AIDAN seems to be just as flat a character as an AI should be—no emotion, no real personality. It doesn’t even have a particular kind of voice; its voice is described by Ezra as “sexless,” with “perfect tone and inflection and pronunciation” and without any particular age or accent (p. 45). However, it soon becomes clear that the damage AIDAN sustained during the initial battle has somewhat altered its personality. First, AIDAN begins to act without orders. Then, after it is awakened from the comatose nap of its shutdown, the rest of its new personality is revealed (starting on page 264).

AIDAN’s personality changes can mostly be summarized by one assessment: AIDAN seems much more human. First, its language becomes more descriptive and poetic, with phrases like “A strand of spider silk. Fragile as spun sugar” (p. 279). Then the more concerning traits appear. AIDAN is increasingly described as “insane” by several people (Ezra p. 137, Kady p. 241, Torrence p. 304, Boll p. 326), and the further you read, the more inclined you are to agree with their assessment. Logic plays a smaller and smaller part in AIDAN’s actions. Specifically, its act of taking over the Alexander and slaughtering the command crew seems more vengeful than logical for the good of the fleet.

One of the most significant things about AIDAN is the level of power it has. It can control a whole ship, for space’s sake! AIDAN’s power becomes terrifyingly evident when it takes over the Alexander, blocks the humans’ attempts at control, and releases the infected people, directing them toward the ship’s command crew (p. 294-306).

The other significant thing about AIDAN that makes it so dangerous is its talent in machine learning. Multiple times AIDAN learns from previous occurrences—the most important instance is when AIDAN decides that it needs to act against the humans before they attempt to shut it down again (p. 292-294).

The changes in AIDAN’s personality, combined with its tremendous power and learning capabilities, create incredible issues. Aside from the obvious horror of the death toll, one problem that stands out is a deeper ethical one. The Alexander’s command crew repeatedly makes unethical decisions in an effort to conceal their AI trouble. They often lie to the passengers and crew, and they even go so far as to execute some of the pilots who disobeyed the AI in an effort to do the right thing (p. 66, 91-97).

These ethical issues lead to many questions for readers, the main one being “How far can humans go to protect a secret before they end up causing more harm than the secret’s release would?” That’s a hard question to answer, although I would argue that in this case the command crew definitely went too far, since they caused the loss of many lives despite their hope of preserving peace.

Another question raised by the issues addressed in this novel is “To what extent should AI be involved in, and in control of, the technology of our lives?” AI can be incredibly helpful and can do things humans can’t at speeds we can only dream of, but if something goes wrong . . . you could end up with barely functioning tech that your life depends on, or tech that is effectively rebelling against you. This is another question that is tricky to answer, and a middle ground is hard to find. But we’ll have to address it at some point.


Briefing Note:
*Subject matter: Illuminae, by Amie Kaufman and Jay Kristoff

Monday, February 3, 2020

Robot Logic


       What role does logical thinking play in the fiction we have read about robots/artificial intelligence? How prepared do you think ordinary people are to use logic effectively to live/work with AI? What should we do about this?

The Three Laws of Robotics
1)    A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2)    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3)    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
–Isaac Asimov, “Runaround”

Logic is a core aspect of robot functionality; robots’ responses to events are typically portrayed as operating under a sense of programmed logic. Given its importance, logic plays a large role in fiction that explores the potential of life with robots. Isaac Asimov’s Three Laws of Robotics, first presented in “Runaround,” are the idealized rules that most prominently govern a robot’s sense of logic. The Three Laws are numbered in order of importance and are weighted accordingly in the robot. In “Runaround,” the main conflict results from Speedy the robot’s response to a logical imbalance among these laws. While following Donovan’s order to retrieve selenium from a pool near their location on Mercury, Speedy encounters a chemical dangerous to his mechanics. Donovan’s weakly worded command is not quite strong enough to outweigh the expensive robot’s heightened tendency toward self-preservation, so Speedy circles the pool, stuck in a logical rut. The only way the men are able to snap him out of his daze is to use logic the way he would: they put a human in danger and use the First Law to outweigh the other two, effectively drawing Speedy toward them and out of danger.
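
To make that balancing act a little more concrete, here is a minimal sketch of how the competing “potentials” might be modeled in code. This is my own illustration, not anything from Asimov’s text; the weights, threshold, and function name are invented purely for the example.

# Hypothetical model of Speedy's dilemma in "Runaround": each Law contributes a
# "potential," and the robot follows whichever drive is strongest. The numbers
# below are illustrative only, not anything specified by Asimov.

def choose_action(human_in_danger: bool, order_strength: float, hazard_level: float) -> str:
    """Return the drive the robot acts on, given competing Law potentials."""
    # First Law dominates everything whenever a human is at risk.
    rule1 = 100.0 if human_in_danger else 0.0
    # Second Law: obedience, scaled by how strongly the order was phrased.
    rule2 = 10.0 * order_strength
    # Third Law: self-preservation, strengthened in an expensive robot and
    # scaled by how dangerous the environment appears.
    rule3 = 10.0 * hazard_level * 1.5  # 1.5 models Speedy's heightened self-preservation

    # If obedience and self-preservation balance almost exactly (and no human is
    # in danger), the robot is stuck in a "logical rut" and circles the hazard.
    if not human_in_danger and abs(rule2 - rule3) < 1.0:
        return "circle the selenium pool (equilibrium)"

    drives = {"protect the human": rule1, "obey the order": rule2, "preserve itself": rule3}
    return max(drives, key=drives.get)

# A casually worded order against a real hazard: Speedy gets stuck.
print(choose_action(human_in_danger=False, order_strength=1.5, hazard_level=1.0))
# The men put a human in danger: the First Law outweighs the other two and breaks the tie.
print(choose_action(human_in_danger=True, order_strength=1.5, hazard_level=1.0))

The point is simply that when two drives balance almost exactly, a purely rule-based agent has no way to break the tie on its own; changing the inputs, by adding human danger, is what finally resolves Speedy’s loop.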
Similarly, in a more recent piece by Andrea Phillips entitled “Three Laws,” the main conflict also results from robot logic. In this case, however, the logic is only flawed in the eyes of humans. The humans suspect some sort of malfunction when the robot, Iris, kills her employer, seemingly violating the First Law. When Iris explains her actions, though, they are completely logical and still in line with the Laws. She did not violate the First Law (which in this story narrows the original “a human being” to the robot’s “owner”) because Mr. Won was not her owner; she is owned by the company’s shareholders, and she killed Mr. Won to protect their interests. In this situation, thinking with the same logic as the robot would help people prevent further mishaps in the future.
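
As a small companion illustration (again my own sketch, not anything from Phillips’s story), the whole disagreement comes down to who the First Law’s protection clause actually covers:

# Illustrative sketch contrasting the standard First Law with the owner-scoped
# version Iris operates under in "Three Laws." Names and values are invented.

def standard_first_law_blocks(person: str) -> bool:
    # Standard wording: harming any human being is forbidden.
    return True

def owner_scoped_first_law_blocks(person: str, owners: set) -> bool:
    # Iris's wording: only the robot's owner(s) are protected.
    return person in owners

owners = {"the shareholders"}  # Iris is owned by the company's shareholders, not by Mr. Won

print(standard_first_law_blocks("Mr. Won"))              # True: the killing would be forbidden
print(owner_scoped_first_law_blocks("Mr. Won", owners))  # False: no violation, by Iris's logic

Read this way, Iris’s action isn’t a malfunction at all; it’s the predictable output of a rule whose scope the humans misunderstood.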
            In terms of current reality, I doubt that humans are prepared enough to use logic effectively with robots. Some people are closer than others since some naturally think more logically in everyday life. For the most part, however, humans would never be as logical as robots. Emotions and gut reactions are so deeply rooted in human nature that we frequently use those in determining our actions. I think it would be difficult for many people to override that natural tendency and translate every action and response into logic. In order to efficiently (and safely) live and work with robots, people would need to be trained to temporarily dampen their feelings and focus more on honing their logical thoughts.