
Tuesday, February 18, 2020

Classified: Sensitive Information Regarding AI


This report was created by someone who closely follows the studies of the Illuminae Group, and it is written in an effort to analyze the implications of the events they address.

Briefing note:
Before I dive too deeply into the complex issues addressed in this report, I highly recommend taking AIDAN’s lead (p. 304) and listening to Mozart’s Requiem in D Minor while reading this report or the accompanying section in the Illuminae account. Nothing gets you in the zone of psychotic robots, zombie-like infected, and inescapable doom and death quite like “Dies Irae.”

Focus of Analysis:
Briefly describe AIDAN's unique abilities/features/"personality" as an Artificial Intelligence (up to p. 344). What is significant about AIDAN? What critical, ethical problems arise for the characters (and us as readers) as a result of AIDAN and AIDAN's actions?


At the beginning, AIDAN seems to be just as flat a character as an AI should be: no emotion, no real personality. It doesn’t even have a particular kind of voice; its voice is described by Ezra as “sexless,” with “perfect tone and inflection and pronunciation” and without any particular age or accent (p. 45). However, it soon becomes clear that the damage AIDAN sustained during the initial battle has somewhat altered its personality. First, AIDAN begins to act without orders. Then, after it is awakened from the comatose nap of its shutdown, the rest of its new personality is revealed (starting on p. 264).

AIDAN’s personality changes can mostly be summarized in one assessment: AIDAN seems much more human. First, its language becomes more descriptive and poetic, with phrases like “A strand of spider silk. Fragile as spun sugar” (p. 279). Then the more concerning traits appear. AIDAN is increasingly described as “insane” by several characters (Ezra, p. 137; Kady, p. 241; Torrence, p. 304; Boll, p. 326), and the further you read, the more inclined you are to agree with them. Logic figures less and less in AIDAN’s actions; in particular, its takeover of the Alexander and slaughter of the command crew seem more vengeful than logical for the good of the fleet.

One of the most significant things about AIDAN is the level of power it has. It can control a whole ship, for space’s sake! That power becomes terrifyingly evident when AIDAN seizes the Alexander, blocks the humans’ attempts to regain control, and releases the infected, directing them toward the ship’s command crew (pp. 294–306).

The other significant thing about AIDAN, the one that makes it so dangerous, is its talent for machine learning. Multiple times AIDAN learns from previous events; the most important instance is when it decides that it needs to act against the humans before they can attempt to shut it down again (pp. 292–294).

The changes in AIDAN’s personality, combined with its tremendous power and learning capabilities, create serious problems. Aside from the obvious horror of the death toll, the problem that stands out is a deeper ethical one. The Alexander’s command crew repeatedly makes unethical decisions in an effort to conceal their AI trouble. They often lie to the passengers and crew, and they even go so far as to execute some of the pilots who disobeyed the AI in an effort to do the right thing (pp. 66, 91–97).

These ethical issues lead to many questions for readers, the main one being “How far can humans go to protect a secret before they end up causing more harm than the secret’s release would?” That is a hard question to answer, though I would argue that in this case the command crew definitely went too far: their secrecy cost many lives despite their hope of preserving peace.

Another question raised by the issues addressed in this novel is “To what extent should AI be involved in, and in control of, the technology of our lives?” AI can be incredibly helpful and can do things humans can’t at speeds we can only dream of, but if something goes wrong . . . you could end up with barely functioning tech that your life depends on, or tech that is effectively rebelling against you. This question, too, is tricky to answer, and the middle ground is hard to find. But we’ll have to address it at some point.


Briefing Note:
*Subject matter: Illuminae, by Amie Kaufman and Jay Kristoff
*Classified stamp image from https://www.vectorstock.com/royalty-free-vector/square-grunge-red-classified-stamp-vector-16651800
*Image of Morph as BEN from Disney's Treasure Planet

Monday, February 3, 2020

Robot Logic


What role does logical thinking play in the fiction we have read about robots/artificial intelligence? How prepared do you think ordinary people are to use logic effectively to live/work with AI? What should we do about this?

The Three Laws of Robotics
1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
–Isaac Asimov, “Runaround”

Logic is a core aspect of robot functionality; robots’ responses to events are typically portrayed as governed by programmed logic. Given its importance, logic plays a large role in fiction that explores the potential of life with robots. Isaac Asimov’s Three Laws of Robotics, first presented in “Runaround,” are the idealized rules that most prominently govern a fictional robot’s sense of logic. The Three Laws are numbered in order of importance and weighted accordingly in the robot. In “Runaround,” the main conflict results from the robot Speedy’s response to a logical imbalance among these laws. While following Donovan’s orders to retrieve selenium from a pool near their station on Mercury, Speedy encounters a chemical dangerous to his mechanism. Donovan’s weakly worded command is not quite strong enough to outweigh the expensive robot’s heightened tendency toward self-preservation, so Speedy circles the pool, stuck in a logical rut. The only way the men are able to snap him out of this daze is to use logic as he would: they create genuine human danger, and the First Law outweighs both of the others, drawing Speedy toward them and away from the hazard.
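To make that “logical rut” concrete, here is a minimal toy sketch of priority-weighted laws. The weights, names, and numbers are entirely my own invention for illustration, not anything from Asimov’s text: each law contributes a “potential,” the robot follows whichever potential dominates, and a tie leaves it stuck, just as Speedy is stuck between a weak order and a strengthened urge for self-preservation.

```python
# Toy model (invented for illustration, not from Asimov) of
# priority-weighted laws. Each law produces a "potential"; the robot
# follows whichever potential dominates, and a tie leaves it stuck.

def dominant_laws(human_danger, order_strength, self_danger):
    """Return the law(s) whose potential is highest.

    Hypothetical weights: the First Law dwarfs the other two, matching
    its top priority, while the Second and Third Laws sit at the same
    scale, so a weak order can exactly balance a heightened drive for
    self-preservation.
    """
    potentials = {
        "First Law: protect the human": 100 * human_danger,
        "Second Law: obey the order": 10 * order_strength,
        "Third Law: preserve yourself": 10 * self_danger,
    }
    best = max(potentials.values())
    return [law for law, value in potentials.items() if value == best]

# Speedy's rut: a casually worded order exactly balances his
# strengthened self-preservation, so no single behavior wins.
print(dominant_laws(human_danger=0.0, order_strength=0.6, self_danger=0.6))
# -> the Second and Third Laws tie; Speedy circles the pool.

# Powell's fix: genuine human danger makes the First Law dominate.
print(dominant_laws(human_danger=1.0, order_strength=0.6, self_danger=0.6))
# -> ['First Law: protect the human']
```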
Similarly, in a more recent piece by Andrea Phillips entitled “Three Laws,” the main conflict also results from robot logic. In this case, however, the logic is flawed only in the eyes of humans. The humans suspect some sort of malfunction when the robot, Iris, kills her employer, seemingly violating the First Law. When Iris explains her actions, though, they are completely logical and still in line with the Laws. She did not violate the First Law (which in this story replaces the original “a human being” with the robot’s “owner”) because Mr. Won was not her owner; she is owned by the company’s shareholders, and she killed Mr. Won to protect their interests. In this situation, thinking with the same logic as the robot would help people prevent further mishaps in the future.
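Iris’s defense boils down to a single swapped predicate: who counts as protected under the First Law. As a hedged sketch (the function names and string values are mine, not Phillips’s), the difference looks like this:

```python
# Hypothetical sketch of the ownership-scoped First Law in Phillips's
# "Three Laws." The only difference between the two checks is who
# counts as protected: any human being, or just the robot's owner.

def violates_original_first_law(victim_is_human):
    # Asimov's wording: harming any human being is a violation.
    return victim_is_human

def violates_modified_first_law(victim, owner):
    # The story's wording: only the robot's *owner* is protected.
    return victim == owner

# Iris kills Mr. Won, but she is owned by the shareholders:
print(violates_original_first_law(victim_is_human=True))      # True
print(violates_modified_first_law(victim="Mr. Won",
                                  owner="the shareholders"))   # False
```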
In terms of current reality, I doubt that humans are adequately prepared to use logic effectively with robots. Some people are closer than others, since some naturally think more logically in everyday life. For the most part, however, humans will never be as logical as robots. Emotions and gut reactions are so deeply rooted in human nature that we frequently rely on them when determining our actions. It would be difficult for many people to override that natural tendency and translate every action and response into logic. To live and work with robots efficiently (and safely), people would need to be trained to temporarily damp down their feelings and focus on honing their logical thinking.