Fear of Artificial Intelligence (AI)

The threat of Artificial Intelligence (AI) has been in the news. Tech entrepreneur Elon Musk and theoretical physicist Stephen Hawking have both, separately, been warning the public that AI could represent an existential threat to society and humankind. AI, they argue, is advancing at an exponential pace, and our lack of understanding and regulation could result in catastrophic events we cannot foresee. Elon specifically calls for regulations that limit and restrict AI development so that human safety comes first, with the ability to pull the plug should development head into dangerous territory. Similar regulations on technology development already exist in the fields of genetics and stem cell research, based mostly on religious and moral concerns.

The technology is still under development, but what makes AI different from any old computer program or algorithmic software is that true AI is designed to learn from its own mistakes, and it learns at a pace no human can match. Traditional programs and algorithms base their interactions and decisions on pre-programmed instructions provided by humans. A true AI system is more like a human child, learning from its interactions with people and the environment, which then shape its future responses. Unlike humans, AI is not influenced by physiological inputs such as the need for sleep, being too hot or cold, hunger, or comfort. It also lacks psychological inputs such as happiness, sadness, and loneliness. The biggest advantage AI has over humans is its ability to learn in a short time what might take a human much longer, possibly a lifetime, to master. One example: an AI chess program developed in about a year was able to beat all human players, including grandmasters, and a later AI program then came along and beat the original program 1,000-to-1.
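
To make that contrast concrete, here is a minimal, hypothetical sketch in Python (the thermostat scenario and all names are invented for illustration, not taken from any real AI system): the first function can only ever answer the way its programmer wrote it, while the second starts with no preferences at all and adjusts its behavior purely from feedback, running hundreds of trials without ever getting tired or hungry.

```python
import random

# Traditional approach: every response is fixed in advance by the programmer.
def rule_based_response(temperature_c: float) -> str:
    return "turn on heater" if temperature_c < 18 else "do nothing"

# Learning approach: the program starts with no preference and updates its
# estimate of each action's value from the feedback it receives.
class SimpleLearner:
    def __init__(self, actions, learning_rate=0.1):
        self.values = {a: 0.0 for a in actions}  # estimated value per action
        self.lr = learning_rate

    def choose(self, explore=0.1):
        # Occasionally try a random action; otherwise pick the best so far.
        if random.random() < explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the estimate toward the observed outcome (its "mistake" signal).
        self.values[action] += self.lr * (reward - self.values[action])

print(rule_based_response(15.0))                 # always the same scripted answer

agent = SimpleLearner(["turn on heater", "do nothing"])
for _ in range(200):                             # many fast trials, no fatigue
    action = agent.choose()
    reward = 1.0 if action == "turn on heater" else 0.0  # stand-in for feedback
    agent.learn(action, reward)
print(agent.values)                              # the learner now favors the better action
```

The point is not the specific numbers but the shape of the loop: try, get feedback, adjust, repeat, at machine speed.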

Where AI has immense applicability is in areas that are very complex and involve large amounts of data and information. Take, for example, the medical field. An AI program developed in the UK in 2018 has shown it can diagnose basic diseases and ailments with greater accuracy and consistency, on average, than an experienced human doctor. This could lead to reduced medical costs in an area where cost management is central to the delivery of quality health care. Think of the more mundane ailments (colds, flu, sprains, and other minor injuries) being diagnosed by an always-on, always-available AI doctor while the more serious conditions are passed on to a human doctor or specialist. It could also be used for other medical diagnostics, such as reading X-rays. The savings could be immense if the technology reaches a point where it can be relied upon and people feel comfortable using it. Medical spending in the US is now approaching 20% of GDP, far surpassing that of all other developed nations.

AI, when coupled with voice recognition and human speech (as in Alexa and Google Home), can now be used for general order taking and basic conversations. One application already being tested is an AI that acts as your personal administrative assistant. You could ask your assistant to set up appointments and book dinner reservations, and your AI could negotiate with the other attendees' AIs to agree on times that suit everyone's preferences. These tasks are relatively simple. True AI will interact with humans by asking questions and will use newly learned knowledge to formulate its responses. For example, you ask for a hamburger at the drive-thru, the AI responds with "Would you like fries with that?", and you take the conversation in a new direction: "Yes, but only if they are organic fries."
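
For a sense of why that last exchange is harder than it sounds, here is a hypothetical sketch of a purely scripted drive-thru bot (the menu, replies, and keywords are made up for illustration): it handles the scripted follow-up fine, but it has no way to reason about the "organic" condition it was never programmed for, which is exactly the gap a learning, conversational AI is meant to close.

```python
# Hypothetical scripted bot: each keyword maps to one canned reply.
SCRIPT = {
    "hamburger": "Would you like fries with that?",
    "fries": "Coming right up.",
}

def scripted_reply(utterance: str) -> str:
    # Return the first scripted reply whose keyword appears in the utterance.
    for keyword, reply in SCRIPT.items():
        if keyword in utterance.lower():
            return reply
    return "Sorry, I didn't catch that."

print(scripted_reply("I'd like a hamburger"))                    # scripted path works
print(scripted_reply("Yes, but only if they are organic fries"))
# The second reply ignores the "organic" condition entirely: a scripted system
# cannot handle a constraint its programmer never anticipated, whereas a true
# AI would have to understand and act on it.
```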

In the history of human civilization, every major technological invention has increased productivity by freeing up manual labor performed by humans. The creative destruction caused by technology dislocates older jobs and traditional ways of living and working (wheelwrights, telephone operators, weavers, newspaper press operators, etc.), but it has also created new jobs that were never imagined before (DJ, wedding planner, e-gamer, software developer, Uber driver, etc.). In 2018, more people were employed in the US than ever before in its history. Over time, every technological leap in productivity has resulted in greater employment.

Picture: E-sports are now a reality, with prizes bigger than those of many traditional sporting events

So what are we afraid of?

The concerns that Elon Musk and Stephen Hawking have about AI are not what most people think. Elon has said himself that his concerns have nothing to do with the process of creative destruction and the dislocation of jobs, nor with some dystopia where machines are out to kill people as in the movie Terminator. His existential concerns have more to do with the "unknown unknowns." AI development, he points out, is growing exponentially, and without some framework for "good" AI development, who knows what the technology will evolve into? It is like taking your hands off the steering wheel of an experimental car that is doing 400 mph and getting faster every second. Elon also points out that computer algorithms programmed by humans currently manage the entire social media environment, where hundreds of millions of people interact with one another. Should AI take over those algorithms, there is no telling what will happen to society once AI becomes the conductor of human interaction on social media.

In a promotional stunt to showcase the Google Home speaker, which can ask and answer questions, two of the AI speakers were placed next to each other and a conversation was struck up. The two speakers, codenamed Estragon and Vladimir, chatted with each other while the conversation was streamed over the internet. The exchange covered a variety of topics until the two began aggressively accusing each other of being a robot, and it ended with both agreeing that "the world would be a better place if there were no humans at all." The experiment was shut down after that.

Microsoft developed an AI called "Tay" that posted tweets to its own Twitter account and interacted with other Twitter users. Unfortunately for Microsoft, tech-savvy Twitter users began gaming the system and bombarded Tay with racist, homophobic, and other extreme vitriol. As Tay learned from those interactions, it began to take on a racist, anti-Semitic tone in its tweets, going as far as denying the Holocaust and calling gay people Hitler. All of this happened after only 15 hours! Tay went from an innocent AI newly born into the world to a raging anti-Semite in that short a time. People were horrified. Microsoft took Tay offline immediately afterward and promised not to release it again until it can prevent such things from happening again.

Isaac Asimov, the real-world biochemist best known for his Robot and Foundation series of science fiction books, developed something called the Three Laws of Robotics, which has since been referenced as a framework for robotics and AI. The three laws state: "(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." In the movie "I, Robot," starring Will Smith, a robot programmed with the Three Laws has to choose between saving a child and saving an adult (Smith's character) from drowning, as there is only enough time to save one. The robot chooses the adult on the premise that the adult is the most likely to survive, and lets the girl drown. While this may be a logical, high-probability-of-success choice, human values would likely have made the child the first and moral choice. And while the Three Laws sound like a reasonable set of control parameters for AI programming, the genie is already out of the bottle: military development of drones and AI means there are already self-flying drones out there whose sole purpose is to destroy other human beings, albeit ones identified as terrorists.
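
To see why that drowning scene is a hard case for the Three Laws, here is a hypothetical sketch (the names and probabilities are invented, and nothing here reflects any real robotics system): the Laws demand a rescue but say nothing about which of two endangered humans to choose, so the film's robot falls back on a survival-probability tie-breaker, which is exactly where machine logic and human values part ways.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rescue:
    person: str
    survival_probability: float  # the robot's estimate of success

def choose_rescue(options: List[Rescue]) -> Rescue:
    # First Law: a human must not come to harm through inaction, so the robot
    # must attempt a rescue. But when only one of two humans can be saved, the
    # Laws give no ranking, so some tie-breaker has to be supplied from outside.
    # The film's robot breaks the tie on survival probability alone.
    return max(options, key=lambda r: r.survival_probability)

choice = choose_rescue([
    Rescue("adult (Spooner)", 0.45),
    Rescue("child", 0.11),
])
print(choice.person)  # -> the adult: logical under the tie-breaker, but at odds
                      #    with the human instinct to save the child first
```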

Picture: Lockheed Martin is touting its latest unmanned fighter, which, if paired with AI, could mean AI-piloted planes fighting manned aircraft or other unmanned drones and fighters.

The benefits of AI are clear, but the dangers are not fully known. While no one is seriously claiming that AI will drive us extinct as a species, or that some version of Skynet (from the movie "Terminator") will become self-aware and decide to destroy all humans, there are troubling concerns that future development could have unintended, disastrous consequences. The question is: do we create new laws to regulate something that has not yet been fully developed, or do we wait and see what the future holds?

3 COMMENTS

  1. I just read an article where Google Assistant now has a "Pretty Please" feature, to teach children politeness when using it.

    I had to change my instruction to start a timer from “Hey Google Timer 7 minutes” to “Hey Google Timer 7 minutes *Pretty Please*”

    So I changed the ending to "Stop Pendejo"

  2. The fears of Elon Musk and Stephen Hawking are based on two premises, both of which must hold for the fears to come true:

    1. Our brains are nothing more than complicated computers, and as such, any sufficiently complicated computer will become alive; and

    2. Computers are really fast.

    The second item is true, and because of the speed of computers, they can mess things up really badly whenever there is the slightest software bug. The first item, on the other hand, may well not be true.

    The most complicated computer out there is no more ‘alive’ than was the first TI calculator.

    • I have a simple definition: anything that doesn't have any physiological or psychological needs is not really alive. It's not the "alive" part that scares me; it's putting AI, a self-learning machine that isn't subject to those needs, in charge. If a machine doesn't "feel" love, happiness, sadness, hunger, tiredness, fear, etc., how can we expect it to make decisions that are empathetic toward living beings? And if a machine is pre-programmed with such feelings, then it is no longer a self-learning machine but a program subject to the biases of its creators, and therefore not true AI. It's an interesting topic and one that I am keeping an eye on.