The International and Multi-Regional Membership Center of Latinos in the USA
Welcome to you all!
Should We Fear Artificial Intelligence?
PuraVidaCommunity
If you want to offer us your opinion, click on the button labeled My Opinion. Thank you.
Introduction
Before the term Artificial Intelligence was coined around 1955, people had discussed the nature of intelligence for decades. The central matter was understanding what intelligence is. To offer a definition of intelligence, one must first establish some rules or guidelines that bring clarity and avoid ambiguity within the definition itself. That subject, part of what is called scientific methodology, is vast and outside the scope of this article; however, to establish a point of reference, here is a scientific definition of what intelligence is.
Intelligence is the capability of performing an appropriate action in response to an unpredictable stimulus.
Based on this definition, we can find intelligence all around planet Earth. If you bring a flame close to an ant, for example, the ant will change direction to escape from it. Nothing predicted that you would light a match and bring it near the ant; the stimulus is completely unpredictable. Moving away from the flame and avoiding destruction is an appropriate action in response to that stimulus, so according to the definition, the ant displayed intelligence. Note that the definition offers no measurement criteria and establishes no degrees of intelligence; by this definition, an ant simply has intelligence.
Moving away from the animal realm and turning our attention to everyday life, some valid questions about our personal interaction with electronic devices are the following:
What qualifies a computer to be an intelligent device?
Better yet, is your computer intelligent?
Is your iPhone or your Android phone an intelligent device?
BY SAMUEL C. BAXTER AND DAVID J. LITAVSKY
MARCH 1, 2017
People worry that smart gadgets and similar technology will develop into super-intelligent, out-of-control machines that subjugate the world. The
answer to whether you should be concerned about this reveals a fundamental—yet little known—fact about your mind.
Checking the weather is an integral part of most morning routines. Yet you no longer need to look out the window. Instead, you simply speak into your
phone or other similar device.
“Hey, Siri, do I need my umbrella?”
“Alexa, will it be hot this afternoon?”
“OK Google, how much snow will we get?”
“Hey, Cortana, should I wear a jacket?”
Take the last question. Your smart device responds, “You may consider wearing a light jacket, as it is 43 degrees Fahrenheit with a possibility of light rain
showers.”
You have just had a conversation with a budding artificial intelligence (AI). If you have not already, get used to asking a device what to do, as it may someday learn to push your buttons, tell you what to do, or even develop feelings against you.
At least that is what many of the leading minds in science and technology want us to think. Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak, Neil
deGrasse Tyson, and others fear AI may take over in coming years.
Ever since the term “artificial intelligence” was coined by American computer scientist John McCarthy in 1955, the idea that computers could learn to
listen, speak, think and feel emotions has permeated pop culture. Just think of the movies 2001: A Space Odyssey, The Terminator, and The Matrix.
Recent entries include Her, Ex Machina, and Avengers: Age of Ultron.
Although they do not live up to their fictional counterparts, current AI advancements are impressive.
It drives us around: Tesla Motors’ autopilot technology is close to providing full autonomy, which would allow a vehicle to take over completely for the driver.
It serves as our financial advisor: online chatbots provide support for credit card and banking customers.
It reports the news: media outlets such as the Associated Press, Fox, and Yahoo! use computer programs to write simple financial reports, summaries, and sports and news recaps.
Companies are also developing AI applications that provide in-depth responses to “non-factoid” questions, such as relationship advice. Some programmers are even including synthetic emotions to better connect with users.
What worries people most is that some computers think on a completely different level than humans. Only recently has an AI been able to beat the best players at the ancient Chinese game of Go. Think of it as chess on steroids: chess offers about 20 possible moves per turn, while Go offers about 200.
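To see why that difference matters, here is a minimal back-of-the-envelope sketch in Python, assuming a constant number of legal moves per turn (a simplification, not taken from the article), showing how quickly the number of possible game continuations grows:

    # Rough comparison of game-tree growth in chess and Go, assuming a
    # constant branching factor per turn (a simplification for illustration).
    CHESS_MOVES_PER_TURN = 20   # approximate figure cited above
    GO_MOVES_PER_TURN = 200     # approximate figure cited above

    for depth in (2, 4, 6):
        chess_lines = CHESS_MOVES_PER_TURN ** depth
        go_lines = GO_MOVES_PER_TURN ** depth
        print(f"after {depth} moves: chess ~{chess_lines:,} lines, Go ~{go_lines:,} lines")

After only six moves the Go tree is already about a million times larger than the chess tree, which is why Go programs cannot rely on brute-force enumeration the way chess engines largely can.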
The AI consistently uses moves that, at first, seem to be errors to top players. The decisions by the computer
challenge knowledge passed down over centuries about how to play the game—yet turn out to be winning
tactics.
Machines able to outthink human beings appear to be a double-edged sword. While they can help us see things in
a new light—and make giant leaps in industry, science and technology—what happens if they begin to think for
themselves?
In a short documentary titled “The Turing Test: Microsoft’s Artificial Intelligence Meltdown,” by Journeyman
Pictures, a robot AI modeled on sci-fi writer Philip K. Dick provided a humorous, but telling, answer. It was asked,
“Do you think robots will take over the world?”
After pausing as if to think, the humanoid responded: “You all got the big questions today. But you’re my friend, and I will remember my friends, and I
will be good to you. So don’t worry. Even if I evolve into Terminator, I will still be nice to you. I will keep you warm and safe in my people zoo, where I can
watch you for old time’s sake.”
The exchange gave the developers a good laugh, to which the robo-author responded with a smile. Yet it summed up the fears many have about the
future of AI.
Rise of the Humanoids
Experts on artificial intelligence believe the next generation of AI will be adaptive, self-learning, intuitive
and able to change its own programming rules. They speak of a time when machines will exceed the
intelligence of human beings—a moment defined as “singularity”—which experts believe could take
place by 2035 or soon thereafter.
This could mean a brighter future for mankind. In fact, super AI may be a necessity because of the
explosion of man’s knowledge. But these advancements are a double-edged sword.
According to The Observer: “Human-based processing will be simply inefficient when faced with the
massive amounts of data we’re acquiring each day. In the past, machines were used in some industries to
complete small tasks within a workflow. Now the script has flipped: Machines are doing almost
everything, and humans are filling in the gaps. Interestingly, tasks performed by autonomous machines
require the types of decision-making ability and contextual knowledge that just a decade ago only human
beings possessed.”
“In the near future, AI-controlled autonomous unconscious systems may replace our current personal
human engagements and contributions at work. The possibility of a ‘jobless future’…might not be so far-fetched.”
While critics see robot minds taking jobs from humans as a negative, others feel it would allow workers to focus on greater pursuits.
The author of 2001: A Space Odyssey, Arthur C. Clarke, wrote this in the 1960s: “In the day-after-tomorrow society there will be no place for anyone as
ignorant as the average mid-twentieth-century college graduate. If it seems an impossible goal to bring the whole population of the planet up to
superuniversity levels, remember that a few centuries ago it would have seemed equally unthinkable that everybody would be able to read. Today we have
to set our sights much higher, and it is not unrealistic to do so.”
A world where everyone could reach “superuniversity levels” seems appealing.
The flipside? A world where people have too much time on their hands would mean more time to delve into the darker facets of human nature.
Everywhere we turn in regard to AI, we run into similar gray areas and moral conundrums.
Uncharted Territory
Something as simple as self-driving cars creates difficult ethical
problems. If everyone had such automobiles, it would save 300,000
lives per decade in America. It would also mean the end of daily rush-hour traffic. Also, think of everything you could accomplish during
your morning commute if you did not have to focus on the road!
Yet who is to blame for decisions a machine makes during a crash?
For example, if a driverless car suddenly approaches a crowd of people
walking across its path, should the car be programmed to minimize
the loss of life, even at the risk of the car’s occupants? Or should it
protect the occupants at all costs, even if that means hurting others?
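To make the dilemma concrete, here is a deliberately simplified, hypothetical Python sketch of what “programming” such a choice could look like. Every class, policy name, and number below is invented for illustration; no real vehicle works this way.

    # Hypothetical illustration only: a toy policy for an unavoidable collision.
    # Real autonomous-driving software is vastly more complex; the names and
    # numbers here are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        occupant_injuries: int
        pedestrian_injuries: int

    def choose_outcome(outcomes, policy):
        if policy == "minimize_total_harm":
            # Weigh everyone equally and pick the least total harm.
            return min(outcomes, key=lambda o: o.occupant_injuries + o.pedestrian_injuries)
        if policy == "protect_occupants":
            # Protect the occupants first, pedestrians second.
            return min(outcomes, key=lambda o: (o.occupant_injuries, o.pedestrian_injuries))
        raise ValueError(f"unknown policy: {policy}")

    options = [
        Outcome("swerve into the barrier", occupant_injuries=2, pedestrian_injuries=0),
        Outcome("brake but stay on course", occupant_injuries=0, pedestrian_injuries=3),
    ]
    print(choose_outcome(options, "minimize_total_harm").description)  # swerve into the barrier
    print(choose_outcome(options, "protect_occupants").description)    # brake but stay on course

Whichever branch the programmer writes, the ethical choice is fixed long before any crash occurs, which is exactly the point made below about a programmed collision being “premeditated.”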
Fortune chimed in on the debate, quoting Chris Gerdes, chief
technology officer for the U.S. Department of Transportation: “Ninety-four percent of so-called last actions during an automotive collision are the result
of human judgment (read: errors), Gerdes said. ‘Self-driving cars have this promise of removing the human from that equation,’ he said. ‘That’s not
trivial.’
“The catch: With self-driving cars you’ve shifted the error from human drivers to human programmer, Gerdes said. Machine learning techniques can
improve the result, but they aren’t perfect.
“And then there are ethical concerns. If you program a collision, that means it’s premeditated, [Patrick Lin, director of the Ethics and Emerging Sciences
Group at California Polytechnic State University,] said. Is that even legal? ‘This is all untested law,’ he said.”
Others speculate on the darker side of a post-singularity future. What if AI begins to see human beings as the problem? What if it begins to act on self-interest? And what if those interests conflict with human interests, and it must remove us to complete a task?
Human Rights Watch issued a warning in February titled, “The Dangers of Killer Robots and the Need for a Preemptive Ban.” The report “detailed the
various dangers of creating weapons that could think for themselves” (International Business Times).
The organization also warned that “removing the human element of warfare raised serious moral issues,” such as “lack of empathy,” which would
“exacerbate unlawful and unnecessary violence” (ibid.).
“Runaway AI” is the term used to define the future moment when machines begin to develop themselves beyond the control of human beings. But how
could pieces, parts and electronics get to this point?
Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford, fleshed out a hypothetical example in his book
Superintelligence. He asks the reader to picture a machine programmed to create as many paper clips as possible.
Technology Review summarized: “Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create
new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.”
“No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides
to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented
raw-computing material (call it ‘computronium’) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the
entire earth is converted to computronium. Except for the million paper clips.”
Many do not see the threat, suggesting that we could pull the plug on these digital creatures should we begin to lose control of them.
Yet what if runaway AI causes machines to develop emotional responses and act in self-defense? Imagine if an entity more intelligent than us tapped into
the same emotions that drive humans to commit terrible crimes—lust, envy, hatred, jealousy and selfishness?
Or what if they learned to harness the full fund of knowledge and connectivity of the web, and began to reproduce?
A Slate article summarized such concerns as the fact that we fear AI “will act as humans act (which is to say violently, selfishly, emotionally, and at times
irrationally)—only it will have more capacity.”
Actions based on emotion and irrationality suggest sentience, that is, the capacity to feel, perceive or experience subjectively. This allows for a range of
human behavior, often labeled “human nature,” including acting violently and selfishly.
Therefore, to answer whether we should fear AI, we must answer another question: Is it possible for computers to gain human nature?
Thinking about this question, it is valid to pose the following questions:
What is human nature?
Where does human nature come from?
Recognize that human nature is unique. Note the difference between human nature and the nature of animals. Why does man indisputably have superior intellect, creative power, and complex emotions? Conversely, why do animals possess instinct, an innate ability to know what to do without any instruction or time to learn? These and other questions are fascinating; do you have some answers you would like to share?
March 20, 2017