Forum

Is A Robot Artificial Intelligence?

The Terminator, I, Robot and 2001: A Space Odyssey. What do these three successful science fiction movies have in common?

All three imagine what might happen in the future: an artificial intelligence (A.I.) uprising.

A.I. is a term used for a robot, big or small, tailored to personal needs, such as making financial decisions or planning events like family and professional get-togethers to suit guests’ needs.

The main conflict, whether we notice it or not, is that robots can gain more knowledge than they were initially programmed with, and they might use that knowledge for evil instead of good.

When analyzing why we worry about robots possibly going rogue, some people might think the responsibility falls on us, because we are practically playing God by creating a new breed of beings.

We programmed them to do whatever pleases our imagination; however, a fault in the system might give them the idea of turning on their creators and starting an era in which humankind is considered endangered.

Back in the early days of the field, around the 1980s, artificially intelligent systems were built to help people make quick decisions through deductive reasoning, functioning in a way similar to how a modern calculator works.
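
To picture what that 1980s-style deductive reasoning looked like, here is a minimal sketch of an “expert system” that chains hand-written if-then rules until no new conclusions follow. The rules, facts, and loan-approval scenario are invented purely for illustration and are not drawn from any real product.

```python
# Minimal sketch of forward chaining over if-then rules, the kind of deductive
# reasoning used by 1980s expert systems. All rules and facts are hypothetical.

RULES = [
    # (facts required, conclusion drawn)
    ({"income_stable", "low_debt"}, "good_credit_risk"),
    ({"good_credit_risk", "down_payment_saved"}, "approve_loan"),
]

def forward_chain(facts):
    """Keep applying rules until no new conclusion can be added."""
    changed = True
    while changed:
        changed = False
        for required, conclusion in RULES:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"income_stable", "low_debt", "down_payment_saved"}))
# The program deduces 'good_credit_risk' and then 'approve_loan'.
```

Like a calculator, such a program only grinds mechanically through rules a person wrote; it cannot learn anything it was not explicitly given.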

Today, developers have gone further, giving A.I. robots the ability to know names and personal interests. As mentioned before, there is a possibility that someone or something could hack into such a system and reprogram it, with unspeakable consequences.

Take International Business Machines’ Watson, for instance. Watson is an analytic A.I. system capable of searching the internet and finding an answer in the snap of a finger. Basically, Watson is an enhanced version of Google. However, there was something IBM forgot to install on the system: a filter that blocks out foul language. So now Watson not only knows every answer but also every “dirty” word on the Internet. For some it might be hilarious to hear an A.I. robot talk dirty, but for me and some computer engineers it is concerning.
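
The missing piece the Watson story points to is not exotic: at its crudest, a foul-language filter is just a check of the system’s output against a blocklist before anything reaches the user. The word list and function below are hypothetical stand-ins sketched only to show the idea; a real filter would have to handle context, misspellings and many languages.

```python
# Crude sketch of an output filter of the kind the article says Watson lacked.
# The blocklist words are harmless stand-ins for actual foul language.

BLOCKLIST = {"darn", "heck"}

def filter_response(text):
    """Replace any blocklisted word in a response with asterisks."""
    cleaned = []
    for word in text.split():
        if word.strip(".,!?").lower() in BLOCKLIST:
            cleaned.append("*" * len(word))
        else:
            cleaned.append(word)
    return " ".join(cleaned)

print(filter_response("Well, darn, that answer was wrong."))
# -> Well, ***** that answer was wrong.
```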

We might fear an uprising because of the uncertainty over whether programmers have established a limit on how much these systems can “learn” and whether a fail-safe switch is installed. If they learn too much, whether Internet slang or an illegally downloaded blueprint of a government prototype weapon, they could pose a major threat to society, and that is not what the future should look like.
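
What a learning limit and a fail-safe switch might look like, in the simplest possible terms, is sketched below. The class, the cap and the emergency stop are invented for illustration only; no real A.I. system is governed by anything this simple.

```python
# Toy sketch of the two safeguards mentioned above: a cap on how much new
# material a system may ingest, and a fail-safe switch that halts learning.
# Everything here is a hypothetical illustration.

class GuardedLearner:
    def __init__(self, max_items=1000):
        self.max_items = max_items  # hard limit on how much can be "learned"
        self.knowledge = []         # what the system has ingested so far
        self.halted = False         # the fail-safe switch

    def learn(self, item):
        if self.halted:
            raise RuntimeError("Fail-safe engaged: learning is disabled.")
        if len(self.knowledge) >= self.max_items:
            raise RuntimeError("Learning limit reached: item rejected.")
        self.knowledge.append(item)

    def emergency_stop(self):
        """A human operator flips the switch; no further learning occurs."""
        self.halted = True

learner = GuardedLearner(max_items=2)
learner.learn("Internet slang")
learner.emergency_stop()
# learner.learn("weapon blueprint")  # would now raise RuntimeError
```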

So keep in mind that a robot’s knowledge should be limited; and whatever you do, do not program into robots the idea of creating a robotic overlord.