

Artificial Intelligence is reaching a critical juncture
Anup Dutta - Corporate Vice President, HCL ERS | August 3, 2017

Now is a good time to consider its limitless landscape and its impact on ‘everything’

You can sense it. The mobile era is quickly morphing into something different – something that will be Artificially Intelligent.

Look at the AI-induced race our tech titans are currently engaged in. Google-owned robotics company Boston Dynamics has released, in a debut presentation of sorts, its first official video of Handle, a 6-foot-6-inch robot that vaguely resembles a human but can traverse almost any terrain (it has legs with wheels instead of feet – the best of both worlds, as the company cheekily says), lift 100 pounds, and perform a vertical leap of 48 inches. There is also Google’s own Home (a voice-controlled smart speaker) and Allo (a smart messaging app), both of which use its AI – the Google assistant, with ‘a’ in lower case because it is set to pervade all of Google’s future products and services. The Google assistant will feature in all Android phones running Marshmallow and above, while LG’s G6 phones already come preloaded with it; Nokia, Motorola, and Huawei are set to follow suit.

While on the subject of AI assistants, let’s also mention Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and of course IBM’s Watson. Facebook is using AI to enhance its tools and services, and has just committed to using AI and its community to help prevent suicides. Google has announced that it is using AI to help diagnose breast cancer. From autonomous cars to robotic surgeries, from drones delivering packages for logistics companies to autonomous weapons – we are on the brink of the next big wave: Artificial Intelligence!

On one side we have companies that have invested in machine learning for years, in anticipation of this moment. On the other we have Stephen Hawking, Elon Musk, and Bill Gates warning about Artificial Intelligence, while also suggesting that now is a good time to talk about the almost limitless landscape of AI and to take into account the risks that come with it.

With AI set to pervade almost every aspect of human life, we’re staring at many challenges, issues, and questions.

Despite Isaac Asimov’s three laws of robotics, AI and its creators face a serious set of challenges – above all, the challenge of preventing AI from turning against its creators: humans. How do we create an AI system that is responsible, transparent, auditable, predictable, and incorruptible? Needless to say, AI systems must also be able to work safely across many contexts and scenarios – even those that were never envisioned or encountered. And there is one further caveat: it is crucial that AI systems be able to explain their moral decisions to us.

Challenges apart, AI raises many issues which, if things go wrong, could override the very benefits the AI systems were created for.

There is already talk of automation leading to job losses. With the rise of increasingly sophisticated AI systems, the world could face rampant unemployment.

Then there is the issue of having to guard against potential AI mistakes. Tesla’s Autopilot failed to distinguish a white truck crossing the highway against a bright sky; the car crashed into the truck, killing its driver. Microsoft had to silence its chatbot Tay after Twitter users taught it racism.

What about the impact AI will have on human behaviour? Her (the movie) apart, have you heard of Eugene Goostman? In 2014, human judges used text chat to converse with an unknown entity and then guess whether they had been talking to a human or a machine. Eugene fooled a third of the judges into thinking they had been chatting with a human. We already have machines defeating humans at chess and Go. Beyond influencing human behaviour, it will be a watershed moment when AI systems match humans in intelligence.

The more intelligent and powerful a technology becomes, the more susceptible it is to being used for evil. This includes not just Iron Man-style robot armies built to replace human soldiers, or autonomous weapons, but also AI systems that could cause colossal damage if used maliciously. Nick Bostrom has warned that AI is capable of destroying humanity. And with AI systems that are faster and more efficient than humans, cybersecurity will become even more critical.

This leads to many questions. Can AI systems be taught empathy and moral judgement? How can AI systems serve many purposes across many contexts and scenarios? How do we control AI systems, and how much can we control them? Some even ask about robot rights, and what those should be.

Which brings us to the importance of Ethics Boards, and the purpose they serve in the AI context. Texan start-up Lucid AI has instituted a six-member AI Ethics Board. Late last year, the Alan Turing Institute agreed to set up a UK AI ethics board in association with the UK government. Even Google’s DeepMind claims to have an AI Ethics Board, though it does not divulge any details about it. What is seen as the biggest development, and an industry first, is the consortium formed by Amazon, Facebook, Google, IBM, and Microsoft, which Apple has since joined. Called the Partnership on Artificial Intelligence to Benefit People and Society, the group’s goal is to lead efforts to ensure AI’s trustworthiness – technologies that are “ethical, secure and reliable, which help rather than hurt” – while also helping dispel the fears and misconceptions around AI.

There is no doubt that AI is the future. We are making great strides toward developing AI that is as intelligent as humans, if not more so. We are also, for the most part, going about it thoughtfully. We should continue to direct our efforts so that when the future arrives – when AI is here in all its glory – it is the outcome of our deliberate, thoughtful work, and not merely a leap of faith.

References:

http://gadgets.ndtv.com/social-networking/news/facebook-suicide-prevention-tools-get-ai-boost-extended-to-live-and-messenger-1665204

https://www.fastcompany.com/3065420/secrets-of-the-most-productive-people/at-sundar-pichais-google-ai-is-everything-and-everywhe

https://www.forbes.com/sites/miguelhelft/2016/05/18/inside-sundar-pichais-plan-to-put-ai-everywhere/#7e6b4ebc4a2e

https://www.forbes.com/sites/aarontilley/2016/05/18/google-home-amazon-echo/#5797c2b36978

https://www.forbes.com/sites/aarontilley/2016/05/18/google-allo-messaging-app/#5771465b7573

http://www.huffingtonpost.in/2017/02/28/ai-wars-google-takes-on-amazon-alexa-by-including-the-assistant/?utm_hp_ref=in-homepage

https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

https://www.theguardian.com/technology/2017/jan/26/google-deepmind-ai-ethics-board

http://www.npr.org/sections/alltechconsidered/2016/09/28/495812849/tech-giants-team-up-to-tackle-the-ethics-of-artificial-intelligence

http://www.nickbostrom.com/ethics/artificial-intelligence.pdf

http://www.recode.net/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine

https://www.theregister.co.uk/2016/11/07/uk_ai_ethics_board_to_launch/

http://venturebeat.com/2016/12/15/ethical-dilemmas-in-the-age-of-ai/

