Now is a good time to consider AI's limitless landscape, and its impact on ‘everything’
You can sense it. The mobile era is quickly morphing into something different – something that will be Artificially Intelligent.
Look at the AI-induced race our tech titans are currently engaged in. Google-owned robotics company Boston Dynamics has released, in a debut presentation of sorts, the first official video of Handle, a 6-foot-6-inch robot that vaguely resembles a human. It can traverse rough terrain (it has legs with wheels instead of feet – the best of both worlds, as the company cheekily says), lift 100 pounds, and perform a vertical leap of 48 inches. In the offing there’s also Google’s own Home (a voice-controlled smart speaker) and Allo (a smart messaging app), both of which use its AI – the Google assistant, with ‘a’ in lower case because it’s set to pervade all of Google’s future products and services. The Google assistant is set to feature in all Android phones running Marshmallow and above, starting with LG’s G6, which comes preloaded with it; Nokia, Motorola, and Huawei are set to follow suit. While on AI assistants, let’s also mention Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and of course IBM’s Watson. Facebook is using AI to enhance its tools and services, and has just committed to using AI and its community to help combat suicides. Google has announced that it is using AI to help diagnose breast cancer. From autonomous cars to robotic surgeries, from drones delivering packages for logistics companies to autonomous weapons – we are on the brink of the next big wave: Artificial Intelligence!
On one side we have the companies that have invested in machine learning for years, in anticipation of this moment. On the other we have Stephen Hawking, Elon Musk and Bill Gates warning about Artificial Intelligence, while also suggesting that now is a good time to talk about the almost limitless landscape of AI and take into account the associated risks.
With AI set to pervade almost every aspect of human life, we’re staring at many challenges, issues, and questions.
Despite Isaac Asimov’s three laws of robotics, AI and its creators face a serious set of challenges – especially the challenge of preventing AI from turning against its creators, humans. How do we create an AI system that is responsible, transparent, auditable, predictable and incorruptible? Needless to say, AI systems must also be able to work safely across many contexts and scenarios – even those that were never envisioned or encountered! And one crucial caveat: AI systems must be able to explain their moral decisions to us.
Challenges apart, AI raises many issues which, if things go wrong, could override the very benefits the AI systems were created for.
There’s already talk of automation leading to job losses. As AI systems grow more sophisticated, the world could face rampant unemployment.
Then there is the issue of having to guard against potential AI mistakes. Tesla’s Autopilot failed to distinguish a white truck crossing the highway against a bright sky; the car crashed into the truck, killing the driver. Microsoft had to silence its chatbot Tay after Twitter users taught it racism.
What about the impact AI will have on human behaviour? The movie Her apart, have you heard of Eugene Goostman? In 2014, human judges used text chat to converse with an unknown entity, and then guessed whether they had been chatting with a human or a machine. Eugene fooled a third of the judges into thinking they had been chatting with a human! We already have bots defeating humans at chess and Go. Beyond influencing human behaviour, it will be a watershed moment when AI systems match humans in intelligence.
The more intelligent and powerful a technology becomes, the more susceptible it is to being used for evil. This includes not just Iron Man-style robot armies produced to replace human soldiers, or autonomous weapons, but also AI systems that could cause colossal damage if used maliciously. Nick Bostrom did warn us that AI is capable of destroying humanity. With AI systems that are faster and more efficient than humans, cybersecurity will become even more critical.
This leads to many questions. Can AI systems be taught empathy, and moral judgement? How can AI systems serve many purposes across many contexts/scenarios? How do we control AI systems, and how much can we control AI systems? Some even ask about Robot Rights, and what those should be.
Which brings us to the importance of Ethics Boards, and the role they play in the AI context. Texan start-up Lucid AI has instituted a six-member AI Ethics Board. Late last year, the Alan Turing Institute agreed to set up a UK AI ethics board in association with the UK government. Even Google’s DeepMind claims to have instituted an AI Ethics Board, but does not divulge any details about it. What’s seen as the biggest development, and an industry first, is the consortium formed by Amazon, Facebook, Google, IBM and Microsoft, which Apple has since joined. Called the Partnership on Artificial Intelligence to Benefit People and Society, the group’s goal is to lead efforts to ensure AI’s trustworthiness – technologies that are “ethical, secure and reliable, which help rather than hurt” – while also helping remove the fears and misconceptions around AI.
There is no doubt that AI is the future. We’re making great strides in developing AI and making it as intelligent as humans, if not more. We’re also being wise about things. We should continue to direct our efforts so that when the future arrives, when AI is here in all its glory, it is an outcome of our thoughtful efforts, and not merely a leap of faith.