Artificial intelligence (AI) continues to feature extensively in the daily news cycle.
Microsoft Corp recently announced another multibillion-dollar investment in OpenAI—the startup behind the chatbot sensation ChatGPT. Microsoft has started adding OpenAI’s tech to its search engine Bing and said it aimed to integrate such AI into all its products.
In addition, Leif Johansson, Chairman of COVID vaccine giant AstraZeneca, said spending more on AI and screening could prevent illness and help people avoid hospital visits. AI could be used to diagnose lung cancer by analyzing X-rays “through software” to identify common patterns that indicate the presence of the disease, or to screen for diabetes and cardiovascular disease.
While AI has demonstrated many positive impacts, there is also a downside. A group of artists sued the AI generators Stability AI Ltd., Midjourney Inc. and DeviantArt Inc. for allegedly downloading and using billions of copyrighted images to train AI tools without obtaining the artists’ consent or compensating them.
Getty Images also announced that it initiated copyright infringement legal proceedings against Stability AI in the High Court of Justice, London, alleging it used Getty’s images without a license.
“The recent release of ChatGPT (I must admit to some addiction since I started using it) is a transformational moment in the democratization of AI given its astounding capabilities as well as comical failures. But there can be no doubt that generative AI will have massive ramifications…Generative AI holds the same potential and dangers, and the race is already on, with China outnumbering the U.S. in the number of most-cited scientific papers on AI,” wrote Gautam Adani, Asia’s richest man, on LinkedIn after attending the 2023 World Economic Forum.
The dark side of AI
The latest season of the popular Netflix series Fauda depicts an anti-terror unit using AI to track down a criminal. In reality, this is easier said than done given the rise of AI-enabled deepfakes, exemplified by a recent awareness video in which an Instagram influencer changed his face into that of actor Robert Downey Jr and cricketer Virat Kohli.
For new-age hackers, AI is a tool to improve and enhance malware, making it far more dangerous by enabling more advanced and sophisticated attacks.
To gather information about a target system and learn its weaknesses, a threat actor can combine neural fuzzing, a technique that detects software vulnerabilities by feeding a program large amounts of random input data, with neural networks, requiring no human intervention.
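The fuzzing half of that idea can be illustrated with a deliberately simple sketch: a mutation fuzzer hammering a toy parser with randomly corrupted inputs until one crashes it. Everything here, the parser and its planted magic-byte bug, is hypothetical; real neural fuzzers replace the blind random mutation step with a learned model that steers inputs toward unexplored code paths.

```python
import random

def toy_parser(data: bytes) -> None:
    # Hypothetical target with a planted bug: any 0xFF byte crashes it
    if b"\xff" in data:
        raise ValueError("parser crash")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Corrupt one randomly chosen byte of the seed input
    out = bytearray(seed)
    out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(target, seed: bytes, iterations: int = 20_000):
    rng = random.Random(0)  # fixed seed so runs are reproducible
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except Exception:
            return candidate  # a crashing input reveals a vulnerability
    return None

crash = fuzz(toy_parser, b"hello")
```

A crashing input like the one returned here is the raw material that an attacker, or a defender, then analyzes for exploitability.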
AI can obviously improve security. However, while the main benefits of AI for cybersecurity revolve around faster analysis and mitigation of threats, there are some limitations that prevent standalone AI technology from becoming a mainstream security tool.
To train an AI system, security companies often go through a costly phase in which they need to use different data sets of anomalies and malware code. Getting accurate data sets can require significant resources, including data, memory, computing power and time, which some companies cannot afford.
The AI bright side
AI, with the help of machine learning, provides insights that help companies understand threats.
On average, it can take companies 196 days to identify a data breach, and the average recovery cost from a common data breach is estimated at $3.86 million, according to Norton’s research.
In cybersecurity, ML algorithms such as regression, clustering and classification can automatically detect and analyze security incidents and even respond to some threats.
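As a minimal illustration of the clustering idea, the sketch below flags a traffic event as anomalous when it lies far from the centroid of known-good traffic. The feature vectors and the distance threshold are invented for the example; a production system would learn them from real telemetry.

```python
import math

# Hypothetical per-host features: (requests per minute, average payload in KB)
normal_traffic = [(10, 4.0), (12, 4.2), (9, 3.8), (11, 4.1)]

def centroid(points):
    # Component-wise mean of the baseline feature vectors
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def is_anomalous(event, baseline, threshold=3.0):
    # Events far from the center of known-good traffic are flagged
    return math.dist(event, centroid(baseline)) > threshold
```

With this baseline, an ordinary event such as `(11, 4.0)` passes, while a burst like `(90, 55.0)`, say, an exfiltration spike, is flagged.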
Even before vulnerabilities are officially reported and patched, AI-ML techniques can improve the vulnerability management capabilities of databases. When powered by AI, tools such as user and event behavior analytics (UEBA) can analyze user behavior on servers and endpoints, and then detect anomalies that might indicate an unknown attack.
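A toy version of the UEBA idea: statistically baseline a user's behavior, then flag large deviations from it. The login counts and the z-score threshold below are invented for illustration; real UEBA tools model many behavioral signals at once.

```python
import statistics

# Hypothetical baseline: daily login counts learned for one user
baseline_logins = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]

def flag_anomaly(todays_logins, history, threshold=3.0):
    # A z-score far outside the baseline suggests account misuse
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(todays_logins - mean) / stdev > threshold
```

Six logins today looks like this user; forty in one day would be flagged as behavior unlike the learned baseline.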
AI can raise the threat detection rate of traditional techniques to as much as 95 percent, but it also produces many false positives. A combination of AI and traditional methods is needed to maximize accurate detection rates.
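One hedged sketch of such a combination: let a traditional signature match decide outright, and fall back to an ML score only for events the signatures miss. The signature list and the stand-in scoring function are invented for the example.

```python
# Hypothetical signature list a traditional detection engine might carry
KNOWN_BAD_SIGNATURES = {"mimikatz", "cobaltstrike"}

def signature_match(event: str) -> bool:
    # Classic substring signatures: precise, but blind to novel attacks
    return any(sig in event.lower() for sig in KNOWN_BAD_SIGNATURES)

def model_score(event: str) -> float:
    # Stand-in for an ML classifier's probability output
    return 0.9 if "powershell -enc" in event.lower() else 0.1

def detect(event: str, threshold: float = 0.8) -> bool:
    # Signatures catch known attacks with few false positives; the model
    # threshold governs how aggressively novel activity is flagged
    return signature_match(event) or model_score(event) >= threshold
```

Raising the threshold trades missed novel attacks for fewer false positives, which is exactly the balance the hybrid approach tunes.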
In addition to improving threat hunting by integrating behavior analysis, AI can enhance network security as it quickly learns network traffic patterns and recommends both security policies and functional workload grouping.
The rise of Generative AI
Generative AI refers to machine learning systems that can quickly create new content, from audio, code, images, text, simulations and videos to entire virtual worlds. Beyond entertainment, its practical uses include creating new product designs and optimizing business processes.
Explaining this tech trend that is going to have a big impact on industries, C. Vijayakumar, CEO and Managing Director at HCLTech, said: “Generative AI is taking AI to new levels, in terms of content creation and customer service, which is a huge industry. ChatGPT is going to create an ecosystem around itself, which will drive a new era of innovation and disrupt traditional industries.”
AI voice technology startup ElevenLabs even aims to deliver text-to-speech and audio-to-audio conversion in any language and any voice, including the full range of emotions. The solution it developed, a deep-learning model for speech synthesis, can be used for everything from creating audiobooks to dubbing movies.
Fear of the unknown
Forbes presents a negative side, asking what happens if an AI creator releases a generative AI app that freely spews out foulness. It also highlights the need for fact-checking: healthy skepticism and a persistent mindset of disbelief have always made the journalist, and especially the editor, the last gatekeeper. The output of generative AI is perhaps no different.
Ethics and laws
Experts have highlighted concerns regarding the ethical use of AI and the laws governing it. They feel transparency, non-maleficence, responsibility and privacy are among the main factors that need to be examined.
“Ethics plays a significant role as organizations struggle to eliminate bias and unfairness from their automated decision-making systems. Biased data may result in prejudice in automated outcomes that might lead to discrimination and unfair treatment. Regulatory compliance, standards and policies, such as GDPR and DORA, can lead to an unexpected source of competitive advantage. To develop trustworthy AI systems, policies, governance, traceability, algorithms, security protocols are needed, along with ethics and human rights,” said Phil Hermsen, Solutions Director, Data Science & AI, at HCLTech.
How HCLTech prioritizes AI
Looking at the top AI trends for 2023, HCLTech is contributing to the AI field and its impact on society in the following ways.
- Hyper-automation: Hyper-automation involves the orchestrated use of multiple technologies, tools or platforms, including Robotic Process Automation (RPA) and AI, to enable business processes that complement human workforce tasks and enhance competitive advantage. As digital disruptions are forcing companies to change their business models, HCLTech is adopting newer ways of service delivery with a more customer-centric approach. Its RPA offering is a critical step in this direction.
- Cybersecurity: Information security, and the risks attached to it, is one of the most important concerns of today’s world as AI-enabled cyberattacks are on the rise.
HCLTech—with its deep knowledge and experience in AI-ML—helps organizations counter these risks effectively through its Dynamic Cybersecurity, which is a framework of governance and continual assessment to enable an adaptive and evolving security posture, while leveraging best-of-breed technologies.
- Intersection of AI-ML with IoT: Today's devices are becoming smarter and more secure through the introduction of AI-ML and its bonding with the Internet of Things, enabling timely responses to any situation.
HCLTech’s IoT WoRKS™ offerings identify business pain points that IoT services and solutions can resolve, present an IoT technology roadmap in line with desired business outcomes and provide user-centric business process transformation.
Its strategic partnerships with key players across IoT platforms, IoT devices, connectivity and advanced analytics help jointly develop enterprise IoT solutions to address customers’ business challenges. It is aligned to core areas of asset value chain, helping organizations accelerate time to market and optimize costs.
- Product development: AI allows for the development of highly accurate, differentiated products.
HCLTech’s EXACTO™ is developed in collaboration with a leading university in the field of AI and is a result of continued investments in R&D. EXACTO™ harnesses the latest innovations in AI, ML and computer vision techniques that integrate seamlessly with RPA to create a differentiated product with nearly 100 percent accuracy. Andy Efstathiou, Banking Sourcing Research Director, NelsonHall, shared his experience in 2019.
- Augmented Intelligence: Augmented Intelligence focuses on the role AI plays in enhancing and improving human intelligence and decision-making. The term represents a collaboration between people and AI that improves the way people work rather than replacing them.
The Intelligent Secure Edge for smart cities, built by HCLTech IoT WoRKS™ using Intel technologies, is a collaborative platform that brings together citizens, communities and authorities as a virtual network, powered by AI, Edge, Wi-Fi 6, 5G and other next-gen technologies, to drive proactive responses powered by real-time insights.