Security has remained a top priority for decades, reinvented in a new avatar each time our environment changed. In the beginning it was all about perimeter security, then about securing from the inside out. Then came data and device proliferation, and now convergence. As technology reaches into every part of our lives, the challenge is to make convergence work securely, consistently, every time: the physical with the virtual world, AI with IoT, the cloud with automation, and so on. What does it take to secure this convergence? As we make everything around us aware and smart, what new threats, risks, and vulnerabilities are, knowingly or unknowingly, becoming part of our lives and open to exploitation? What is keeping researchers awake and on their feet these days?
Let’s start with artificial intelligence, something everyone today wants to be associated with. Whether in healthcare, banking, or any other industry, AI-based systems that predict behavior, generate insights, or interact with us directly are on the rise. Similarly, in the consumer space, AI is increasingly seen as the single most effective advancement with the power to alter our lifestyles for good. But in the process, AI itself is becoming insecure at various levels, and not always for the traditional reason (an attack) but because of its own smartness. How? The machine learning (ML) algorithms that power AI are fed data that can unknowingly produce a biased system, one that eventually threatens an individual, a society, or personal freedom. When that bias is exploited deliberately, it becomes a new and emerging challenge for security teams and organizations.
Consider Microsoft’s AI chatbot, Tay, which learned from certain (racist) sources, began posting its own racist remarks, and was consequently shut down by Microsoft. The problem is that a chatbot can be shut down, but the AI creeping into our daily lives cannot be switched off so easily. The next consequence could be a cyber or even physical altercation between individuals, communities, and so on.
Another example is malicious data fed to an AI system that is built to secure us, skewing its decisions against a particular individual, cause, community, or system. Some of these situations are as real as they get even today. Combine this with emerging attack forms, vectors, and threats, and we suddenly have a very different set of data, environments, and actions to protect.
The primary reason is that AI/ML algorithms look for patterns in data and keep learning from them. They are not yet smart enough to differentiate good from bad (something that still separates us from them), so they may inherit a human bias already present in the data or create their own from skewed patterns. Until recently, research was centered on one objective: making AI smart. It has now shifted to making AI both smart and secure, to avoid situations like the ones above. While humans can correct their decisions at any time, an unmonitored, self-learning machine bias could result in serious situations when combined with the Internet of Things (IoT), the cloud, and other technological advancements.
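How a pattern-learning system absorbs the skew in its training data can be shown with a deliberately tiny sketch. This is a toy frequency-based "model," not any production ML algorithm, and the groups, labels, and counts below are invented for illustration only:

```python
from collections import Counter

def train(examples):
    """Count label frequency per feature value; the 'model' is nothing
    but the patterns present in the historical data."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return counts

def predict(model, feature):
    """Predict the most frequent label seen for this feature value."""
    return model[feature].most_common(1)[0][0]

# Hypothetical skewed history: group "B" applicants were mostly rejected
# in past records, for reasons unrelated to merit.
history = ([("A", "approve")] * 90 + [("A", "reject")] * 10
           + [("B", "approve")] * 20 + [("B", "reject")] * 80)

model = train(history)
print(predict(model, "A"))  # approve
print(predict(model, "B"))  # reject: the historical skew becomes the rule
```

The model never "decides" to discriminate; it simply replays the dominant pattern, which is exactly why skewed input data quietly becomes biased output behavior.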
IoT, on the other hand, is built on the foundation stones of data and connectivity. As everything gets connected and begins emitting data that users barely understand, we move steadily toward an age of hyper-insights, where decisions are based on real-time or near real-time data that interacts on our behalf and influences our actions, our bodies, our clothes and shoes, and indeed everything in the objects and environment around us. To make sense of this and support millions of decisions per minute, AI convergence is being built in at every level to infer, analyze, decide, and take remedial action. Combine this with AI bias and we suddenly have a situation where the very data that was supposed to make our lives better may be used to make life-altering decisions for us, or for organizations in the enterprise space.
So, how do we put this AI-IoT convergence to positive use and create an environment protected from a bias that can, at times, be more damaging than ransomware or a malicious hack? The answer, again, is convergence. The following recommendations can be applied at different stages to prevent or limit the damage such actions create:
- Don’t accept any system at face value; opaque AI systems will not always make for good, secure systems. Understand how they work and the basis for their decisions and actions through real-world, scenario-based testing
- Form converged teams that can assess risk on the basis of combined effect and not only occurrence
- Develop converged skill sets in the team so as to view threats holistically
- Converge physical and virtual security, risk, compliance, and threat response management
- Have a team of data scientists (external or internal) to monitor the quality of data being fed to your AI systems. If possible, have metrics created for the data itself rather than only for results
- Have your teams create a holistic view of your data network that can point to unconventional incidents
- Create a system of not only threat detection but an appropriate converged system and environment of detection, response, remediation, and feedback (so the same can’t happen again)
- No system, however solid, can survive or deliver the desired results without end-user training and support. Build your end-user training to cover social engineering, basic AI bias patterns, and the process for reporting and countering them
- Monitor the health of AI and not the system alone
- Monitor the data being fed to the AI systems by not only securing the data at rest but data in motion and analysis as well
- Replace decade-old ways of testing your environments with continuous scenario-based testing, designed by a converged team of experts from different fields such as data science, business, and so on
- Don’t eliminate team roles and functions blindly (based on AI and automation convergence) but look for new roles and needs that might emerge out of such implementations
- Complement all of this with emerging ways of securing the system against hacks, attacks, and other forms of threat
- Building an AI system that can understand and balance different points of view could be one soft way to secure it
- Create appropriate checks and balances in the system (an example of a basic one could be filtration of known bad words that can create a negative bias)
- Learn the nuances of a particular business or industry well before finally rolling out an AI system or algorithm for it
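Two of the checks above, filtering known bad words as a basic check-and-balance and keeping metrics on the data itself rather than only on results, can be sketched in a few lines. The blocklist contents, threshold idea, and sample data here are illustrative assumptions, not a recommended production filter:

```python
# Hypothetical blocklist; in practice this would be a curated,
# regularly reviewed list maintained by the data team.
BLOCKLIST = {"badword1", "badword2"}

def passes_filter(text):
    """Reject incoming training samples that contain known bad words."""
    tokens = set(text.lower().split())
    return tokens.isdisjoint(BLOCKLIST)

def rejection_rate(samples):
    """A metric on the data itself: the fraction of samples rejected.
    A sudden spike can flag a poisoning attempt before results degrade."""
    if not samples:
        return 0.0
    rejected = sum(1 for s in samples if not passes_filter(s))
    return rejected / len(samples)

batch = ["hello world", "badword1 spam", "normal text"]
clean = [s for s in batch if passes_filter(s)]
print(clean)  # ['hello world', 'normal text']
```

The point of the metric is that it monitors the input stream, not the model's output, so anomalies in what the AI is being fed surface independently of whether the results have visibly gone wrong yet.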
Today, we are in an interesting position, one that moves us beyond decades of work on security frameworks and calls for a renewed, converged approach that secures both the soft and the hard aspects of our systems and environments. The very AI that is supposed to save us or make our lives better needs to be secured first. For instance, the future is not about securing only data at rest (saved data) or only the most important data, but data in motion and in analysis as well. The approach may differ from case to case, but the need is common. The immediate need, therefore, is to converge security, risk, business, monitoring, and the other functions defined above today, so as to define the security landscape of the future and make AI-IoT convergence work in our favor.