How Generative AI impacts cybersecurity: Threats and defenses

GenAI accelerates cyberattacks, supercharges defense tools, demands new governance and is rapidly reshaping security across every industry
 
9 minutes read
Nicholas Ismail
Global Head of Brand Journalism, HCLTech
Generative AI's impact on the cybersecurity industry

Generative AI, often referred to as GenAI, has swiftly become a game-changer across every industry vertical. Its revolutionary capabilities have not only transformed the landscape for cyber defenders but have also significantly amplified the adversarial threat.

Key facts about GenAI in cybersecurity

According to the HCLTech Global Cyber Resilience Study 2024–25:

  • 54% of security leaders now identify AI-generated attacks as their top cybersecurity concern
  • 81% of security leaders anticipate a cyberattack on their organization within the next year
  • 56% of organizations have invested in cybersecurity automation to improve their defenses
  • Over the past 12 months, 57% of security leaders experienced a cyberattack, with organizations in North America (64%) and in industries such as life sciences and healthcare (62%) hit hardest

How adversaries weaponize GenAI in cyberattacks

According to Bugcrowd’s annual Inside the Mind of a Hacker report, in 2024, 71% of hackers believed AI technologies would increase the value of hacking. The report also found that 77% of hackers had adopted GenAI tools (a 13% increase from 2023) to create destructive tools like ransomware and malware, as well as well-crafted phishing emails designed to access sensitive data.

This rapid creation of malicious content has reduced the time it takes to execute cyberattacks from months to a few days, according to Srinivasan Sreekumar, VP and Global Practice Head, Cybersecurity at HCLTech. The ease with which adversaries can create and distribute these threats poses a significant challenge for cybersecurity professionals.

“It puts them in a very difficult situation, and they are struggling to keep pace in identifying such threats, containing them, remediating them and responding to them quickly,” says Sreekumar.

As cybercriminals leverage GenAI, they can create new variants of existing threats at an unprecedented speed. Machine learning algorithms within GenAI can analyze patterns in large datasets, enabling the automated generation of sophisticated and tailored attacks. For example, ransomware creators can generate multiple variations of their malware, making it harder for traditional security solutions to detect and mitigate them. The scale and speed at which these attacks can be deployed present a major hurdle for cybersecurity teams.

Why GenAI makes modern attacks harder to stop

The sophistication of cyberattacks has increased drastically due to the speed and ease with which GenAI enables the creation of new attack variants. Previously, it may have taken months to develop a single ransomware strain, but now, leveraging GenAI, adversaries can create multiple variants within a day. Consequently, organizations are facing a surge in the frequency and complexity of these attacks, especially through the common entry point of phishing emails.

“Phishing attacks, in particular, have become more convoluted and difficult to detect due to GenAI's involvement,” adds Sreekumar.

Cybercriminals can use machine learning algorithms to analyze vast amounts of data, such as social media profiles and online behaviors, to create highly personalized phishing emails. These emails are crafted to fool recipients into believing they are genuine, increasing the likelihood of successful phishing attempts. This evolution in attack techniques calls for stronger defenses and more sophisticated measures to combat the rising threat landscape.

What risks does GenAI introduce to cybersecurity?

As organizations embrace GenAI, they must confront new and amplified cyber risks. One major threat is AI-powered attack automation. Malicious actors can use GenAI to generate highly convincing phishing emails, deepfake content or polymorphic malware. It’s no surprise that in HCLTech’s global survey of security leaders, AI-generated attacks topped the list of emerging threats, with 54% expressing extreme concern. Attackers are already leveraging AI to refine their techniques, increasing the success rate of social engineering and ransomware campaigns.

GenAI also heightens the risks of data poisoning and leakage. AI models can be manipulated if attackers inject false or biased data into training sets, causing the system to behave unpredictably. There’s a rising insider threat dimension too: employees unwittingly sharing sensitive data with GenAI tools can lead to leaks. For instance, Samsung had to ban internal use of ChatGPT after employees fed it proprietary code, which could later surface in the model’s outputs. Such incidents underscore how GenAI can expose confidential information if not properly governed.
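The data-poisoning risk described above can be made concrete with a toy experiment: if an attacker relabels records in a model's training set, detection quality collapses. The sketch below uses synthetic data and scikit-learn; the features, the spam/ham framing, the model choice and the flip fraction are all illustrative assumptions, not a real attack or any vendor's pipeline.

```python
# Illustrative sketch of training-data poisoning (synthetic data only):
# flipping "spam" labels to "ham" in the training set degrades a simple
# classifier. The flip fraction is exaggerated to make the effect obvious.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two well-separated synthetic clusters: class 0 = ham, class 1 = spam.
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(3, 1, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train after an attacker relabels a fraction of spam as ham."""
    y_poisoned = y_train.copy()
    spam_idx = np.flatnonzero(y_poisoned == 1)
    flipped = rng.choice(spam_idx, size=int(flip_fraction * len(spam_idx)),
                         replace=False)
    y_poisoned[flipped] = 0  # the poisoned labels now say "benign"
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

clean_acc = accuracy_after_poisoning(0.0)
poisoned_acc = accuracy_after_poisoning(0.8)
print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```

With most spam labels flipped, the model learns that the spam region of the feature space is mostly benign and waves real spam through, which is why access controls around training data matter as much as controls around the model itself.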

How GenAI empowers cybersecurity defense teams

To combat the ever-escalating threat landscape, cybersecurity professionals are increasingly utilizing AI-embedded tools. These AI-enabled capabilities aid in detecting threats and expedite investigation and response processes. Machine learning algorithms and predictive analytics help identify patterns and anomalous behaviors, improving the accuracy and speed of threat detection.

Organizations are increasingly integrating AI and GenAI to automate threat detection, enhance incident response capabilities, increase cyber resilience and strengthen their security operations center (SOC) operations. Adoption is rising rapidly, with HCLTech’s 2024–25 Cyber Resilience study revealing that 63% of organizations are planning budget increases for cybersecurity initiatives in 2025, specifically targeting GenAI-driven tools (57%) and SOC automation.

According to the same HCLTech report, critical drivers for AI and GenAI investments in cybersecurity include ensuring compliance with regulations (55%), preventing data breaches (54%) and reducing costs (37%).

These technologies leverage GenAI to analyze vast amounts of data, automatically identify suspicious activities, and generate actionable insights for incident response teams. For example, AI algorithms can detect anomalous user access patterns, identify potential insider threats and provide real-time alerts.
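As a hedged illustration of the anomaly-detection idea described above (not any vendor's actual detection logic), an unsupervised model such as scikit-learn's IsolationForest can flag access events that deviate from a user's normal pattern. The features and numbers below are invented for the example.

```python
# Illustrative anomaly detection over synthetic access logs.
# Features per event: [login_hour, mb_downloaded, failed_logins].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior: daytime logins, modest downloads, rare failures.
history = np.column_stack([
    rng.normal(10, 2, 1000),   # logins cluster around 10:00
    rng.normal(50, 15, 1000),  # ~50 MB downloaded per session
    rng.poisson(0.2, 1000),    # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

# IsolationForest.predict returns -1 for outliers and 1 for inliers.
suspicious = np.array([[3.0, 900.0, 6.0]])  # 03:00 login, 900 MB, 6 failures
routine = np.array([[10.0, 52.0, 0.0]])
print(detector.predict(suspicious), detector.predict(routine))
```

In a real SOC pipeline, an event flagged this way would typically feed an alerting queue for a human analyst rather than trigger an automated block on its own.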

Moreover, organizations are exploring innovative solutions, such as Microsoft's Copilot for Security, to enhance their cybersecurity posture. “Copilot for Security combines the power of human intelligence and AI algorithms to investigate threats, providing human analysts with actionable insights for decision-making,” explains Sreekumar.

This collaboration between human experts and GenAI enables more efficient and effective incident response, enhancing an organization's overall security capabilities.

“Ultimately, AI is increasingly being used in cybersecurity to quickly detect and respond to threats and also enhance the capabilities that already exist in current security tools,” adds Sreekumar.

Responsible AI use: Compliance and governance

As organizations embrace GenAI to drive innovation and achieve business goals, responsible AI use and compliance become critical factors.

“Whichever geography an organization belongs to, it is essential to train personnel in the responsible use of GenAI, ensuring adherence to regulatory standards,” says Sreekumar. Transparency and explainability of AI algorithms are also crucial, as accountability and the ability to trace decision-making are necessary for addressing potential biases and ensuring ethical practices.

Data governance and privacy considerations play a pivotal role in maintaining ethical and legal standards while harnessing GenAI's potential. Organizations need to implement robust data management practices, ensuring the proper anonymization and masking of sensitive information. Additionally, strong safeguards must be in place to protect AI models and training data from unauthorized access or tampering.
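One concrete form these masking practices can take is pre-prompt redaction: scrubbing obvious identifiers before any text leaves the organization for an external GenAI service. The patterns below are a minimal, hypothetical sketch; production data-governance pipelines use far broader rule sets and dedicated PII-detection tooling.

```python
# Minimal sketch of pre-prompt redaction before text reaches a GenAI tool.
# The regexes below are illustrative, not an exhaustive PII rule set.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = ("Summarize this ticket from jane.doe@example.com "
          "(SSN 123-45-6789, key sk-abcdef1234567890abcd).")
print(redact(prompt))
```

Typed placeholders like `[EMAIL]` preserve enough context for the model to produce a useful answer while keeping the underlying identifiers out of external logs and training corpora.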

“Governance and compliance should also be a key factor in monitoring technical aspects relating to the controls around data and applications. The applications that are developed to use GenAI should go through stringent security testing as part of the development process,” continues Sreekumar.

In response to rising AI threats, global regulators and standards bodies, including the EU (through the AI Act), NIST and the OECD, have developed and are actively developing frameworks to govern the use of AI. For example, Article 15 of the EU AI Act specifically focuses on the cybersecurity requirements for high-risk AI systems.

Policymakers are working on frameworks that address the potential risks associated with AI-enhanced cyberattacks and establish guidelines for responsible AI use. Collaboration between government and industry is crucial to developing standards and best practices that can adapt to the rapidly evolving GenAI landscape. In June 2025, for example, President Trump signed Executive Order 14306, titled “Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity,” which, in part, refocuses federal AI cybersecurity efforts on identifying and managing vulnerabilities rather than on censorship.

Which industries are adopting GenAI fastest?

GenAI adoption rates vary widely by industry, reflecting each sector’s data intensity, regulation and risk tolerance. Early adopters include financial services, technology and cloud providers and life sciences (pharmaceuticals and healthcare).

These sectors handle massive data workloads and face compliance pressures that spur rapid GenAI integration. For example, over half of financial services professionals (52%) now use generative AI tools in their work (up from 40% last year). In healthcare, 75% of leading organizations are already experimenting with GenAI or scaling use cases, applying it in areas like drug discovery and diagnostics.

By contrast, slower adopters are often in heavy-industry and critical infrastructure fields. Manufacturing, energy, utilities and retail enterprises tend to be more cautious, held back by operational technology (OT) security concerns and skill gaps.

In such sectors, GenAI is seen as less immediately relevant to core processes. Many are still in pilot phases.

How lagging industries can catch up:

  • Start low-risk pilots: Begin with controlled GenAI pilot projects targeting clear use-cases to build confidence and ROI evidence
  • Strengthen data governance: Implement strict data hygiene and access controls to mitigate risks as AI systems ingest sensitive operational data
  • Upskill and bridge the talent gap: Global CEOs anticipate 35% of their workforce will require retraining due to AI advancements, so it’s crucial to invest in cybersecurity and AI training for staff, as well as consider partnerships to bring in AI expertise. This ensures teams can integrate GenAI securely into OT environments and business workflows

 


 

What’s next for GenAI in cybersecurity?

Looking ahead, the adoption rates of GenAI in various industries are expected to vary based on their receptiveness to change and regulatory constraints. According to Sreekumar, sectors such as finance, cloud computing and life sciences are anticipated to be early adopters, thanks to the value GenAI can bring in handling vast amounts of data and detecting complex threats. In contrast, more traditional industries, such as manufacturing, utilities and oil & gas, may take a cautious approach due to concerns over security risks and the need for specialized expertise.

However, it is evident that GenAI will become deeply ingrained in every business process, and this holds true for the cybersecurity industry as well. The creation and advancement of GenAI-powered offensive and defensive capabilities will continue to evolve, leading to new challenges and opportunities.

“As a result, the demand for skilled professionals equipped to leverage GenAI as a strategic asset in cybersecurity operations will grow exponentially,” says Sreekumar.

GenAI's impact on the cybersecurity industry will be profound and multifaceted. From rapid threat creation by adversaries to the evolution of defense mechanisms, GenAI is reshaping the cybersecurity landscape and necessitating a paradigm shift in how organizations approach cyber defense.

As the cybersecurity industry continues to grapple with the implications of GenAI, it is imperative for stakeholders to proactively adapt to this new era of cyber threats and defense. Collaboration between technology providers, cybersecurity professionals and policymakers is vital in developing effective countermeasures to withstand and mitigate the adversarial use of GenAI. By leveraging the potential of GenAI for both offensive and defensive purposes responsibly, organizations can stay ahead in this ever-evolving threat landscape.
