GenAI for cyber defence is on the rise

Global Digital Trust Insights 2024
29 Dec 2023
In its 26th year, PwC’s Global Digital Trust Insights is the longest-running annual survey on cybersecurity trends. It’s also the largest survey in the cybersecurity industry, reflecting the views of over 3,800 senior security, technology and business executives.
As businesses reinvent and innovate, they connect ever more digital experiences using the latest technology. Cybersecurity should sit at the epicentre of that change, hence the theme of our 2024 survey. We offer a C-suite playbook for those who dare to break cyber-as-usual.

Don’t lose sight of governance amid all the excitement

  • Seven in 10 senior executives (69%) say their organisation will use generative AI (GenAI) for cyber defence in the next 12 months, according to the 2024 Global Digital Trust Insights survey.
  • Another surge in cyber threats may be coming because GenAI can help create advanced business email compromise at scale. CISOs and CIOs should pay attention to a prevailing sentiment: 52% expect GenAI to lead to catastrophic cyber attacks in the next 12 months.
  • Companies need to establish sound AI governance and get ahead of the risks that experimentation with GenAI could introduce. Yet 63% of senior executives say they would feel personally comfortable using GenAI tools even without data governance policies in place.

Generative AI is opening frontiers that more than 3,800 C-level business and tech executives who responded to our 2024 Global Digital Trust Insights (DTI) Survey are exploring — in the business and for cyber defence.

Nearly 70% say their organisation will use GenAI for cyber defence. Platforms are licensing their large language models (LLMs) in tandem with their cyber tech solutions. Microsoft Security Copilot is designed to provide GenAI features for security posture management, incident response and security reporting. Google announced Security AI Workbench for similar use cases, and many other security vendors, such as CrowdStrike and Zscaler, have announced features using GenAI. Even without vendor tools, some companies have been using GenAI to identify and triage phishing attempts.
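Phishing triage of the kind described above often starts with handing an email to an LLM alongside clear classification instructions. The sketch below is a minimal, hypothetical illustration: the prompt wording, label set, and the look-alike sender domain are all invented for the example, and the actual model call (any chat-completion API) is deliberately left out.

```python
from textwrap import dedent

def build_triage_prompt(sender: str, subject: str, body: str) -> str:
    """Assemble a prompt asking an LLM to triage a suspected phishing email.
    The label set (PHISHING/SUSPICIOUS/BENIGN) is illustrative, not a standard."""
    return dedent(f"""\
        You are a SOC analyst. Classify the email below as PHISHING, SUSPICIOUS,
        or BENIGN, then list the indicators that drove your decision.

        From: {sender}
        Subject: {subject}
        Body: {body}
        """)

prompt = build_triage_prompt(
    sender="it-support@examp1e-corp.com",   # hypothetical look-alike domain
    subject="Urgent: password expires today",
    body="Click here to keep your account active.",
)
print(prompt)
```

In practice the returned prompt would be sent to whichever model the vendor or security team has licensed, with the structured verdict fed into the existing ticketing workflow.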

GenAI for cyber defence

  • More than two-thirds say they’ll use GenAI for cyber defence in the next 12 months.
  • Nearly half are already using it for cyber risk detection and mitigation.
  • One-fifth are already seeing benefits to their cyber programmes because of GenAI, mere months after its public debut.

Q7. To what extent do you agree or disagree with the following statements about Generative AI? Q10. To what extent is your organisation implementing or planning to implement the following cybersecurity initiatives?
Base: All respondents=3,876
Source: PwC, 2024 Global Digital Trust Insights.


GenAI comes at an opportune time in cybersecurity.

For defence. Organisations have long been overwhelmed by the sheer number and complexity of human-led cyberattacks, both of which continually increase. And GenAI is making it easier to conduct complex cyber attacks at scale. Researchers found a 135% increase in novel social engineering attacks in just one month, from January to February 2023. Services like WormGPT and FraudGPT are enabling credential phishing and highly personalised business email compromise.

To secure innovation. Businesses eager to reap GenAI’s many potential benefits, from developing new lines of business to increasing employee productivity, also invite serious risks to privacy, cybersecurity, regulatory compliance, third-party relationships, legal obligations and intellectual property. To get the most from this groundbreaking technology, organisations should manage the wide array of risks it poses in a way that considers the business as a whole.

The promise of GenAI for cyber defence

From reconnaissance to action, GenAI can be useful for defence all along the cyber kill chain. Here are the three most promising areas.

Threat detection and analysis. GenAI can be invaluable for proactively detecting vulnerability exploits, rapidly assessing their extent — what’s at risk, what’s already compromised and what the damages are — and presenting tried-and-true options for defence and remediation. GenAI can identify patterns, anomalies and indicators of compromise that elude traditional signature-based detection systems.

GenAI is strong at synthesising voluminous data on a cyber incident from multiple systems and sources to help teams understand what has happened. It can present complex threats in easy-to-understand language, advise on mitigation strategies and help with searches and investigations.

Cyber risk and incident reporting. GenAI also promises to make cyber risk and incident reporting much simpler, and vendors are already working on this capability. With the help of natural language processing (NLP), GenAI can turn technical data into concise content that non-technical people can understand. It can help with incident response reporting, threat intelligence, risk assessments, audits and regulatory compliance. And it can present its recommendations in terms that anyone can understand, even translating confounding graphs into simple text. GenAI could also be trained to create templates for comparisons to industry standards and leading practices.
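The translation step, structured technical data in, plain language out, can be sketched without any model at all. The toy record and sentence template below are invented for illustration; a GenAI tool would generalise this far beyond one hand-written template.

```python
# Hypothetical incident record; field names are illustrative only.
incident = {
    "id": "IR-2024-0042",
    "severity": "high",
    "vector": "business email compromise",
    "hosts_affected": 3,
    "data_exfiltrated": False,
}

def summarise(incident: dict) -> str:
    """Render a structured incident record as one plain-English sentence,
    the kind of translation the article expects GenAI to automate."""
    exfil = ("data exfiltration is suspected" if incident["data_exfiltrated"]
             else "no data is believed to have left the network")
    return (f"Incident {incident['id']} is a {incident['severity']}-severity "
            f"{incident['vector']} affecting {incident['hosts_affected']} hosts; {exfil}.")

print(summarise(incident))
```

The appeal of an LLM here is that it needs no template: the same structured record, plus a short instruction, yields an executive-ready summary.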

GenAI’s reporting capabilities should prove invaluable in this new era of heightened cyber transparency. To wit: A recent law will soon require critical infrastructure entities in the US to report cyber incidents. Also, the Securities and Exchange Commission (SEC) has released rules requiring disclosures of material cyber incidents and material cyber risks in SEC filings. The European Union’s Digital Operational Resilience Act calls for timely and consistent reporting of incidents that affect financial entities’ information and communication technologies. Imagine having a tool that makes preparing these reports much easier.

Adaptive controls. Securing the cloud and software supply chain requires constant updates in security policies and controls — a daunting task today. Machine learning algorithms and GenAI tools could soon recommend, assess and draft security policies that are tailored to an organisation's threat profile, technologies and business objectives. These tools could test and confirm that policies are holistic throughout the IT environment. Within a zero trust environment, GenAI can automate and continually assess and assign risk scores for endpoints, and review access requests and permissions. An adaptive approach, powered by GenAI tools, can help organisations better respond to evolving threats and stay secure.
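Continuous risk scoring of endpoints, as described above, can be pictured as a function recomputed on every access request. The scoring below is a toy sketch: the signals, weights and caps are all assumptions made up for the example, not any vendor's or standard's scoring model.

```python
def endpoint_risk_score(patch_age_days: int, failed_logins: int, is_privileged: bool) -> int:
    """Toy risk score (0-100) of the kind a zero trust policy engine might
    recompute continuously. Weights and caps are illustrative assumptions."""
    score = min(patch_age_days, 60)          # time since last patch, capped
    score += min(failed_logins * 5, 25)      # possible brute-force signal
    score += 15 if is_privileged else 0      # privileged endpoints carry more risk
    return min(score, 100)

print(endpoint_risk_score(90, 4, True))   # 60 + 20 + 15 = 95
```

A GenAI-assisted version would go further: drafting and tuning such policies from the organisation's own threat profile rather than from hand-picked weights.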

And more. Many vendors are pushing the limits of GenAI, testing what’s possible. As the technology improves and matures, we’ll see many more uses for it in cyber defence. It could be some time, however, before we see “defenceGPT’s” broad-scale use.

Invest in your security teams

GenAI tools could help relieve the acute cyber talent shortage. Attrition is a growing problem for 39% of CISOs, CIOs and CTOs, according to our 2023 Global DTI survey. It’s hindering progress on cyber goals for another 15%.

Brace for regulatory uncertainty

The use of GenAI for cyber defence, just like the use of GenAI across the business, will be affected by AI regulations, particularly concerning bias, discrimination, misinformation and unethical uses. Recent directives, including the Blueprint for an AI Bill of Rights from the White House and the draft European Union AI Act, emphasise ethical AI. Policymakers around the world are scrambling to set limits and increase accountability, treating generative AI with urgency because of its potential to affect broad swathes of society profoundly and rapidly.

Channel your enthusiasm into trusted, ethical practices

Enthusiasm for AI is so high that 63% of our executive respondents said they’d personally feel comfortable launching GenAI tools in the workplace without any internal controls for data quality and governance. Business executives are even more inclined to do so (74%) than tech and security executives.

However, without governance, adoption of GenAI tools opens organisations to privacy risks and more. What if someone includes proprietary information in a GenAI prompt? And without training in how to properly evaluate outputs, people might base recommendations on invented data or biased prompts.

Don’t overlook people

GenAI tools will be able to quickly synthesise information from multiple sources to aid in human decision-making. And, given that 74% of breaches reportedly involve humans, governance of AI for defence ought to include a human element as well.

Enterprises would do well to adopt a responsible AI toolkit, such as PwC’s, to guide the organisation’s trusted, ethical use of AI. Although AI is often considered purely a technology function, human supervision and intervention are also essential to its highest and best uses.

Ultimately, the promise of generative AI rests with people. Every savvy user can, and should, be a steward of trust. Invest in them so they know the risks of using the technology as an assistant, co-pilot or tutor. Encourage them to critically evaluate the outputs of generative AI models in line with your enterprise risk guardrails. Rally security professionals to follow responsible AI principles.

Contact us

Ulvi Cemal Bucak
Digital Services Cyber Consulting Partner, PwC Türkiye
Tel: +90 212 326 6648

Cem Aracı
Digital Services Leader, PwC Türkiye
Tel: +90 212 326 6840