Why CISOs must cultivate a cyber-secure workforce in the age of AI
In brief:
- Organizations scaling AI solutions gain efficiency but face heightened cybersecurity risk as staff mishandle sensitive data and fall prey to AI-powered scams.
- A 2024 musanid survey highlights that nearly 80% of respondents worry about AI's role in cyberattacks and 39% lack confidence in responsible AI use.
- 64% of CISOs are not satisfied with the non-IT workforce's adoption of cybersecurity best practices, underscoring the need for better employee training to reduce risk.
After months of experimentation, organizations are moving to implement Artificial Intelligence (AI) solutions at scale, and the enterprise software they already use for daily workflows is increasingly AI-powered too. While they hope to reap dividends in efficiency, productivity and creativity, the cyber function must navigate this transition carefully.
Already, companies have reported challenges as their employees rush headlong into AI. Staff have dropped sensitive intellectual property (IP) into external AI models. They have been fooled by AI-powered deepfakes, as with the deepfaked Chief Financial Officer (CFO) requesting a transfer of funds reported in Jeddah1. Nearly 80% of respondents to the 2024 musanid Human Risk in Cybersecurity Survey (via musanid.sa) expressed concern about the use of AI in carrying out cyberattacks, and 39% said they were not confident they knew how to use AI responsibly.
The promise of contemporary AI is to democratize access to advanced analytics across business units and staff, far beyond the confines of the IT department. But this only magnifies a longstanding worry among cyber professionals about security practices and awareness in the workforce. Training and education account for nearly 50% of the literature on organizations’ cyber management, making it the largest topic in this space, according to musanid analysis. Furthermore, 64% of Chief Information Security Officers (CISOs) polled by the global musanid organization are not satisfied with the non-IT workforce’s adoption of cybersecurity best practices. How can organizations better prepare their workforce for the cyber risks that come with advanced AI adoption?
Safer technology; safer workforce
User-friendly interfaces are a hallmark of contemporary AI, offering non-tech staff the ability to perform more advanced data and analytics workflows through channels like natural language querying. But that simplicity is deceptive. Beneath the surface lies software and supply chain complexity into which many enterprises lack visibility, especially in second-, third- or fourth-party solutions. Users need to understand how data is being used, such as to train models, as well as the risks around data breaches and leakage.
The amount of corporate data funnelled into chatbots by employees rose nearly fivefold2 from March 2023 to March 2024, according to one study of 3 million workers. Among technology sector employees, 27.4% of that data was classified as sensitive, up from 10.7% the previous year. This puts organizations at higher risk of data exfiltration and the bypassing of security controls and processes. Threats mount as more powerful AI solutions access more data and developers try to apply AI to datasets that are not yet authorized, classified or authenticated, amplifying any weaknesses in existing practices and protocols.
The cybersecurity implications of AI use in the wider workforce accentuate a longstanding concern among CISOs and their teams about weak adherence to cybersecurity protocols. According to the musanid 2023 Global Cybersecurity Leadership Insights Study, 64% of CISOs were not satisfied with the non-IT workforce’s adoption of cybersecurity leading practices. Among respondents, weak compliance with established leading practices beyond the IT department was cited as the third-biggest internal cybersecurity challenge, and human error continued to be identified as a major enabler of cyberattacks.
Firms have long struggled with the “shadow IT” phenomenon, in which software solutions are adopted ad hoc and outside established governance frameworks. AI is worsening the problem: so many tools and solutions are now available to teams, and the risks of data and IP exposure grow as employees feed more sensitive information into AI systems, such as confidential customer details, source code and research and development materials. This is taking place amid the already frenetic pace of digital initiatives, in which the cyber function must carefully balance lending its support and experience to enable digital transformation without leaving the organization exposed.
It also comes at a time of rising regulatory concern, as governments appreciate how cyber breaches can ricochet through an economy and impact critical infrastructure. Regulatory bodies are increasing obligations surrounding disclosure of cybersecurity incidents, with executives becoming personally liable for failures in some instances.
Stronger armor: A three-pronged approach to technology, governance and operations
Given the competitive pressure on AI adoption, organizations must not allow cybersecurity governance to become a barrier to progress. Instead, the function needs new approaches to support responsible acceleration.
To nurture a cyber-secure workforce, the function needs visibility into how AI tools are being used across the business, which requires a three-pronged approach centered on technology, governance and operations.
On the technology front, security and network companies are already developing solutions that enable cyber teams to detect when certain AI services are being used, track data flow and lineage, and automate compliance through common controls and tests. Others leverage data already in an organization’s network to monitor activity, such as documents being uploaded or prompts entered into tools like ChatGPT. AI is also increasingly embedded in incident management processes. But technology is supplemental to a deeper evaluation of a company’s risk profile.
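To illustrate one small piece of this, the sketch below flags traffic to known generative AI services from an exported web proxy log. It is a minimal illustration only, assuming a particular log format; the domain list and column names are placeholders, not a description of any specific vendor’s product.

```python
# Minimal sketch: count requests per user to known generative AI domains
# from a CSV export of web proxy logs. Domain list and log columns are
# assumptions made purely for illustration.
import csv
from collections import Counter

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}

def flag_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests per user to known generative AI services."""
    usage = Counter()
    with open(proxy_log_csv, newline="") as f:
        # Expects columns: user, destination_host, bytes_out
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[row["user"]] += 1
    return usage

if __name__ == "__main__":
    for user, count in flag_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user}: {count} requests to generative AI services")
```

In practice, detection like this would feed a broader monitoring pipeline alongside data loss prevention and identity controls rather than stand alone.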
Cybersecurity policy should focus on threat modeling from the outset, including an inventory of third- and fourth-party AI services, from the architecture and service itself to the integrations and APIs required. Modeling these threats in aggregate allows organizations to quantify and prioritize risk and informs the design of appropriate controls. Organizations also need to define the procedures for ensuring data protection and privacy provisions in the development of AI models and be accountable for the outputs of their algorithms. This should include not just compliance requirements but ethical considerations.
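As an illustration of what such an inventory might look like, the following sketch defines a minimal record for third- and fourth-party AI services and a crude additive score to rank them for deeper threat modeling. The field names, example entry and scoring rule are assumptions for illustration only.

```python
# Minimal sketch of an AI-service inventory record used to rank services
# for threat modeling. Fields and scoring weights are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIServiceEntry:
    name: str
    provider: str
    tier: int                                               # 3 = third party, 4 = fourth party
    data_classes: list[str] = field(default_factory=list)   # e.g. ["customer PII", "source code"]
    trains_on_our_data: bool = False
    integrations: list[str] = field(default_factory=list)   # APIs and connectors in use

    def risk_score(self) -> int:
        """Crude additive score: more data classes, more integrations and
        vendor training on our data all push a service up the review queue."""
        score = len(self.data_classes) * 2 + len(self.integrations)
        if self.trains_on_our_data:
            score += 5
        return score

inventory = [
    AIServiceEntry("DocSummarizer", "VendorA", tier=3,
                   data_classes=["customer PII"], trains_on_our_data=True,
                   integrations=["SharePoint connector"]),
]
for entry in sorted(inventory, key=AIServiceEntry.risk_score, reverse=True):
    print(entry.name, entry.risk_score())
```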
Threat evaluation must be supported by an effective operational system that can evolve to cope with what are essentially “living” AI solutions and data sets, ensuring continuous data verification, classification and scoping, including tagging for sensitivity and criticality. Our research has found that some companies have as little as 20% of their data tagged or classified. Realistically, companies should prioritize tagging and verification for their most critical and sensitive data to ensure they have the right safeguards for issues like identity, access management, data flow, data access and lineage.
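One simple way to operationalize that prioritization is sketched below: datasets are ranked for tagging and verification by assumed sensitivity and criticality weights. The labels, weights and dataset names are placeholders rather than a real classification scheme.

```python
# Minimal sketch: rank untagged datasets for classification work first,
# weighting assumed sensitivity and business criticality. Labels and weights
# are illustrative placeholders.
SENSITIVITY_WEIGHT = {"public": 0, "internal": 1, "confidential": 3, "restricted": 5}
CRITICALITY_WEIGHT = {"low": 1, "medium": 2, "high": 4}

datasets = [
    {"name": "crm_exports", "sensitivity": "restricted", "criticality": "high", "tagged": False},
    {"name": "marketing_assets", "sensitivity": "internal", "criticality": "low", "tagged": True},
    {"name": "rd_designs", "sensitivity": "confidential", "criticality": "high", "tagged": False},
]

def tagging_priority(ds: dict) -> int:
    """Higher score = tag and verify sooner; already-tagged data drops to the bottom."""
    if ds["tagged"]:
        return 0
    return SENSITIVITY_WEIGHT[ds["sensitivity"]] * CRITICALITY_WEIGHT[ds["criticality"]]

for ds in sorted(datasets, key=tagging_priority, reverse=True):
    print(f'{ds["name"]}: priority {tagging_priority(ds)}')
```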
Threat modeling and access controls are critical to implementing an effective cybersecurity governance model, but organizations must be cognizant of the risk of falling into old and ineffective response mechanisms. One approach is to place an AI expert on the board for a six-month rotation with the power to devise a new governance model, including a focus on education and training. Accountability is also required to ensure responsibility for AI governance is apportioned appropriately, covering custody, ownership and use.
A cyber-informed workforce to combat employee error in the age of AI
While exotic AI hacking attempts like deepfake CFO bank transfer requests dominate the headlines, employee error remains the most prominent vulnerability for most organizations. AI introduces a new threat vector, requiring controls that prevent unauthorized personnel from intentionally or unintentionally acquiring sensitive information they may not previously have had access to or interaction with. Indeed, the entire promise of AI is to give employees the chance to query and extract value from more data than before. That can only be delivered if cyber guidance is equally easy for them to obtain.
One common trait of the successful companies analyzed in our 2023 Global Cybersecurity Leadership Insights Study, dubbed “Secure Creators,” was the integration of cybersecurity into all levels of the organization, from the C-suite to the workforce at large. Yet only half of cybersecurity leaders overall said their cyber training is effective. Can AI itself deliver more effective cyber communication and give employees the support they seek?
More sophisticated and intuitive chatbots, for example, could answer employee questions about sensitive or restricted data, reducing burnout among cyber teams fielding queries and sparing employees from wading through lengthy and complex policy documents. Combining such easy querying with control mechanisms can reduce shadow IT risks like dropping sensitive data, IP or restricted material into AI models.
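One hypothetical form such a control mechanism could take is a lightweight pre-submission check that warns an employee when a prompt bound for an external AI model appears to contain classified content. The markers and patterns below are placeholders; a production control would integrate with the organization’s own data classification and DLP tooling.

```python
# Minimal sketch: warn before a prompt containing apparent classification markers
# or identifier-shaped strings is sent to an external AI model. Markers and
# regex patterns are illustrative placeholders only.
import re

RESTRICTED_MARKERS = ["CONFIDENTIAL", "RESTRICTED", "INTERNAL ONLY"]
PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-style identifier
                re.compile(r"\b[A-Z]{2}\d{6,10}\b")]     # generic ID-number shape

def check_prompt(prompt: str) -> list[str]:
    """Return a list of reasons the prompt should be blocked or reviewed."""
    findings = [f"classification marker '{m}' present"
                for m in RESTRICTED_MARKERS if m in prompt.upper()]
    findings += [f"possible identifier matching {p.pattern}"
                 for p in PII_PATTERNS if p.search(prompt)]
    return findings

issues = check_prompt("Summarize this CONFIDENTIAL customer record: 123-45-6789 ...")
print("Blocked:" if issues else "Allowed", issues)
```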
Where appropriate, gamifying cyber training to improve digital literacy appeals to people’s competitive nature and involves them in learning- and reward-driven programs, improving both engagement and interest. This is particularly important for communicating the risks of AI models that go beyond conventional approaches like email phishing, such as deepfakes and synthetic media. Such solutions highlight the myriad positive ways in which technology itself can help tackle mounting cybersecurity challenges.
Chief Data Officers, Centers of Excellence and design patterning
To be cyber-secure in the AI era, it is not enough to rely on training and technology; organizational redesign, new reporting lines and new processes must be pursued to allow reasonable levels of adoption and to avoid cyber risk being worked through on a case-by-case basis. Governance protocols should not become a means of freezing AI activity unduly. Instead, companies need to tweak, and at times reimagine, institutions and leadership reporting to create the right incentives and structures.
For instance, Chief Data Officers (CDOs) have tended to focus on harnessing data for business value, with limited integration with the technology function and even weaker intersection with the cyber unit and CISOs. That needs to change in the AI era, when a cybersecurity lens is needed throughout the data management life cycle as more data becomes usable in the business. CDOs must focus more on data governance, quality and privacy, and a broader range of skills is now required across the cybersecurity executive team as a whole.
The breadth of skills needed in today’s function is expanding in several directions at once. Here, we outline some of the many cybersecurity executive profiles that have emerged in recent years. The best approach is to build a team that balances a broad combination of disciplines, with the understanding that each has its own strengths and weaknesses.
Summary
Organizations face heightened cyber risks with AI integration, requiring a multi-faceted approach to cybersecurity. Training, governance and operational strategies must evolve to address the complexities of AI, ensuring responsible use and robust data protection. Centers of Excellence emerge as pivotal in orchestrating secure AI adoption and mitigating the shadow IT phenomenon.