Top Takeaways from ISACA AI Innovation Workshop

Author: Ramsés Gallego, ISACA Evangelist and ISACA Hall of Fame inductee, and Megan Hall, CPA, CISA, Chief Information and Infrastructure Officer
Date Published: 18 November 2024
Read Time: 3 minutes

Just as the internet created a sea change in the technology landscape, so, too, will artificial intelligence. Across digital trust fields such as cybersecurity, risk, privacy, audit and governance, AI is both shifting practitioners’ responsibilities and creating new opportunities for impact.

On 22 October, we co-led an AI Innovation workshop leading up to ISACA Conference Europe in Dublin, Ireland. During the workshop, we explored how professionals in each of the digital trust fields can thrive in this new era of AI and discussed – both collectively and in small groups – the types of controls needed to ensure AI implementations are responsible and effective.

Below are five of our top takeaways from the workshop:

Expect AI to proliferate

If you think AI technologies are prevalent now, we have not seen anything yet.

In the not-too-distant future, we envision an epoch of further democratized access to AI. Everyone, everywhere will have access to a large number of AIs that will help them “automagically” complete and complement their job. This will include numerous single-purpose algorithms dedicated to enhancing specific roles and industries.

Given how prominent a role AI will play in the enterprise landscape, organizations should be proactive about setting an AI strategy and AI acceptable use policies. Those that choose to be reactive will be left behind.

All digital trust professionals have a role to play

Even if your organization does not devote resources to building and coding AI, there is a high probability that professionals across all digital trust fields will come into contact with AI in their roles. Therefore, learning about AI is becoming increasingly essential.

For example, in the GRC realm, AI-powered risk management systems will deliver more timely insights on risks. AI will also factor into regulatory compliance monitoring, review of third-party contracts and SOC reports, third-party and customer analysis to support second-line risk management processes, and analysis of data existence and usage to facilitate data governance and data management efforts.

In most cases, AI will not erase jobs, but it might build a wall between professionals who understand AI and those who do not.

AI still has to earn our trust

While AI has advanced beyond its infancy, making massive impacts in nearly every industry, it would be irresponsible to count on AI without robust controls and safeguards in place.

Sometimes, AI lies – or at least, hallucinates – when driven by imperfect data inputted by imperfect humans.

That is why digital trust professionals need to keep asking a series of questions on an ongoing basis: Who (or what) has access to what, and for how long? How deep is the access? And says who? These questions should be at the core of the AI conversation.

Let’s apply our collective experience to AI

AI does not necessarily introduce new risks; in many cases, it amplifies existing risks.

With that in mind, let’s apply sound risk and governance practices that have worked well for decades in other contexts to this new, AI-intensive landscape. The foundational information technology general controls (ITGCs) we are accustomed to implementing are applicable to AI. ISACA’s Artificial Intelligence Audit Toolkit is an excellent resource to explore additional controls.

Success factors for mitigating AI risk include sound data governance and data management, a defined AI vision and strategy, collaboration between business and IT leadership, a consistent framework for evaluating initiatives, and a holistic approach that considers all relevant stakeholders.

Don’t forget to be human

AI is powerful technology that seemingly is gaining momentum each week, but that does not mean it is equipped to replace human practitioners.

In many cases, machine learning and AI technology can do around 80 percent of a job flawlessly, but what remains to be done is of the utmost importance: the critical thinking, problem-solving and understanding of ethical implications for which humans are irreplaceable.

Just because we leverage AI in a business process does not mean we have to let AI make the final recommendation. Let the machines do the work, but do not forget to be human.