Risk management
Boards face sharper challenges in navigating a risk environment that has become more expansive, complex and interconnected. A recent EY Board Risk survey indicates escalating concern among boards that a risk event will severely impact their business. In an increasingly complex risk environment that is likely to both persist and evolve, boards need to support their organizations in anticipating and adapting to key and emerging risks, rather than reacting to them. Leading boards continue to add value by supporting management in horizon scanning and scenario planning to identify and capitalize on changes in the business environment before they materialize into significant risks.
Ongoing economic uncertainty, growing geopolitical turmoil, cybersecurity, artificial intelligence and other disruptive technologies, labour shortages, the cost-of-living crisis and extreme weather events continue to be areas of focus for organizations. These risks also test organizations’ ability to navigate simultaneous or intersecting crises and multiple concurrent risk events or shocks. While geopolitical uncertainty could dampen profit expectations, we’re hearing that the top two barriers to maximizing revenue growth and profitability in 2024 are increasing investment costs and slowing economic growth in key markets.
We expect that companies will need to understand how the dynamics of their business have evolved and to anticipate future shifts, including their competitive position within their target markets. Audit committees may want to consider whether finance teams have adeptly adjusted to this environment by integrating economic considerations, customer demand projections, and dynamic pricing strategies to alleviate these challenges. In a slower-growth environment with greater costs of doing business and a higher external cost of capital for investment, funding ongoing and future transformation will likely hinge on internal operational rationalization and cost takeout initiatives. Accordingly, we anticipate that this may be one area where leading organizations leverage artificial intelligence (AI) to make better use of their own data, supplemented with external sources, to have a clearer view of their addressable markets.
To help internal auditors and their stakeholders, including audit committees, better understand the risk environment and prepare audit plans for the upcoming year, the Internal Audit Foundation recently issued its survey report, 2024: Risk in Focus. We’ve excerpted some notable highlights from this survey:
The three areas of highest risk for organizations were cybersecurity, human capital, and business continuity. For most regions, regulatory change also ranks among the top five risks, with the exception of Africa and the Middle East, where financial liquidity is more of a concern.
In terms of future risk, there is consensus worldwide that risk levels will rise in the next three years for digital disruption and climate change.
Although risk levels may vary from region to region, the areas of highest effort for internal audit are generally similar. Worldwide, the top areas of audit effort were as follows:
1. Cybersecurity
2. Governance/corporate reporting
3. Business continuity
4. Regulatory change
5. Financial liquidity
6. Fraud
For North America, governance/corporate reporting is a low risk for organizations but a high-effort area for internal audit. Steep rises in audit effort are expected to deal with digital disruption and climate change, offset by reductions in audit effort relating to financial liquidity and governance/corporate reporting.
As it relates to mitigating cybersecurity-related risks, most North American Chief Audit Executives (CAEs) noted efforts to strengthen training and awareness to combat evolving threats and social engineering attacks. Organizations are also expected to run through extensive hacking, defence, and recovery scenarios to ensure the executive team and board are ready for strategic decision-making if a ransomware attack occurs. This is combined with the use of ethical hackers to test online and operational defence controls.
As it relates to human capital, the report notes that few companies have fully redefined their work processes in the post-pandemic era. CAEs can help raise boards’ awareness of differences in work practices across business units so that boards are more in tune with cultural realities within the organization.
Securing the right talent and skill sets for internal audit is a continuing challenge. Evolving technologies, such as AI, the growing complexity of new and changing regulations, and the dynamic nature of risks all require new skills and capabilities within internal audit. Increasing the use of guest auditors for specific assignments and boosting rotations from within the business are strategies organizations are deploying to deepen the bench strength and capabilities of the internal audit function.
Internal audit functions and audit committees may want to review this report to benchmark their own internal audit risk areas and planned audit efforts.
Internal Audit Foundation - 2024: Risk in Focus.
Boards and audit committees are revisiting risk management practices to see that risks are managed effectively across the organization. They’re also building more resiliency against low-likelihood, high-impact risks, including the ability to rapidly restore business operations. Given the likely continued waves of disruption ahead, leading organizations are making investments to drive resiliency into their long-term strategies and operating models.
We’re seeing leading organizations reassess their enterprise resiliency capabilities and seek ways to increase their maturity on this front. Below are key considerations for assessing a company’s resiliency capabilities across the critical components that leading companies are focused on:
The cybersecurity threat landscape continues to evolve. Traditional threats such as IP theft and ransomware, coupled with the adoption of new technologies (e.g., the metaverse, ChatGPT and other generative AI), are dramatically affecting the cybersecurity landscape.
EY’s 2023 Global Cybersecurity Leadership Insights study revealed that a wave of new technology implementation is coming, with 84% of organizations in the early stages of adding two or more new technologies to their existing suite of cybersecurity solutions. Ironically, it’s the very scale and complexity of security measures that now pose the greatest threat to efficient cybersecurity. Rationalization of cyber tools should be a continual consideration so that cyber teams are not distracted by the management and operation of multiple complex cyber technologies.
The Securities and Exchange Commission (SEC) has released formal requirements for breach reporting that came into effect in December 2023, requiring cyber incidents to be reported within four business days of a registrant determining that an incident is material. Additionally, Bill C-27, which is focused on consumer privacy protection in Canada, as well as various provincial laws, including Law 25 in Quebec, continues to progress and will have widespread impact on the cybersecurity landscape.
A steep increase in sophisticated cyber attacks on Canadian businesses has highlighted gaps in governance around third-party risk management and in the resiliency of business operations during and after an attack. Responsibilities at both the management and board levels are being scrutinized. There have been instances of relevant management – such as Chief Information Security Officers – being held personally and publicly responsible for significant security breaches. This has driven up the costs associated with the role and prompted greater scrutiny from candidates, who want assurance that they will have the autonomy and support to execute well in the role.
Given the dynamic cybersecurity landscape, audit committees should stay attuned to evolving governance and oversight practices, disclosure requirements, reporting structures and metrics, and understand the implications for how the company stays in compliance with requirements.
Learn more about the EY 2023 Global Cybersecurity Leadership Insights Study: Click here
Artificial intelligence and machine learning (AI/ML) have climbed to the top of the strategic agenda for many boards in recent times. The emergence of generative AI tools capable of producing rich, prompt-based content and code has further fueled this focus.
Fraud detection, automating operational tasks, identifying possible cyberattacks, and regulatory compliance are some of the use cases that organizations are exploring to enhance their risk management and compliance-related efforts. However, early adopters of AI/ML may face increased risks, such as lawsuits arising from the use of web-based copyrighted material in AI outputs, concerns about bias, a lack of traceability due to the “black box” nature of AI applications, questions about the reliability of outputs, and threats to data privacy and cybersecurity. As a result, many organizations are opting for a cautious approach to AI/ML.
Organizations are initially implementing applications in non-customer-facing processes or to aid customer-facing employees, where the primary goals are improving operational efficiency and augmenting employee intelligence by offering insights, recommendations and decision-making support. Risk teams are beginning to leverage AI in broader contexts, such as scanning and reviewing regulations and for process, risk and control diagnostics. In the future, we expect AI use cases to move to higher-impact financial use cases, including market simulation and portfolio optimization, as well as customer-facing applications. Advancements in AI automation will result in greater reliance on autonomous decision-making without a human in the loop. As AI functionality matures to higher-risk and more autonomous use cases, greater investments in safety guardrails, including real-time monitoring and third-party validations, will be needed to keep pace.
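To make the risk-team use cases above concrete, the following is a minimal, illustrative sketch of unsupervised anomaly detection for fraud monitoring, a common early entry point for AI in risk functions. It is written in Python with scikit-learn; the feature set, synthetic data and contamination rate are hypothetical assumptions for illustration, not a recommended implementation.

```python
# Illustrative sketch only: unsupervised anomaly detection for fraud monitoring.
# The feature set, synthetic data and contamination rate are hypothetical
# assumptions, not a production configuration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: amount ($), hour of day, days since last activity.
typical = rng.normal(loc=[120.0, 13.0, 2.0], scale=[40.0, 3.0, 1.0], size=(1000, 3))
unusual = rng.normal(loc=[5000.0, 3.0, 30.0], scale=[500.0, 1.0, 5.0], size=(10, 3))
transactions = np.vstack([typical, unusual])

# Train an unsupervised anomaly detector. 'contamination' encodes an assumed
# prior for the share of transactions expected to be anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for human review")
```

In practice, flagged items would feed a human review queue rather than drive automated action, consistent with the human-in-the-loop posture described above.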
With AI usage increasingly democratized, robust and agile governance has become an urgent board priority. Audit committees should ask management and internal audit about risk assessments around AI and related AI governance, including how risks around the ethical use of AI, accuracy of outputs, plagiarism, copyright and trademark violations, and protection of company IP were considered. Additionally, audit committees should ask management whether and how AI is used within financial reporting processes, including related internal control impacts.
Want to learn more about EY.ai? Click here
The Artificial Intelligence and Data Act (AIDA), first tabled in June 2022 by the Government of Canada as part of Bill C-27, the Digital Charter Implementation Act, is Canada’s regulatory response in providing guardrails for the responsible design, development and deployment of AI systems. Over the last year, the AIDA framework has undergone significant public consultation, which raised critical issues with the first draft. While public hearings and calls for amendments are still in progress, there is broad consensus that revisions to AIDA are needed to align it more closely with other proposed regulatory schemes, including the EU AI Act, and to provide more comprehensive guidance. The recommended updates include identifying the types of AI applications that would be considered high-risk, adding specific provisions for general-purpose AI systems such as ChatGPT, and more clearly defining the obligations under AIDA for the different actors involved in the design, development, distribution and operation of AI systems. The timeframe for the next version of AIDA is still to be announced; however, it’s becoming more probable that AIDA will be severed from Bill C-27, allowing the less contested Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act to proceed to final readings in the House of Commons and Senate, and on to Royal Assent.
Policymakers, alongside the C-suite and boards, are considering what AI could mean for capital markets, the economy and society. In the last quarter of 2023, a number of pronouncements were made that demonstrate how important it is to policymakers that AI be trustworthy and developed in a responsible manner.
On October 30, 2023, the Biden Administration issued an Executive Order (EO) on AI with the goal of promoting the “safe, secure, and trustworthy development and use of artificial intelligence.” This EO represents a significant development on the subject of accountability in how AI is developed and deployed across organizations.
Some of the ways in which the EO may impact issuers include:
The National Institute of Standards and Technology (NIST) is tasked with developing guidelines and best practices for “developing and deploying safe, secure and trustworthy AI systems” related to the deployment of AI systems in the critical infrastructure sector, incorporating the existing NIST AI Risk Management Framework as appropriate.
Companies may want to evaluate their existing AI risk management frameworks against the NIST AI Risk Management Framework to develop a baseline and prepare for additional guidance to be released from relevant agencies and regulatory bodies.
Federal procurement policy relating to AI will be revised to address the “effective and appropriate use of AI, advance AI innovation and manage risks from AI.” Because the federal government is the largest customer in the US economy, its purchasing requirements often become the industry standard – making procurement policy a very strong tool for promoting policy goals.
A White House fact sheet on the order can be found here. Refer also to this EY Biden Administration news alert along with this summary of key AI-related policy issues.
On December 8, 2023, EU lawmakers reached provisional agreement on the proposed revisions to the EU AI Act. The draft regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact. A significant revision this fall was the inclusion of specific transparency obligations for foundation models and the need for greater oversight of high-impact general-purpose AI models that could cause systemic risk. Issuers should pay attention to the definitions of high-risk and prohibited AI practices to ensure that, if any proposed AI use cases fall into one of these categories, their obligations in the EU are well understood and can be met. Further details on the EU AI Act can be found here.
Another notable pronouncement in December 2023 was the release of the ISO AI Management System standard (“ISO/IEC 42001:2023” or “AIMS”). This is the world’s first international standard specifying requirements for establishing, implementing, maintaining and continually improving an organization’s AI management system. Issuers looking to demonstrate the trustworthiness of their AI program may want to consider performing a current-state assessment against the AIMS standard in preparation for ISO certification.