500 words. Updated 10 May 2024.

It is tempting to challenge AI to do risk identification.

After listening to a financial podcast [1] touching on post-secondary (higher) education in Canada, I posed this question to AI: "What are the crucial strategic risks that post secondary educational institutions are facing in British Columbia in 2024?"

Post-secondary planning
The podcast itself made clear that universities are affected by public policy on immigration, housing and municipal regulation. The pressing news for colleges and universities is that federal caps on international student visas mean revenue losses, particularly in Ontario and British Columbia.

Partial marks, at least, go to ChatGPT: its answer named Financial Sustainability first among eight categories or issues (without mentioning the federal caps). Other general headings in its answer included Demographic Changes and Technological Disruption.

Strategic issues and risk categories are not risks
This reminds me of discussions with my former Director at Risk Management Branch, where we facilitated risk assessments daily with agencies across the province. We would despair when a client simply took a list of the "top 10" strategic issues published by one of the Big 4 accounting firms or an industry source, held it up, and said: "There, our risk assessment is done." We would always have to reply, "Yes, those are the issues, but what are the risks?"

Whether supplied by consulting firms or AI, a list of general headings applicable to the whole sector – while useful for inspiring one's thinking – does not constitute a risk assessment. This is the fundamental point of this little experiment with AI – at least the current free version of ChatGPT that I accessed – as a risk assessment tool. It gave "categories" or "issues", not risks, which must be formulated in relation to goals in a given context. For example, "Financial Sustainability" is a category; "the federal cap on student visas will reduce international tuition revenue, undermining our enrolment and budget goals" is a risk.

Can the human mental process in risk ID be replicated by AI?
This raises the question: could AI go into an analysis of a given university's operations and formulate specific risks? Under the right circumstances it probably could, but the results, I believe, would have an artificial or contrived character and would require vetting by people. The reason is that risk identification, to be useful, requires human perception. In other words, it takes a penetrating understanding of the immediate context, not a generalized, textbook-style parsing of information. In a person, that understanding is subtle, informed by professional memory and intuition.

I am not against AI assisting with risk ID (as long as control over it is decentralized and it does not infringe on personal privacy). AI could supply intelligence on industry best practices and help make the effort comprehensive by naming risk categories, as we saw above.

As with all technological change, the challenge is to harness the benefits while mitigating detrimental unintended consequences, which often don't come to light until a later stage of adoption.

Notes
[1] Michael Campbell's MoneyTalks podcast, episode of January 27, 2024.