Mr. Rich Apostolik is the President and Chief Executive Officer of the Global Association of Risk Professionals (GARP). Under his leadership, GARP launched two world-leading risk certifications: the Financial Risk Manager (FRM®) and the Sustainability and Climate Risk (SCR). In this interview with PKU Financial Review, Mr. Apostolik says that we are still in the early stages of artificial intelligence, but it is moving very quickly.
PKU Financial Review: Nowadays, institutions and companies in some countries have begun appointing Chief Sustainability Officers (CSOs), while China's institutions have not yet adopted this practice. What are your thoughts on the job requirements and role of CSOs?
Rich Apostolik: The CSO role is interesting. I'll start off by saying I don't believe it is a long-term role. If there is one within an organization, fine. But if you look at an organization itself, there are really two sides to climate risk that are important. You have the financial-impact side of climate risk, which is generally being dealt with by the Chief Risk Officer within the organization, and the second is the impact that your organization itself has on the climate. That's where sustainability officers generally fit within an organization. If an organization does in fact have a Chief Sustainability Officer, I think you'll find that most of them are in what's called the "real economy", meaning not within the financial services industry, but within industrial activities and other types of organizations.
Risk management relates more to the financial impact of climate, so the CROs will generally be dealing with climate risk as it relates to its financial impact. In many cases, Chief Sustainability Officers were appointed as a public relations move. It made the organization feel good. As I mentioned earlier, there's a reputation risk issue. It let them present themselves to the industry in a very positive way, as organizations that take climate change very seriously.
Those officers were really focused in many ways on the supply chain that the organization itself was dealing with. They were working with the businesses within the company to ensure they could measure the impact that the business itself was having on the environment. They also tend to have a wider lens than just climate: they may take in broad governance issues and some social issues in addition to the impacts on sustainability.
But I think organizations are looking to embed climate change within their overall structure, not just within a certain department, but within every department and every activity. Once that becomes embedded, there may not be a full role for a Chief Sustainability Officer going forward. That's why I'm a little bit skeptical as to whether the Chief Sustainability Officer is a long-term role. If it exists, it lasts until the embedding of climate-related activities takes place within the organization; then it transitions out as a full-time role and is absorbed into every other activity within the organization. So that's my personal view; we'll see over the coming years whether that pans out or not. That's how I look at it at the moment.
PKU Financial Review: With the swift evolution of AI technology, the financial sector's landscape is undergoing constant transformation. This trend brings both unprecedented opportunities and challenges for risk management. Could you delve into the impact of AI on risk identification and assessment, citing real-world examples? Furthermore, how do prominent financial institutions worldwide harness AI technology to streamline decision-making in risk management?
Rich Apostolik: It's really interesting in that area. We're actually developing a new certification program on artificial intelligence in risk management that we're going to be offering this fall. So it's an important and growing area, and I think it's in its very early phases. Generative AI, as you're aware, is the new area that is taking the world by storm, but it also raises a lot of risk management related issues.
So I'm not going to say that we really have a lot of answers as to how risk management might be dealing with things like artificial intelligence and generative AI. The first wave of adoption that we've seen in this area, though, has come in regulatory compliance, dealing with financial crime, credit risk, modeling and data analytics, cyber risk, and climate risk related activity.
Looking at it in a couple of different ways: firms have been building what they call "virtual experts". A user can ask a question and receive a generated summary answer, built from documents and other data sources that are either developed within the organization or brought in from external sources. In effect, it is automating what have been very time-consuming tasks. This is all very basic stuff. As we learn more and more about it, it's also being used to accelerate the writing of code, and it's been really helpful in that regard. I've been told of examples of organizations using this where what had taken weeks is now taking literally just hours because of AI and generative AI.
Financial crime is another area. Generative AI can sort through a lot of data related to financial crime; it can draft suspicious activity reports based on customer and transaction information; and it can automate the creation and updating of customer risk ratings. In that way it can be used to help protect the organization's viability and fight financial crime.
With credit risk, it can summarize customer information and help accelerate what banks call the end-to-end credit review process. It can draft credit memos, contracts, and things along those lines. Many financial institutions are using AI and generative AI to craft credit reports and extract customer insights from credit memos. So there's a variety of uses for it within the credit function of an organization.
With modeling, there's a pretty obvious one: it can accelerate the migration of legacy programming languages, for example from what they call SAS or COBOL to Python. It can also automate the monitoring of model performance and generate alerts. In terms of cyber risk, it can check for cybersecurity vulnerabilities within an organization, and it can use natural language to generate detection rules and accelerate code development. It's also useful for what they call "red teaming", which means simulating adversarial strategies and testing attack scenarios.
In terms of how it's analyzing data, I think using AI for modeling is a real issue, because the model is essentially learning on its own. For an organization to be able to explain to its senior management what a model is doing becomes very difficult. The model is learning on the fly, so how do you explain that in a way that makes your organization feel comfortable? You also have to be concerned with this thing called "hallucinations", which you may be familiar with, where the system takes in data but makes up an answer that doesn't really exist. That's very concerning, and we're not fully there in terms of dealing with those issues. There's also the issue of ethics and bias: how do you protect against bias that might be developing within a model that is self-generating code and doing all these other things you've asked it to do? So we're still in the early stages, but it's really moving quickly.