Nov 2 2017

Financial regulator warns on artificial intelligence risk


The increased use of artificial intelligence in financial services could pose a risk to stability, according to an international regulator.

The Financial Stability Board, which includes all G20 major economies, has published a report on the implications of the increasingly widespread use of machine learning and artificial intelligence in the financial sector.

The report raised concerns about an “arms race” in the use of artificial intelligence, with financial services companies investing in the technology simply because their competitors are.

While the report said this development was likely to bring benefits, such as more efficient processing of information and improved regulatory compliance, it warned these would not come without risks, including the potential emergence of new “systemically important” players.

The report said: “AI and machine learning services are increasingly being offered by a few large technology firms.

“Like in other platform-based markets, there may be value in financial institutions using similar third-party providers given these providers’ reputation, scale, and interoperability.

“There is the potential for natural monopolies or oligopolies. These competition issues – relevant enough from the perspective of economic efficiency – could be translated into financial stability risks if and when such technology firms have a large market share in specific financial market segments.

“These third-party dependencies and interconnections could have systemic effects if such a large firm were to face a major disruption or insolvency.”

The FSB, which is chaired by Bank of England governor Mark Carney, warned this issue would become particularly problematic if these AI and machine learning tools fell outside the regulatory perimeter, or if their providers were not familiar with the relevant rules and laws.

These issues could be exacerbated by the opaque nature of some of these models compared with more traditional ones.

The report said: “The lack of interpretability may be overlooked in various situations, including, for example, if the model’s performance exceeds that of more interpretable models.

“Yet the lack of interpretability will make it even more difficult to determine potential effects beyond the firms’ balance sheet, for example during a systemic shock.

“Notably, many AI and machine learning developed models are being ‘trained’ in a period of low volatility.

“As such, the models may not suggest optimal actions in a significant economic downturn or in a financial crisis, or the models may not suggest appropriate management of long-term risks.”

The Financial Conduct Authority has said it is assessing the suitability processes used by robo-advice firms as part of its investigation into automated advice models.

damian.fantato@ft.com