The practical application of AI
The use of machine learning algorithms is enabling highly efficient securities processing for investment managers, without the need for internal development.
The Science of Investment spoke with Andreas Burner, chief innovation officer at SmartStream, about the incentives and barriers to adoption of artificial intelligence (AI) and machine learning (ML) applications.
How would you characterise the pace of adoption for AI and ML in the industry today?
We have clients across the buy and sell side, and we are in the back, middle and front office. The interesting scenario for SmartStream is that, in the financial services industry, the companies are far ahead of the regulator. We are helping regulators in discussions about what they should and should not allow. We have set up some principles around what we are and are not doing. For example, the ‘explainability’ of AI is a must-have for all applications. In our newest product, SmartStream Artificial Intelligence Reconciliations (AIR), which uses white-box AI, we are using technology that allows the user to fully understand why a certain decision was taken. We use supervised and unsupervised learning, we use deep learning, but the end result is always fully explainable, so that regulators get a clear understanding of what is executed and what really runs the processes and the workflows.
What would you use to explain that logic; is it that an individual decision can be explained, or is it that the principles by which that decision was arrived at can be explained?
There is an interesting discussion behind that. In machine learning we always talk about scoring and confidence levels. If one of my data scientists comes back with a task and says, ‘It’s scoring 99.9 per cent’, that means that for every 1,000 transactions a trading firm does, one will fail. That’s significant. The problem with ‘explainability’ is that, even if you explained the other 999 transactions, it is not enough, because there might be one transaction out of the 1,000 whose failure cannot be explained. That’s the difference between explaining why one decision was made by machine learning and having explainable rules.
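The arithmetic behind that scoring figure can be sketched in a few lines; the volumes here are illustrative, not taken from SmartStream's systems:

```python
# A 99.9 per cent score still implies residual failures at trading volumes.
def expected_failures(confidence, volume):
    """Expected number of failed decisions for a given confidence score."""
    return volume * (1 - confidence)

print(expected_failures(0.999, 1_000))      # about 1 per 1,000 transactions
print(expected_failures(0.999, 1_000_000))  # about 1,000 per million
```

This is why a high confidence score alone is not the same thing as explainability: the residual failures scale with volume.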
Coming back to the auto-trading example, machine learning can tell you that when the euro/US dollar exchange rate is high and the Chinese market is low, the trading or investment decision should be to buy or sell. So we don’t use AI to make the decision independently; we use AI to create the rules and then extract decisions from that model. This is a scenario that regulators really like. When you execute it, you don’t use machine learning and AI, and therefore you reduce the risk a lot.
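The pattern described here, training offline and running only extracted, human-readable rules in production, can be sketched as follows. Everything in this sketch is hypothetical: the features, thresholds and data are invented, and a real system would derive its rules from a properly fitted model rather than this toy learner.

```python
# Toy illustration: "train" on labelled history, extract a plain rule,
# and execute only the rule -- no model runs in production.
from dataclasses import dataclass

# Invented training data: (eurusd_rate, china_index) -> desired action.
HISTORY = [
    ((1.15, 2900), "buy"),
    ((1.18, 2850), "buy"),
    ((1.05, 3400), "sell"),
    ((1.02, 3500), "sell"),
]

def learn_thresholds(history):
    """Training step: derive simple cut-offs from labelled history.
    A real system might fit a decision tree and export its splits."""
    buys = [x for x, label in history if label == "buy"]
    sells = [x for x, label in history if label == "sell"]
    rate_cut = (min(r for r, _ in buys) + max(r for r, _ in sells)) / 2
    index_cut = (max(i for _, i in buys) + min(i for _, i in sells)) / 2
    return rate_cut, index_cut

@dataclass
class Rule:
    """A fully explainable rule -- the only thing that runs at execution."""
    rate_cut: float
    index_cut: float

    def decide(self, eurusd_rate, china_index):
        if eurusd_rate > self.rate_cut and china_index < self.index_cut:
            return "buy"
        return "sell"

rate_cut, index_cut = learn_thresholds(HISTORY)
rule = Rule(rate_cut, index_cut)
print(f"IF eurusd > {rate_cut:.3f} AND china < {index_cut:.0f} THEN buy")
print(rule.decide(1.17, 2880))  # high euro, low Chinese market -> "buy"
```

The point of the design is that the learning step can be arbitrarily complex, but the artefact it produces is a rule a regulator or auditor can read and verify.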
In which exact applications do you use it most?
We have been developing machine learning technology for all of SmartStream’s products and also for our managed services. The question that has been driving our innovation in the last two years has been, how can we make financial systems as simple and accessible as those in other domains, such as Google’s navigation system? They do not just digitise a process, they create intelligence in the background that gives the user more information, like expected time of arrival or traffic information. We wanted to incorporate that idea into our product suite. For each reconciliation system, the installation, set-up and configuration take a lot of time.
So the use case for our clients is quite complicated: they have to figure out what risks they are reducing by putting a reconciliation system in place and compare that against their total cost of ownership. What we have done with SmartStream AIR is reduce installation and set-up times to zero, because it’s a cloud system and it configures itself. All the user needs to do is upload files to the system and start the AI.
The machine learning part is able to identify the file content, map across files and work out matches in SmartStream AIR in seconds. It is transparent, so the criteria used for matching and mapping can be shown to the user. What we did was simplify the process of getting to the reconciliation results as quickly as possible.
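SmartStream does not publish AIR's internals, so the sketch below only illustrates the general idea of content-based mapping and matching: infer which columns in two files correspond by comparing their values, then match records on the mapped key. All file layouts, column names and data are invented.

```python
# Toy content-based reconciliation: map columns by value overlap, then
# match records on the mapped key. The criteria remain fully inspectable.
def column_values(rows, col):
    """All values appearing in one column of a file."""
    return {row[col] for row in rows}

def map_columns(rows_a, rows_b):
    """Pair each column of file A with the B column sharing most values."""
    mapping = {}
    for col_a in rows_a[0]:
        best = max(
            rows_b[0],
            key=lambda col_b: len(
                column_values(rows_a, col_a) & column_values(rows_b, col_b)
            ),
        )
        mapping[col_a] = best
    return mapping

def match(rows_a, rows_b, key_a, mapping):
    """Match records across files on the mapped key column."""
    key_b = mapping[key_a]
    index = {row[key_b]: row for row in rows_b}
    return [(row, index.get(row[key_a])) for row in rows_a]

# Two invented files with different headers but corresponding content.
file_a = [{"trade_id": "T1", "amount": "100"}, {"trade_id": "T2", "amount": "250"}]
file_b = [{"ref": "T2", "value": "250"}, {"ref": "T1", "value": "100"}]

mapping = map_columns(file_a, file_b)   # {'trade_id': 'ref', 'amount': 'value'}
pairs = match(file_a, file_b, "trade_id", mapping)
```

Because the mapping is just a dictionary of column pairs and the matching is a key lookup, both can be shown to the user, which is the transparency property the answer describes.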
Do you think that users are comfortable in their understanding of what principles and best practices ought to be put in place around AI?
It is happening. Regulators themselves need AI and machine learning to process the data, and that is putting them in a good position to know what other users might need to consider in terms of AI. We have spoken with some of the most advanced regulators in this area, such as the Monetary Authority of Singapore, and they are still developing their position on AI and ML.
Ethics is a huge issue when applying machine learning and AI. Explaining where the data comes from is also important, as the risk becomes a liability issue when implementing those technologies. AI software scales up massively: for every decision, systems can scan millions and millions of records. Depending on what kind of data is processed, it might contain biased information, so it is essential to consider the ethics. Applications are really case-by-case at the moment.
Regulators don’t really have the manpower at the moment to look at everything, so I see that banks are still somewhat on their own, deciding on how they implement certain technologies and data management.
What has been your own approach to managing these issues?
We were quite early in coming to that stage. We have run innovation labs for more than three years now, and when we started we had to come up with our own rules to apply at SmartStream. We do not allow black-box AI, for example, and we are always very careful with data: where it comes from, and how we use it.
So we had to define our own restrictions at SmartStream, because we understand our clients want a very conservative approach to AI and machine learning. During the process we found that this does not harm innovation; it is just a question of what to apply, and at what point. So, for example, for our cash and liquidity system we do settlement predictions for each of the payments, and there is no harm in using deep learning in that space, because there is always a human who oversees the settlement. For that scenario we used the best approaches to predict settlement times and then find out where payments would be missed, and that works fantastically well.
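The human-in-the-loop settlement prediction described above can be illustrated with a toy sketch. The "model" here is a stand-in heuristic, not SmartStream's deep-learning approach, and all names, figures and cut-offs are invented.

```python
# Toy settlement-time prediction with a human in the loop: the model only
# flags likely-late payments; a person makes the final call.
def predict_settlement_hours(amount, counterparty_avg_hours):
    """Hypothetical stand-in model: large payments settle a little slower."""
    return counterparty_avg_hours + (0.5 if amount > 1_000_000 else 0.0)

def flag_for_review(payments, cutoff_hours):
    """Surface payments predicted to exceed the cut-off for human review."""
    flagged = []
    for pay_id, amount, avg_hours in payments:
        eta = predict_settlement_hours(amount, avg_hours)
        if eta > cutoff_hours:
            flagged.append((pay_id, eta))
    return flagged

# Invented payments: (id, amount, counterparty's average settlement hours).
payments = [("P1", 2_000_000, 4.0), ("P2", 50_000, 1.5)]
print(flag_for_review(payments, cutoff_hours=4.0))  # only P1 exceeds the cut-off
```

The design choice matters for the governance point in the answer: because a human reviews every flagged payment, a more opaque predictive model is acceptable here than in a fully automated process.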
In other applications we take a more conservative approach, because we want to automate processes that run self-sufficiently at 3 o’clock in the morning, and that requires more caution. ●