Beyond the question of data privacy, users also have to consider the possibility of bias being introduced into AI algorithms through the data they're fed. As users validate and cleanse the data to be plugged into a model, Preece says, they should be mindful of the limitations of the information they use, the sampling techniques they employ, and the risk of importing biases through the categories or groups within a population they sample from.
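To make that check concrete, a first pass might compare each group's share of a training sample against its share of the underlying population. The Python sketch below is a minimal illustration; the "region" column, the population shares, and the 10-percentage-point threshold are hypothetical assumptions, not part of any published framework.

```python
import pandas as pd

# Hypothetical training sample: client records with one demographic attribute.
sample = pd.DataFrame({
    "client_id": range(8),
    "region":    ["urban"] * 6 + ["rural"] * 2,
})

# Assumed population benchmarks (e.g., from census or firm-wide records).
population_share = {"urban": 0.55, "rural": 0.45}

# Compare each group's share of the sample against its population share;
# a large gap flags possible sampling bias before the data feeds a model.
sample_share = sample["region"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    gap = observed - expected
    flag = "CHECK" if abs(gap) > 0.10 else "ok"
    print(f"{group}: sample {observed:.0%} vs population {expected:.0%} [{flag}]")
```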
“Machines are very good at processing reports, performing tasks, and understanding the properties of vast quantities of data that are beyond the comprehension of a human,” Preece says. “But they don’t possess fundamental ethical attributes that people have, like client loyalty and respect.”
The CFA Institute’s framework also highlights the issue of model interpretability, emphasizing the need for users to understand how a machine arrives at a given result. On a related note, it says users have to verify a model’s accuracy by training and evaluating it on a sample data set, ideally holding back part of that sample for evaluation, before applying it to real-world data.
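As a minimal illustration of that train-and-evaluate step, the Python sketch below fits a model on one portion of a sample data set and measures its accuracy on a held-out portion before anything reaches a live environment. The synthetic data, the choice of logistic regression, and the 25 percent test split are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical labeled sample standing in for historical client data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out a portion of the sample so the model is evaluated on data
# it has never seen, before it touches any real-world inputs.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on the held-out portion; deployment would only follow if
# accuracy (and other checks, e.g. subgroup performance) clears a
# pre-agreed bar set by the governance process.
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2%}")
```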
“From an accountability standpoint, there should also be a robust governance structure around the deployment of these technologies,” Preece says. “Are you making sure there are appropriate checks and balances, that there are thorough reviews before a model is put into a live environment? And are you considering ethical conflicts as part of that governance and oversight mechanism?”