Financial advisors have a fiduciary obligation to act in their clients’ best interests, and at the same time are prohibited by state and SEC rules from making misleading statements or omissions about their advisory business. These responsibilities extend to any technology used in the process of giving advice: A recommendation made with the aid of technology still needs to be in the client’s best interests, and the technology needs to carry out its functions as described in the advisor’s marketing materials and client communications.
In order to adhere to these regulatory standards of conduct while using technology, however, advisors need at least a baseline knowledge of how the technology works. On the one hand, an advisor must understand how a tool processes and analyzes client information to produce its output in order to have a reasonable basis for relying on that output when making a recommendation in the client’s best interest. On the other hand, the advisor needs to understand the process the tool follows in the first place, to ensure it actually matches the process described in the advisor’s advertising and client communications.
The recent rise of Artificial Intelligence (AI) capabilities embedded within advisor technology throws a wrinkle into how advisors adhere to their fiduciary and compliance obligations when using technology. Because while some AI tools (such as ChatGPT, which produces text responses to an advisor’s prompt in a chat box) can be used simply to summarize or restate the advisor’s pre-determined recommendations in a client-friendly way, other tools digest the client’s data and output their own observations and insights. Given the ‘black box’ nature of most AI tools, this raises questions about whether advisors are even capable of acting as a fiduciary when giving recommendations generated by an AI tool, since there’s no way of vetting the tool’s output to ensure it’s in the client’s best interests. Which also gives rise to the “Catch-22” of using AI as a fiduciary: even if an AI tool did provide the calculations it used to generate its output, they would likely involve far more data than the advisor could possibly review anyway!
Thankfully, some software tools provide a middle ground between AI used ‘just’ to communicate the advisor’s pre-existing recommendations to clients, and AI used to generate recommendations on its own. An increasing number of tools rely on AI to process client data, but instead of generating and delivering recommendations directly, they produce lists of suggested strategies that the advisor can then vet and analyze for appropriateness for the client. In essence, such tools serve as a ‘digital analyst’ that can review data and scan for planning opportunities faster than the advisor can, leaving the final decision of whether or not to recommend any specific strategy to the advisor themselves.
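To make the ‘digital analyst’ pattern concrete, the following is a minimal sketch (in Python) of what such a human-in-the-loop workflow might look like under the hood. Every name in it (ClientProfile, suggest_strategies, advisor_review) is hypothetical, and the AI step is stubbed out with simple rules, since the point is the control flow: the tool surfaces suggestions with rationales, and nothing becomes a recommendation until the advisor approves it and documents why.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    """Hypothetical, simplified client data -- for illustration only."""
    name: str
    age: int
    marginal_tax_rate: float
    has_unrealized_losses: bool

@dataclass
class Suggestion:
    """A candidate strategy surfaced by the tool, NOT yet a recommendation."""
    strategy: str
    rationale: str
    approved: bool = False
    advisor_note: str = ""

def suggest_strategies(client: ClientProfile) -> list[Suggestion]:
    """Stand-in for the AI 'digital analyst': scans client data for
    planning opportunities and returns suggestions with rationales."""
    ideas = []
    if client.has_unrealized_losses:
        ideas.append(Suggestion(
            strategy="Tax-loss harvesting",
            rationale="Client holds positions with unrealized losses."))
    if client.marginal_tax_rate <= 0.12:
        ideas.append(Suggestion(
            strategy="Partial Roth conversion",
            rationale="Current bracket may be below the expected future bracket."))
    return ideas

def advisor_review(ideas: list[Suggestion], approve) -> list[Suggestion]:
    """Human-in-the-loop gate: nothing becomes a recommendation until the
    advisor vets it and documents the reasoning (the 'show your work' step)."""
    recommendations = []
    for idea in ideas:
        idea.approved, idea.advisor_note = approve(idea)
        if idea.approved:
            recommendations.append(idea)
    return recommendations

if __name__ == "__main__":
    client = ClientProfile("Sample Client", 45, 0.12, True)
    ideas = suggest_strategies(client)
    # The approval function is scripted here; in practice, this is where the
    # advisor exercises judgment and records it for the compliance file.
    recs = advisor_review(
        ideas,
        lambda s: (s.strategy == "Tax-loss harvesting",
                   "Reviewed against the client's goals and IPS."))
    for rec in recs:
        print(f"RECOMMEND: {rec.strategy} ({rec.advisor_note})")
```

The key design choice in this (again, purely illustrative) sketch is that the tool’s output is quarantined as ‘suggestions’ until the advisor’s review attaches an approval and a note, creating the documented reasoning trail that lets the advisor ‘show their work’.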
The key point is that while technology (including AI) can be used to support advisors in many parts of the financial planning process, the obligation of advisors to act in their clients’ best interests (and, from a regulatory perspective, to ‘show their work’ in doing so) makes AI tools unlikely to replace the advisor’s role in giving financial recommendations. Because ultimately, even as technology becomes ever more sophisticated, the clients whom advisors work with remain human beings – which means it takes another human to truly take their best interests to heart!