Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.
Imagine being able to free up your time for client meetings and strategic planning while a tireless assistant handles routine tasks and even helps analyze complex data. That's the potential of artificial intelligence (AI) when it comes to improving a wealth-management practice. AI tools are already making waves in the financial world and are becoming increasingly user-friendly. Yet, there are numerous pitfalls to consider when using AI in an RIA practice given regulatory and legal concerns.
Unfortunately, those RIAs who don’t use AI will likely find themselves falling behind their competitors who do. I will explain what AI is and how it’s impacting the financial services industry; some regulatory and legal concerns with respect to the use of AI; and how RIAs can take their first steps to utilizing AI in their practice responsibly.
What can AI do for your RIA practice?
AI can enhance almost every aspect of an RIA’s practice, including performing tasks related to portfolio management, client onboarding and servicing, operations, and marketing.
When it comes to portfolio management, AI can process significant amounts of historical and research data to identify investments suitable for clients, monitor investment portfolios for undesirable risks, and rebalance portfolios based on desired target allocations.
For clients, AI tools can enhance and streamline the client experience by facilitating onboarding, gathering and verifying information from clients, crafting targeted client communications, facilitating client education on financial topics, streamlining reporting for clients to help them better understand their financial situation, and helping clients track their progress towards achieving their financial goals.
When it comes to operations, AI can streamline or automate data entry, generate reports on the firm’s financial and investment performance, and identify opportunities to streamline processes and workflows.
For marketing, AI tools can help an RIA identify competitive threats and opportunities, craft and distribute marketing content, and analyze what marketing strategies are (and are not) working for the RIA.
What concerns arise when it comes to using AI in your practice?
Before diving into the exciting opportunities for using AI tools, RIAs must understand that there are numerous regulatory and legal risks to consider. In this section, I’ll focus solely on data safeguarding and accuracy concerns.
First and foremost, RIAs must safeguard confidential information when using AI tools. Depending on the circumstances, AI tools use, process, and store the data provided to them to perform tasks. Because they learn from that data, those tools can disseminate information to other users not affiliated with the RIA.
Whether an RIA is registered with the SEC or a state, the firm has obligations to protect clients from inappropriate sharing of their nonpublic personal information. In addition, RIAs are often subject to confidentiality obligations imposed by third parties who share their information with the RIA subject to a non-disclosure or other agreement. As such, it behooves RIAs to understand how the AI tools they use process, store, and share the information provided to them, and to confirm that these practices are consistent with the RIA's obligations to clients and other third parties. RIAs must be vigilant in selecting AI tools that prioritize data security and adhere to ethical and regulatory guidelines.
Second, AI tools can produce incorrect or biased results, often called "hallucinations," when prompted to perform certain tasks. This can arise for several reasons. AI models learn from the data they are trained on, and they may be fed bad or biased data. If the training data is incomplete, inaccurate, or biased, the model can develop faulty patterns and produce nonsensical outputs when encountering new information. For instance, an AI trained on news articles with a political slant might generate text reflecting that bias, even if factually untrue.
When an AI tool is trained too closely on a specific dataset, it can become overly reliant on memorizing individual data points instead of learning the underlying patterns. This can lead to the model failing to generalize to new data and producing irrelevant or incorrect outputs for unseen situations.
AI models often require clear and specific prompts or context to understand what is expected of them. Ambiguous or poorly formulated prompts can lead the model to make assumptions or fill in the blanks based on its limited understanding, potentially resulting in outputs that don't align with the intentions of the user. As such, it’s vital that RIAs that utilize AI tools to generate factual output confirm the accuracy of the information provided by the AI tool with independent sources before using the output in their practices.
The need to provide accurate information to clients is not only borne out of ethical obligations to clients but also regulatory mandates including, among other things, regulations that govern the fiduciary obligations of RIAs as well as regulations governing marketing and advertising by RIAs.
In addition, regulators have expressed concern over biased results generated using AI tools. In August 2023, the SEC proposed new rules to address potential conflicts of interest when investment advisers use predictive data analytics, including AI tools, to interact with clients. These rules would require advisers to identify and mitigate any conflicts arising from such technologies. While not yet adopted, this regulation would seemingly pose a significant burden on RIAs looking to use AI tools.
Considering the concerns described above, RIAs who decide to use AI tools must adopt policies, procedures, and controls reasonably designed to ensure that sensitive information is appropriately safeguarded and that any results generated by any AI tools are accurate and unbiased.
How can RIAs use AI tools responsibly?
While the concerns over the responsible use of AI should be taken seriously, fortunately there are numerous low-risk ways for RIAs to use AI tools to benefit their practices.
For many RIAs, it’s a matter of getting started. The best way to start is with free, user-friendly AI tools like ChatGPT, Google Bard/Gemini, Bing Chat, and Perplexity.
These AI tools generate output based on “prompts” provided by users, which are simply natural-language queries like those you might submit to a search engine. However, these tools are far more powerful than search engines and can perform more complex tasks and analyses.
Yet, it’s vital for RIAs to practice the skill of prompting because the quality of the prompt is strongly correlated with the usefulness of the output.
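The point that prompt quality drives output quality can be made concrete. The following is a minimal sketch (the function name and field labels are illustrative, not part of any specific AI tool's interface) showing how a vague prompt and a well-structured one differ: the stronger prompt specifies a role, a task, the audience context, and the desired output format.

```python
def build_prompt(role, task, context=None, output_format=None):
    """Assemble a structured prompt from optional components.

    Vague prompts supply only a task; stronger prompts also
    specify the role, audience context, and output format.
    """
    parts = [f"You are {role}.", f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

# A vague prompt: task only, generic role.
vague = build_prompt("an assistant", "Explain tax-loss harvesting.")

# A specific prompt: role, task, context, and format all stated.
specific = build_prompt(
    "a financial educator writing for retail investment clients",
    "Explain tax-loss harvesting in plain language.",
    context="The audience has no financial industry background.",
    output_format="Three short paragraphs, no jargon.",
)
```

Submitting `specific` rather than `vague` to any of the chat tools named above will typically produce output that needs far less editing, because the model no longer has to guess the audience, tone, or length.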
When choosing tasks for AI tools to perform, RIAs should start with those they want done more efficiently. Here are eight sample prompts that RIAs can use to jump-start their exploration process:
- Tell me 10 specific strategies other investment advisers are employing to gain more clients in 2024.
- Convert this Microsoft Word document outlining the benefits and drawbacks of Section 529 plans into a PowerPoint presentation.
- Write an explanation of tax-loss harvesting that can be understood by clients who are not financial industry professionals.
- Provide me with 10 specific ideas on how RIAs are streamlining their client onboarding procedures in 2024. Cite specific tools being used by these RIAs.
- Rewrite this client newsletter to make it more easily understood and engaging for clients in line with our brand guidelines. [insert brand guidelines]
- Draft a 1,500-word article outlining the most important issues to consider when deciding whether to make contributions to a traditional IRA account versus a Roth IRA account.
- Summarize in one paragraph this PDF that describes key developments arising out of Secure Act 2.0.
- Summarize this compliance policy to make it easier for our employees to understand in plain English.
Each of the above tasks can be performed without having to disclose sensitive data. Nonetheless, it is still important to confirm the accuracy of output through separate sources.
The only way to get better at using AI tools is to experiment, and I strongly recommend exploring what these tools can do, ideally utilizing multiple platforms as each seems to have its strengths and weaknesses.
From there, it’s up to RIAs as to whether to use more advanced tools which may require additional safeguards to ensure responsible use.
Conclusion
By experimenting with AI, you’ll be pleasantly surprised as to how it can benefit your RIA practice when done in a responsible manner.
Richard Chen is a managing partner with Brightstar Law Group, a law firm that serves investment advisory firms by providing proactive business-minded solutions pertaining to corporate and securities law-related matters. Among other things, our firm provides counsel with respect to securities and compliance matters (including representation in SEC examinations), private fund formation, corporate formation and structuring, business transactions (including M&A and joint ventures), contract drafting and negotiation, employment law matters, operational due diligence, and succession planning. For more information, please visit our website at www.brightstarlawgroup.com or call us at 917-838-7398.