Produced By: Ensombl
Artificial Intelligence (AI) in financial services has leapt from a distant possibility to an immediate reality in just a few short years. Once relegated to theoretical discussions and lab experiments, AI tools have quickly found footholds in everyday tasks, especially in highly regulated and data-intensive fields such as wealth management and financial planning. As the technology evolves, professionals in this industry are grappling with how AI might reduce manual workload, improve client outcomes, and close persistent service gaps. But as with any transformation in a critical field—particularly one that deals intimately with people’s life savings—the need for professionalism and ethics has never been more urgent.
In this article, we explore these issues through the lens of an illuminating conversation between host Patrick Gardner and Clinton Cunningham, founder of an innovative startup called Advisewell. Advisewell aims to reshape how financial advice practices operate by combining cutting-edge AI with deep domain expertise. Drawing from their dialogue, we will delve into how “virtual back-office staff”—in the form of AI agents—promise to expand capacity, enhance client interactions, and unlock new efficiencies for wealth managers. More importantly, we will examine the ethical cornerstones required when deploying AI in an industry so deeply entwined with trust and fiduciary responsibility.
Digital transformation has been a hot topic in financial services for over a decade, resulting in mobile apps, digital portals, and automation that save time and reduce costs. Yet the actual creation of formal financial advice—especially in wealth management—remains cumbersome. Advisors, paraplanners, and other team members often rekey data across multiple systems, gather endless product information, produce compliance documents, and run iterative financial models, all in pursuit of an end product: the Statement of Advice (SOA).
Many professionals working in advisory or paraplanning roles can attest to the administrative complexity. Multiple platforms—Customer Relationship Management (CRM) tools, product comparison websites, Excel spreadsheets, and more—are juggled daily. On top of that, most advice documents run to dozens of pages, each containing disclaimers, disclosures, and disclaimers for disclaimers!
Enter AI. The advent of AI agents (also called “co-pilots” or “virtual employees”) offers the promise of handling much of this manual, repetitive work. These advanced tools go beyond simpler forms of automation: they can pull data from multiple sources, apply certain rules and calculations, generate well-structured documents, and even adapt to new information.
But for this promise to be realized, AI needs the right data and the right training. It also needs professionals who can maintain oversight, ensuring the AI’s outputs align with both ethical guidelines and the intricate regulations of financial services.
In discussing the impetus behind Advisewell, Clinton Cunningham highlights the persistent “back-office bottleneck” in wealth management. A large part of the day for advisors and their teams is spent creating, reviewing, and finalizing advice documents. Clinton’s central thesis is that building AI-driven “virtual employees” can eliminate most of these manual steps altogether.
Imagine telling an AI, “Generate a research report for my client who wants to establish a Self-Managed Super Fund (SMSF), focusing on capital stability and moderate risk.” Instead of a paraplanner or junior associate rifling through product disclosure statements (PDSs), the AI would handle the data collection. It would retrieve the most recent PDS from official websites, confirm it is current, incorporate necessary client information, and even format that data into a standardized template.
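To make the idea concrete, here is a minimal Python sketch of such a workflow. Everything in it is illustrative: the `fetch_pds` stub, the freshness threshold, and the report fields are assumptions for demonstration, not Advisewell's actual implementation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ProductDisclosure:
    product: str
    issued: date
    source_url: str
    text: str

def fetch_pds(product: str) -> ProductDisclosure:
    """Hypothetical stub: a real agent would retrieve the latest
    PDS from the provider's official website."""
    return ProductDisclosure(
        product=product,
        issued=date.today() - timedelta(days=30),  # pretend it was issued recently
        source_url="https://example.com/pds/capital-stable.pdf",
        text="...",
    )

def is_current(pds: ProductDisclosure, max_age_days: int = 365) -> bool:
    """Flag disclosures older than the firm's freshness threshold."""
    return (date.today() - pds.issued) <= timedelta(days=max_age_days)

def build_research_report(client_name: str, product: str) -> str:
    pds = fetch_pds(product)
    if not is_current(pds):
        raise ValueError(f"Stale PDS for {product}; human review required")
    # Populate a standardized template with client and product data.
    return (
        f"Research report for {client_name}\n"
        f"Product: {pds.product} (PDS issued {pds.issued})\n"
        f"Source: {pds.source_url}\n"
    )

print(build_research_report("Sample Client", "Capital Stable SMSF Option"))
```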
In theory, this shift offers a double win. First, advisors gain more time to do high-value work such as strategy discussion, relationship building, and more proactive engagement with clients. Second, the likelihood of human error (like copying and pasting data into the wrong section) decreases sharply.
However, from an ethical standpoint, delegating sensitive financial tasks to AI agents carries profound responsibility. Financial advisors maintain a fiduciary duty to their clients; trust is paramount. If the AI misinterprets product disclosures or fails to update a figure, the client may be harmed. Therefore, the question arises: How can we ensure that these AI-driven processes adhere to the same professional standards and ethical guidelines as any human advisor or paraplanner?
In Cunningham’s view, an AI agent is more than just a chatbot. It is a computer program with a degree of agency—it can understand higher-order objectives and perform tasks that affect the external world. The role of these agents in financial services is akin to a well-organized team in a fast-food kitchen, each agent specializing in a different station (e.g., retrieving CRM data, scanning product websites, or running mathematical models).
A “supervisory agent” makes sure each specialized agent works properly, engaging the right one at the right time. This is crucial for wealth management, where tasks interlock in a strict sequence, often regulated and replete with legal ramifications.
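A minimal sketch of this supervisory pattern, with each specialist reduced to a plain function, might look like the following. The station names and routing logic are illustrative assumptions only; a real system would wrap a language model and retrieval tools behind each agent.

```python
from typing import Callable

# Each specialized "agent" is modelled here as a plain function;
# real agents would wrap a language model plus external tools.
def crm_agent(task: str) -> str:
    return f"[CRM agent] retrieved client record for: {task}"

def product_agent(task: str) -> str:
    return f"[Product agent] scanned provider websites for: {task}"

def modelling_agent(task: str) -> str:
    return f"[Modelling agent] ran projections for: {task}"

class SupervisoryAgent:
    """Routes each step to the right specialist, in strict sequence,
    much like the pass in a well-run kitchen."""

    def __init__(self) -> None:
        self.stations: dict[str, Callable[[str], str]] = {
            "crm": crm_agent,
            "product": product_agent,
            "modelling": modelling_agent,
        }

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        # Engage the right agent at the right time, in order.
        return [self.stations[station](task) for station, task in plan]

supervisor = SupervisoryAgent()
for line in supervisor.run([
    ("crm", "client risk profile"),
    ("product", "capital-stable SMSF options"),
    ("modelling", "retirement balance projection"),
]):
    print(line)
```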
Agents can both interpret data and produce actions: for example, they might initiate an email to a client to confirm details, or fill in a compliance form. Because these agents are entrusted to perform tasks without constant human supervision, professionals must design robust checks and balances. A compliance breach could happen if the AI incorrectly interprets or distorts crucial financial information.
Practices such as human-in-the-loop review, systematic quality checks, and robust data validation steps help mitigate risks. Professional conduct standards (like those set by the Financial Planning Association or local regulatory authorities) should inform how these AI systems are trained and deployed. The principles of competence, fairness, and duty of care must be woven into the AI’s operational blueprint.
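One way to express such a human-in-the-loop checkpoint in code is sketched below. The reviewer names, audit log, and approval flow are hypothetical simplifications; the point is that nothing reaches a client until a licensed human signs off, and every decision is recorded.

```python
from dataclasses import dataclass, field

@dataclass
class DraftAdvice:
    client: str
    body: str
    approved_by: str | None = None
    audit_log: list[str] = field(default_factory=list)

def ai_generate_draft(client: str) -> DraftAdvice:
    # Stand-in for the AI agent's output.
    draft = DraftAdvice(client=client, body="Draft advice text ...")
    draft.audit_log.append("draft generated by AI agent")
    return draft

def human_review(draft: DraftAdvice, reviewer: str, approve: bool) -> DraftAdvice:
    """Record the reviewer's decision; every decision is logged."""
    if approve:
        draft.approved_by = reviewer
        draft.audit_log.append(f"approved by {reviewer}")
    else:
        draft.audit_log.append(f"rejected by {reviewer}; returned to AI queue")
    return draft

def release(draft: DraftAdvice) -> str:
    # Unreviewed output can never reach a client.
    if draft.approved_by is None:
        raise PermissionError("unreviewed advice cannot be released")
    return draft.body

draft = human_review(ai_generate_draft("Sample Client"), "J. Advisor", approve=True)
print(release(draft))
print(draft.audit_log)
```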
One of the standout points in the conversation is how Advisewell aims to shift away from a “feature list” mindset. Traditional software is marketed by enumerating features—“we have a built-in CRM,” “we offer an automated compliance module,” etc. But Clinton emphasizes skills and capabilities over features.
In simpler terms, the objective is for an advisor to say, “Create a draft Statement of Advice,” and the AI does everything from verifying client details to fetching product disclosures, updating compliance statements, generating projections, and formatting it all into a cohesive document.
When we talk about ethics, this outcome-based view implies that the AI is not just an isolated module but a direct participant in forming the final advice. All professional standards that apply to a human paraplanner must therefore extend to the AI’s contribution. The AI is effectively an extension of the firm’s professional identity.
For AI to generate sound outputs, the inputs must be valid, relevant, and up-to-date. As Patrick Gardner notes, the typical advice practice may be dealing with numerous data sources: CRMs, product databases, regulatory updates, external websites, and more. Ensuring that each piece of data is accurate, current, and consistent is already a challenge for human workers; it can be even more so for AI, which relies on large amounts of structured and unstructured information.
Any firm deploying AI agents must have a robust data governance policy. This includes:
- verifying the provenance of every source the AI draws on (CRM records, product databases, external websites);
- confirming that product information and pricing are current before they enter an advice document;
- restricting access to client data to authorized systems and staff; and
- keeping audit trails of what data the AI used, and when.
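A minimal sketch of record-level validation under such a policy might look like this; the field names and the 90-day freshness threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataPoint:
    name: str
    value: object
    source: str          # provenance: where the figure came from
    retrieved: datetime  # currency: when it was fetched

def validate(points: list[DataPoint], max_age: timedelta) -> list[str]:
    """Return governance violations; an empty list means the
    dataset may be passed to the AI agent."""
    issues = []
    now = datetime.now()
    for p in points:
        if not p.source:
            issues.append(f"{p.name}: missing provenance")
        if now - p.retrieved > max_age:
            issues.append(f"{p.name}: stale (retrieved {p.retrieved:%Y-%m-%d})")
    return issues

points = [
    DataPoint("super_balance", 412_000, "CRM export", datetime.now()),
    DataPoint("product_fee", 0.85, "", datetime.now() - timedelta(days=400)),
]
for issue in validate(points, max_age=timedelta(days=90)):
    print("GOVERNANCE:", issue)
```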
Additionally, wealth managers must stay vigilant about regulatory guidelines. For instance, if an AI is pulling live product pricing from an external website, the firm must guarantee that process meets legal standards for data use, client confidentiality, and disclaimers around forward-looking statements.
From an ethical standpoint, misusing data or failing to maintain its integrity can put a client’s interests at risk. If an AI tool inadvertently suggests an outdated product with less favorable terms, the client could suffer financial harm. Therefore, oversight structures such as mandatory compliance audits, user acceptance testing, and continuous quality assurance are critical.
The clearest manifestation of AI’s value in financial services is in creating a Statement of Advice (SOA)—a formal, regulated document summarizing a client’s situation, advisor recommendations, and supporting rationale. Traditionally, generating an SOA is labor-intensive. Paraplanners gather client data, update disclaimers, incorporate modeling outputs, and consult product databases to finalize every detail.
Using an AI agent approach, the same process might unfold as follows (a simplified sketch appears below):
- The AI verifies client details against the CRM and flags anything missing or inconsistent.
- It fetches the relevant product disclosures and confirms they are current.
- It inserts the firm’s compliance-approved statements and disclosures.
- It generates the financial projections that support the recommendation.
- It assembles everything into a cohesive, correctly formatted draft for human review.
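The sketch below strings those steps together as a single pipeline. Every function here is a stub standing in for a specialized agent; the structure, not the content, is the point.

```python
def verify_client_details(client: dict) -> dict:
    # A real check would reconcile the record against the CRM.
    assert client.get("name") and client.get("risk_profile"), "incomplete client record"
    return client

def fetch_disclosures(products: list[str]) -> dict:
    # Stand-in for the product agent's retrieval work.
    return {p: f"PDS text for {p}" for p in products}

def compliance_statements() -> str:
    # Only compliance-approved wording may enter the document.
    return "Approved disclosure and disclaimer wording ..."

def generate_projection(client: dict) -> str:
    return f"Projected outcomes for a {client['risk_profile']} portfolio ..."

def assemble_soa(client: dict, products: list[str]) -> str:
    client = verify_client_details(client)
    sections = [
        f"Statement of Advice for {client['name']}",
        compliance_statements(),
        generate_projection(client),
        *fetch_disclosures(products).values(),
    ]
    return "\n\n".join(sections)

print(assemble_soa(
    {"name": "Sample Client", "risk_profile": "moderate"},
    ["Capital Stable SMSF Option"],
))
```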
Through each step, the guardrails for professional ethics remain paramount. The AI must be taught to reference only compliance-approved language, refrain from suggesting products that have not undergone the firm’s due diligence, and highlight any potential conflicts of interest.
Interestingly, Clinton points out that AI can enable a shift in focus from “just getting the job done” to “finding the best possible solution.” If the AI handles 80–90% of the heavy lifting—collating data, structuring documents, and ensuring compliance—the human advisor can dedicate more time and energy to exploring creative strategies.
For instance, if a client’s goal is to retire early, the advisor can use the system to simulate multiple strategies, from more aggressive investment mixes to making lump-sum contributions at different intervals. The AI can run these scenarios, but a human must still interpret them through the lens of professional responsibility. Is the recommended strategy too speculative for the client’s risk profile? Does it align with the client’s real-life timeline and tolerance for market fluctuation?
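A deterministic version of this kind of scenario comparison is easy to sketch. The starting balance, contribution amounts, and return assumptions below are invented for illustration; deciding which scenario actually suits the client remains a human judgment.

```python
def project_balance(balance: float, annual_contribution: float,
                    annual_return: float, years: int) -> float:
    """Deterministic year-by-year projection of a retirement balance."""
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_contribution
    return balance

# Four hypothetical strategies for the same client.
scenarios = {
    "conservative (4% p.a.)": dict(annual_return=0.04, annual_contribution=15_000),
    "balanced (6% p.a.)":     dict(annual_return=0.06, annual_contribution=15_000),
    "aggressive (8% p.a.)":   dict(annual_return=0.08, annual_contribution=15_000),
    "balanced + lump sums":   dict(annual_return=0.06, annual_contribution=30_000),
}

for label, params in scenarios.items():
    result = project_balance(balance=300_000, years=15, **params)
    print(f"{label:<26} -> ${result:,.0f}")
```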
This approach—AI plus human oversight—embodies the highest ethical standard, ensuring each client’s unique situation receives the thoughtful analysis it deserves. When used responsibly, AI could empower financial professionals to become better stewards of their clients’ futures, not just more efficient.
One of the pressing ethical and professional considerations is data privacy. While certain AI models (like OpenAI’s ChatGPT) run on public clouds, many financial advisors now insist on more “on-premises” or private cloud solutions. As Clinton describes, advisors need to fine-tune the AI on their own data, while keeping that data inaccessible to external parties.
Compliance with regulations like the Australian Privacy Principles, GDPR, or other local privacy frameworks is non-negotiable. Advisors handle not only personally identifiable information (PII) but also granular financial data: superannuation balances, portfolio compositions, liabilities, and more. Inadvertently leaking any of this via a poorly configured AI system could breach confidentiality and erode client trust.
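One common safeguard is to redact identifying details before any text leaves the firm's environment. The sketch below uses a few illustrative regular expressions; a production system would rely on a vetted PII-detection tool and, as Clinton suggests, keep the data on private infrastructure.

```python
import re

# Illustrative patterns only; real PII detection is considerably harder.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "TFN":   re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # Tax File Number-like
    "PHONE": re.compile(r"\b(?:\+61|0)4\d{2}\s?\d{3}\s?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace identifying details with placeholders before any
    text is sent to an externally hosted model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

note = "Client Jane (jane@example.com, 0412 345 678) holds TFN 123 456 789."
print(redact(note))
```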
Equally relevant is the question of ownership. Who “owns” the analysis or the final Statement of Advice that the AI helps produce? From a legal perspective, the advisory firm must own the final output and remain liable for its accuracy. That means data can’t simply be processed and stored in a way that removes accountability from the firm.
A rigorous governance process must therefore underpin every AI project in financial advice. In many jurisdictions, if the AI produces misleading or faulty advice, the regulatory body will hold the licensed advisor or the firm accountable, not the technology vendor.
For advisors looking to adopt advanced AI solutions like Advisewell’s, the best first step is to revisit the code of ethics and professional standards that already guide their practice. This helps ensure a logical extension of existing commitments rather than a bolt-on afterthought. Below are some key areas to consider:
- Data governance and privacy: know where client data lives, who can access it, and how the AI uses it.
- Human oversight: define which outputs require review and who signs off before anything reaches a client.
- Accountability: the licensed advisor and the firm, not the technology vendor, remain responsible for the advice.
- Competence: train staff to interpret, question, and, where necessary, override AI outputs.
Clinton anticipates a formal launch of Advisewell in the near future, with early adopters already piloting the platform. He believes that within 12 to 18 months, interfacing with AI in financial advice will feel as intuitive as having a new staff member on board. The AI will be conversant in natural language—through email, a messaging platform, or even voice—and it will adapt over time to each firm’s evolving policies and workflows.
This kind of technology invites a cultural shift in the professional setting:
- Staff move from producing documents manually to reviewing and refining what the AI drafts.
- Interacting with software begins to feel like briefing a colleague: in natural language, through email, messaging, or voice.
- Continuous learning becomes routine, because teams must understand enough about the AI to question its outputs.
As AI adoption scales, so too will the scrutiny from clients, regulators, and professional bodies. Any misstep in compliance or oversight could set the industry back. That’s why professionalism—manifested as a sincere commitment to ethical conduct, continuous learning, and client-centric service—will remain a mainstay.
Firms should formulate and circulate AI governance policies, clarifying roles and responsibilities, including who ultimately signs off on crucial advice documents. Ongoing professional development should train staff to interpret and question AI outputs critically. And as AI-driven solutions become mainstream, professional bodies may update guidelines to reflect new best practices around AI adoption.
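At its most concrete, such a governance policy can be reduced to an enforceable mapping from document types to the roles that must sign off on them. The document types and role names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernancePolicy:
    """An AI governance policy reduced to its enforceable core:
    each document type maps to the role that must sign it off."""
    sign_off: dict

POLICY = GovernancePolicy(sign_off={
    "statement_of_advice": "licensed_adviser",
    "research_report": "paraplanner",
    "client_email": "licensed_adviser",
})

def can_release(doc_type: str, approver_role: str) -> bool:
    # Unknown document types default to blocked.
    required = POLICY.sign_off.get(doc_type)
    return required is not None and approver_role == required

print(can_release("statement_of_advice", "licensed_adviser"))  # True
print(can_release("statement_of_advice", "ai_agent"))          # False
```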
In a field as sensitive as financial advice, the infusion of advanced AI systems marks not just a technological milestone but also an ethical turning point. The conversation between Patrick Gardner and Advisewell’s Clinton Cunningham underscores how AI agents can streamline the creation of critical documents like Statements of Advice, manage back-office workflows, and liberate professionals to focus on more nuanced, strategic tasks.
But with great efficiency comes a greater onus on professional ethics. The AI does not absolve advisors of their responsibility to act in the client’s best interests. If anything, it intensifies the demand for rigorous data management, transparent governance, and continuous professional judgment. AI can quickly process vast datasets, automate routine tasks, and even propose dynamic strategies, but it does not replace the ethical and empathetic core of financial planning—at least not yet.
As we stand on the cusp of a new era where co-pilots and virtual employees become commonplace, it is the industry’s collective responsibility to anchor this transformation in trust, accountability, and integrity. For wealth managers, adopting AI is no longer optional if they aim to remain competitive and relevant. But navigating this transition demands more than just new software tools. It requires a renewed commitment to ethical conduct and a focus on equipping financial professionals to harness AI’s capabilities responsibly.
Ultimately, the measure of success will not be whether an AI can churn out documents more rapidly—it will be whether clients feel more empowered, more assured, and better served in their financial journeys. If AI is to be the new “virtual employee,” then it must embody the very principles—professionalism, ethics, and client-centric care—that define the best practitioners in wealth management today. By integrating robust oversight, data governance, and human expertise, the future of AI-driven financial advice can indeed be bright, balanced, and profoundly beneficial for professionals and clients alike.
Accreditation Points Allocation:
0.10 Technical Competence
0.10 Client Care and Practice
0.10 Regulatory Compliance and Consumer Protection
0.10 Professionalism and Ethics
0.40 Total CPD Points