A Policy Is Not a System: The Three AI Governance Gaps Facing Australian Advice Firms
ASIC's Report 798 warned AI adoption was racing ahead of governance in Australian advice firms. Two years on, the gap has three shapes. Most policies only cover one.

Most advice licensees have an AI policy. Most of them would not pass a governance test against ASIC's Report 798. Having the policy and passing the test are not the same thing.
In June 2024, ASIC reviewed 23 AFS and credit licensees and found AI adoption was racing ahead of the governance arrangements meant to hold it. That gap was the report's whole point. Two years on, the gap is wider. Adviser Ratings reports 82% of Australian advice businesses are using, piloting, or planning AI in the next 12 months. File notes (86%), marketing (48%), and SOA or ROA production (46%) lead the way.
The problem is that "AI governance" has quietly become shorthand for "we wrote a policy." A policy sits in a SharePoint folder. A governance system catches things.
That distinction matters because ASIC and industry media have now publicly framed 2026 as a year of accountability around AI usage in financial services. The question facing advice firms is no longer whether they are using AI. It is whether they can show a named, working control for each of the three shapes the risk actually takes.
Most policies only cover one.
The first is the vendor gap. Your firm signed a contract with an AI vendor. That contract defines where your client data travels, how it is processed, what happens when something goes wrong, and what rights you retain to exit.
Most advice firms have not read theirs closely since signing.
The questions are concrete and uncomfortable. Where is the data hosted? If the hosting is Australian, is training data also Australian? What sub-processors does the vendor use, and can you block new ones without renegotiating? What is the notification timeline for a breach? What is the exit clause if the vendor is acquired, changes pricing, or narrows product scope?
Large advice groups that sit inside APRA-regulated super trustees or insurers already face CPS 230 material service provider obligations on exactly these questions, with pre-existing arrangements captured from 1 July 2026 at the latest. Firms outside that scope still face the softer but unavoidable version through ASIC's outsourcing expectations under RG 104 and the incoming Privacy Act automated decision-making transparency obligations scheduled for December 2026.
The vendor gap is not about whether your AI tool is "safe." It is about whether your contract reflects the controls your board would expect.
The second is the record gap. It quietly breaks more firms than the other two.
Section 286 of the Corporations Act requires financial services records to be complete enough for someone to reconstruct what happened. When a human paraplanner drafts an SOA, the record is the draft, the review notes, and the approval trail. Simple.
When AI drafts the SOA, the record is more than the output. It is the prompt the adviser used. The inputs the model saw. The version of the model at the time. The edits applied before review. If any of those are lost, the record is incomplete.
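As a sketch of what a complete record would have to carry, assuming nothing about any particular tool: every component below has to be captured at generation time, because none of it can be reconstructed afterwards. The class and field names here are illustrative, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: field names are assumptions, not a prescribed schema.
# The substance is what a complete AI-assisted record has to carry.
@dataclass(frozen=True)  # frozen: the record cannot be edited after capture
class AiAssistedRecord:
    artefact_id: str          # the file note or SOA draft this record belongs to
    prompt: str               # the exact prompt the adviser used
    model_inputs: list[str]   # the documents and data the model saw
    model_version: str        # the model identifier at generation time
    raw_output: str           # what the model produced, before any edits
    edits_applied: list[str]  # the changes made before human review
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```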
That expanded record is a material change from traditional record-keeping, and most firms have not updated their practice for it. Few generic AI note-taking tools expose the underlying prompt. Fewer expose the model version or surface a complete audit trail. Fewer still lock that audit trail against later edits.
A file note generated by a tool with no audit layer is not a better-kept record than the old paper one. It is a faster-kept record with less provenance. ASIC's REP 798 language about "opacity" is pointed at exactly this.
The third is the human gap. Every AI-assisted advice artefact needs a named human who owns it. Not a team. Not a role. A person, by signature, every time.
Most firms already believe they do this. Fewer can prove it.
The test is simple. For the last fifty file notes or advice documents that used AI assistance in any form, can the firm produce a timestamped approval trail showing the named reviewer and the version they approved? If the answer is "yes, it is in the adviser's inbox somewhere," the answer is no.
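A hypothetical version of that test in code, assuming the firm can export its artefacts with whatever approval metadata they carry. The field names are assumptions; the substance is that every one of the fifty needs all three, and an approval sitting in an inbox produces none of them.

```python
def passes_fifty_artefact_test(artefacts: list[dict]) -> bool:
    """Audit the last 50 AI-assisted artefacts for a complete approval trail.

    Field names are assumptions about how the firm stores approvals.
    A missing or empty field on any artefact fails the whole test.
    """
    required = ("reviewer_name", "approved_at", "approved_version")
    recent = artefacts[-50:]
    return len(recent) == 50 and all(
        a.get(k) for a in recent for k in required
    )
```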
ASIC has been consistent about this since its first AI guidance. Licensed humans retain accountability. Delegating the drafting to AI does not delegate the liability. "Year of accountability" is not a catchphrase. It is a signal that sample-based reviews are coming, and the artefact showing a named, enforced reviewer will be the test.
The reason firms collapse the three gaps into a single policy is that they all involve AI, and AI feels like one thing. It is not.
The vendor gap is a contract problem. It lives with legal and procurement.
The record gap is an operations problem. It lives with paraplanning and compliance.
The human gap is a workflow problem. It lives with advice team leads.
A one-page AI policy cannot solve all three because the three sit in different parts of the firm. A governance system links them. A policy tells you to be careful. A system produces evidence when someone asks.
That is the difference REP 798 is pointing at when it talks about "governance arrangements."
This is the free consulting bit. One action per gap.
For the vendor gap, pull every active AI vendor contract this month. For each one, write a one-paragraph answer to four questions: where does data live, who are the sub-processors, what is the breach notification timeline, what is the exit clause. If you cannot answer in a paragraph, that is the work.
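One way to keep that exercise honest is to force every contract into the same fixed structure, so a blank answer is visible as a blank field. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class VendorContractReview:
    vendor: str
    data_residency: str       # where client data is hosted, processed, trained on
    sub_processors: str       # who they are; whether new ones can be blocked
    breach_notification: str  # the contractual timeline, in hours or days
    exit_clause: str          # what happens on acquisition, repricing, scope change

    def gaps(self) -> list[str]:
        """Return the questions this contract cannot yet answer in a paragraph."""
        return [name for name, value in vars(self).items()
                if name != "vendor" and not value.strip()]
```

A review whose gaps() comes back empty is done. Anything else is the month's work.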
For the record gap, pick one workflow where AI is already in use. File notes is usually the easiest. Document what the complete record needs to contain, including prompts and model metadata. Test whether your current tool captures it. If it does not, add the control or change the tool. Do not write a policy that requires something your systems cannot produce.
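The capture test can be just as mechanical. A sketch, assuming the tool can export a note with its metadata as a dict, and reusing the record components listed earlier:

```python
# The components of a complete record, mirroring the earlier sketch.
REQUIRED_FIELDS = ("prompt", "model_inputs", "model_version",
                   "raw_output", "edits_applied", "captured_at")

def missing_record_components(exported_note: dict) -> list[str]:
    """Return the record components the tool failed to capture.

    An empty list means the tool meets the documented record standard;
    anything else is the control to add, or the reason to change tools.
    """
    return [f for f in REQUIRED_FIELDS if not exported_note.get(f)]
```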
For the human gap, run the fifty-artefact test. Can you show a named reviewer and a timestamped approval for the last fifty AI-assisted outputs? If not, the issue is not training. It is that your workflow has no sign-off step the system itself enforces. Add one.
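What a sign-off step the system itself enforces looks like, as a minimal sketch: finalisation is impossible, not merely discouraged, without a named reviewer. The function and field names are assumptions about the workflow tool.

```python
from datetime import datetime, timezone

class UnsignedArtefactError(Exception):
    """Raised when finalisation is attempted without a named reviewer."""

def finalise(artefact: dict, reviewer_name: str | None) -> dict:
    """The only path to a finalised artefact runs through a named person.

    This is the enforcement point: no reviewer, no finalised record,
    regardless of training, intent, or how busy the week was.
    """
    if not reviewer_name:
        raise UnsignedArtefactError(
            f"Artefact {artefact.get('artefact_id')} has no named reviewer."
        )
    artefact["reviewer_name"] = reviewer_name
    artefact["approved_at"] = datetime.now(timezone.utc).isoformat()
    artefact["status"] = "finalised"
    return artefact
```

The names are invented; the design point is that the reviewer requirement lives in the code path, not in a training slide.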
None of this is exotic. It is the same operational discipline advice firms apply to investment recommendations, insurance advice, and client onboarding. The novelty is applying it to AI.
A policy tells the regulator what you intend. A governance system shows them what you actually do. ASIC has been clear which one it is grading.
The three gaps are not hypothetical. They are already being closed inside the firms that will come out of the next review cycle looking prepared. The firms that come out of it looking exposed will mostly be the ones that confused a policy with a system.