Two AI tools sit on an adviser's desk. One drafts file notes from a meeting recording. The other claims to deliver "AI financial advice." Same category in the marketing. Different jobs entirely.
The buyer who treats them as the same product ends up with a stack of half-finished workflows and a slower compliance review than they had before they started.
That is the trap most advice firms are walking into right now. "AI financial advice" and "AI financial planning" are sold as if they describe a category of product. They do not. They describe a layer of the work. And the gap between layers is where the budget disappears.
I have spent the last eighteen months watching firms buy a tool at one layer and try to use it at another. The result is always the same. The tool works as advertised. The workflow does not.
This piece is the map.
What is happening on the ground
The Financial Adviser Register sat below 15,600 advisers at the end of 2024, according to Adviser Ratings. The same firms that did the rebuild after FOFA, then the rebuild after the Royal Commission, then the rebuild after the FASEA standards, are now being asked to rebuild again because of AI. Fewer advisers, more compliance load, more clients per adviser. The pressure is not abstract.
Investment Trends' 2025 Adviser Technology Report shows the share of advisers using AI somewhere in their workflow at a record high, roughly doubling on the prior year. The share whose firm has a written governance position on AI has barely moved.
Walk a real adviser's desk on a Tuesday and what you find is honest. A note-taker tool listening to the morning client meeting. A separate AI summariser pulling from inbox threads. A third platform pitching itself as "AI for SoA generation." A fact-find still being filled in by hand because none of the tools talk to it. Three subscriptions. One workflow. No connection between them.
That sprawl is not a tool problem. It is a buying problem.
The trap inside the words
If you think the problem is "find an AI advice tool," you will evaluate on features and speed. Faster file notes, longer transcripts, smarter summaries. Vendors will demo each one well. Each one will live up to its demo.
The firm will still feel slower at month end.
The reason is the gap between what each tool finishes and what the advice function actually needs. A file note tool finishes a file note. The advice function does not finish at the file note. It finishes when a compliant document, attributable to a supervised human, sits in the client file with an audit trail running back to the underlying client data. Every stop short of that point is work the firm still has to do. Often manually. Often without the tool's records following it.
Speed at one layer is not the same as completion across layers. The two get sold as if they are. They are not.
The Completion Ladder
A better lens. Think of AI in advice as five layers, not one product. The layers stack. Tools sit at one or two of them, almost never all five. Knowing which layer a tool actually closes is the first competence of an AI buyer.
Layer 1. Capture. Transcripts, recordings, raw text pulled from emails and calls. The tool listens or reads. Output is unstructured. A meeting transcript or an email summary lives here.
Layer 2. Structuring. Raw input becomes a structured artefact. A file note. A fact-find update. A meeting brief. The model adds shape. The output is readable but not yet tied to obligations or evidence.
Layer 3. Drafting. The structured artefact becomes a regulated document. An SoA first pass. A draft RoA. A scope-of-advice paragraph that maps to the s961B inquiry. The output is something a licensed adviser can review and amend, not something they can send.
Layer 4. Workflow operation. Multiple drafting steps connected. Fact-find feeds strategy. Strategy feeds the SoA. The SoA links back to the file notes that informed it and the assumptions log that justified it. The output is a chain, not a document. The chain is what audits actually examine.
Layer 5. Closed-loop with compliance. Layer 4 plus the audit layer baked in. Versioning on every draft. Reviewer attribution on every change. Source citation back to the underlying client data. The output survives ASIC scrutiny because the evidence ASIC wants is not assembled at audit time. It is captured during the work.
Most tools sold as "AI financial advice" sit at Layer 1 or Layer 2. A handful reach Layer 3 on a single document type. Layer 4 is rare. Layer 5 is what the regulated workflow actually needs, and almost no-one ships it.
The five-question test
If you are evaluating a vendor this quarter, here is the free consulting bit. Five questions. Apply them in order. The vendor's answers tell you the real layer they sit at, regardless of what the marketing says.
- What artefact does your tool finish? A transcript is not a file note. A file note is not an SoA. An SoA draft is not a signed SoA. Force the vendor to name the deliverable that lands in the client file.
- Where does that artefact go after your tool produces it? If the answer involves "the adviser copies it across," "the paraplanner reformats it," or "we export to PDF," the tool stops short of the workflow.
- What evidence layer does your tool produce? Versioning, reviewer attribution, source citation back to the underlying client data, time-stamped change history. If those are absent, every output the tool produces becomes manual evidence work for the firm.
- Which RG number does your tool's output sit inside? RG 175 for advice records. RG 90 for SoA content. RG 271 for complaint handling. A vendor that cannot answer this is selling at Layer 1 or 2 and pricing at Layer 4.
- What happens if the model is wrong? A note-taker that hallucinates a sentence is annoying. A drafting tool that hallucinates a recommendation is a regulator's exhibit. The escalation, review, and reject path matters more the higher up the ladder the tool sits.
A vendor who answers all five fluently is selling at Layer 4 or 5. A vendor who pivots to "well, the adviser still does the final review" on question 2 is at Layer 1 or 2, dressed up as something higher.
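For firms that want to run the test consistently across a shortlist, the ladder and the five questions can be sketched as a simple scoring rubric. This is an illustration, not a product: the layer names come from the Completion Ladder above, but the scoring rule (each fluent answer moves the vendor roughly one layer up) is an assumption for the sketch, not a standard.

```python
# A minimal sketch of the five-question test as a scoring rubric.
# Layer names come from the Completion Ladder; the scoring rule
# below is an illustrative assumption, not an industry standard.

LAYERS = {
    1: "Capture",
    2: "Structuring",
    3: "Drafting",
    4: "Workflow operation",
    5: "Closed-loop with compliance",
}

# One yes/no per question: did the vendor answer it fluently?
QUESTIONS = [
    "Names the finished artefact that lands in the client file",
    "Artefact flows onward without manual re-keying",
    "Produces versioning, attribution, and source citation",
    "Can name the RG number the output sits inside",
    "Has an escalation and review path when the model is wrong",
]

def estimate_layer(answers: list[bool]) -> int:
    """Map five yes/no answers to an estimated ladder layer.

    Assumption (illustrative): each fluent answer moves the vendor
    roughly one layer up the ladder, floored at Layer 1.
    """
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one answer per question")
    return max(1, sum(answers))

# Example: a vendor who names the artefact, shows an evidence layer,
# and has a review path, but pivots on workflow handoff and RG fit.
vendor = [True, False, True, False, True]
print(LAYERS[estimate_layer(vendor)])  # Drafting
```

The point of writing it down, even this crudely, is that the rubric forces the same five answers out of every vendor before the demo does its work.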
The product-strategy implication
Buying AI for advice is not a category decision. It is a layer decision. Pick the layer where your firm bleeds the most time or compliance risk. Buy a tool that closes that layer end-to-end, with the evidence baked in. Resist the temptation to buy three tools at three layers and integrate them in spreadsheets, because spreadsheets are exactly where the audit trail breaks.
If your firm is losing time at file notes, buy a Layer 2 tool that produces structured file notes with versioning attached. One subscription. One layer. Closed.
If your firm is losing time at SoA generation, the calculation changes. Do not buy three tools and stitch them together. Buy one that operates the SoA workflow from fact-find to drafted document, with the compliance evidence captured along the way. That is Layer 4 or 5. The price tag will read higher than the Layer 1 transcribers. The total cost, counted across re-keying time and audit preparation, almost always lands lower.
This is where AdviseWell sits, by design. We chose Layer 4 and Layer 5 because that is where the regulated workflow actually finishes, and because the firms we talk to keep buying at Layer 1 or 2 and discovering at audit that they bought a tool, not a function.
The bookend
AI financial advice is a layer, not a product. AI financial planning is the same idea with a different artefact at the end. Tools that pretend to span both, without naming which layer they actually close, are selling speed and hiding the completion gap.
The firms moving fastest have stopped shopping for "an AI tool." They are shopping for the layer where their work ends.
If you cannot name the layer, you are not buying an AI strategy. You are subscribing to a stack of demos.