How to Use It Wisely — and Avoid Garbage-In, Garbage-Out
Artificial intelligence is rapidly becoming part of the modern investor’s toolkit. Some investors use AI to double-check their advisor’s recommendations. Others use it to build portfolios, evaluate strategies, or manage investments themselves. This shift is understandable. AI is fast, articulate, data-driven, and available on demand. But there is an important reality investors must understand: AI can be extremely intelligent — and still be structurally incomplete. It often produces confident answers without fully understanding what is missing from the question. And in investing, what is missing can matter more than what is present.
AI Thinks in Models — Markets Operate in Reality
Most financial reasoning generated by AI is derived from:
academic finance frameworks
historical data patterns
simplified assumptions about costs and behavior
theoretical portfolio construction methods
These models are useful — but they are not the real world. Real investment management involves:
taxes
trading friction
liquidity constraints
execution quality
behavioral discipline
changing risk environments
imperfect information
client-specific constraints
governance and decision authority
These elements are difficult to fully capture in data — and therefore difficult for AI to fully model. AI often evaluates strategies as if markets were frictionless and decisions were perfectly implemented. Investors do not live in that environment.
The Fee Illusion: Why AI Often Gets Costs Wrong
One of the most common AI conclusions is: Lower fees always produce better outcomes. Mathematically, that sounds obvious. If returns stay the same, removing cost improves results. But that assumption contains a hidden premise: The investment outcome is unaffected by the presence or absence of management. In reality, management is often what produces the outcome. Fees may fund:
risk monitoring and mitigation
tax-efficient implementation
disciplined rebalancing
strategic adaptation
behavioral coaching during stress
research and due diligence
operational infrastructure
coordination across financial domains
When management changes, the expected results change. Fees are not always a simple subtraction from returns. They are often part of the production process that generates those returns.
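This point can be sketched with simple compounding arithmetic. The figures below are invented for illustration only, not forecasts: an assumed 7% gross market return, a 1% management fee, and a hypothetical 1.5% annual "behavior gap" for a self-directed investor whose mistimed decisions erode returns.

```python
# Hypothetical illustration only: all return, fee, and "behavior gap"
# figures are invented for this sketch, not forecasts.

def ending_value(start, annual_gross_return, annual_fee, years):
    """Compound a starting balance at (gross return - fee) per year."""
    value = start
    for _ in range(years):
        value *= 1 + annual_gross_return - annual_fee
    return value

# Scenario A: low-cost self-directed investing, but mistimed trades
# shave an assumed 1.5% per year off the gross return.
self_directed = ending_value(100_000, 0.070 - 0.015, 0.001, 20)

# Scenario B: managed at a 1% fee, with the behavior gap avoided.
managed = ending_value(100_000, 0.070, 0.010, 20)

print(f"Self-directed after 20 years: ${self_directed:,.0f}")
print(f"Managed after 20 years:       ${managed:,.0f}")
```

Under these assumed inputs the higher-fee path ends ahead; under different assumptions the ranking flips. That is exactly the point: a fee cannot be evaluated apart from what it funds.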
What AI Struggles to Measure
AI is strongest when evaluating what is visible and measurable. It is far weaker at evaluating what never happened. For example:
losses that were avoided
risks that were mitigated
poor decisions that were prevented
tax events that never occurred
behavioral mistakes that were interrupted
crises that were navigated successfully
These outcomes rarely appear in performance data — yet they are central to real investment management. AI can measure returns. It struggles to measure risk control, decision quality, and process discipline.
The Implementation Gap
AI often assumes that designing a strategy and implementing a strategy are the same thing. They are not. Implementation requires:
ongoing monitoring
decision authority
execution timing
adjustment rules
risk thresholds
behavioral management
operational capability
Two investors can follow the same strategy design and experience very different results depending on how decisions are made over time. Professional management lives in this implementation gap. AI often overlooks it.
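A minimal sketch of the implementation gap, using an invented sequence of yearly returns: two investors hold the identical 60/40 design, but only one follows a rebalancing rule. The numbers prove nothing about which approach wins; they show only that the same design plus different decision rules produces different results.

```python
# Hypothetical illustration: identical 60/40 strategy design, different
# implementation. All yearly returns are invented for this sketch.

stock_returns = [0.25, -0.30, 0.20, 0.15, -0.10, 0.18]
bond_returns  = [0.02,  0.05, 0.01, 0.02,  0.04, 0.02]

def grow(rebalance):
    stocks, bonds = 60_000, 40_000           # same starting design
    for s, b in zip(stock_returns, bond_returns):
        stocks *= 1 + s
        bonds *= 1 + b
        if rebalance:                        # restore the 60/40 target
            total = stocks + bonds
            stocks, bonds = 0.6 * total, 0.4 * total
    return stocks + bonds

print(f"Rebalanced yearly: ${grow(True):,.0f}")
print(f"Never adjusted:    ${grow(False):,.0f}")
```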
Why AI Is More Powerful in the Hands of Professionals
Artificial intelligence does not operate with true understanding. It generates responses based on patterns, probabilities, and the structure of the question it is given. This creates an important dynamic: AI is only as effective as the person guiding it. In technical fields — including investing — the quality of AI output depends heavily on the user’s ability to:
frame the right problem
provide relevant context
recognize hidden assumptions
identify missing variables
challenge incomplete reasoning
interpret uncertainty correctly
These are not computational skills. They are professional judgment skills.
The GIGO Reality in Investment Decision-Making
“Garbage in, garbage out” is not new. But in the age of AI, the definition of “garbage” has expanded. It no longer refers only to incorrect data. It includes:
incomplete framing of the problem
unrealistic assumptions
omitted constraints
misunderstood risks
failure to ask second-order questions
inability to detect oversimplification
AI can produce an answer that is logically consistent — and still fundamentally misaligned with reality — if the question fails to capture the full decision environment.
The Unknown Unknowns Problem
Perhaps the most important limitation of AI is this: AI does not reliably tell you what it is not telling you. It answers the question presented. It does not independently define the full universe of relevant considerations. If a risk, constraint, or complexity is not included in the prompt — or not represented in the underlying data — AI may not surface it. And critically, it will often respond with confidence anyway. This creates a structural risk for inexperienced investors: They may not know which risks, variables, or trade-offs are missing — and therefore do not know what to ask. Professionals, by contrast, know where problems tend to hide. They understand:
failure patterns
implementation challenges
behavioral pitfalls
tax and regulatory constraints
edge cases models overlook
That knowledge shapes the questions they ask AI.
The Overconfidence Effect
AI communicates clearly, quickly, and often persuasively. This can unintentionally create false certainty. When an answer is:
structured
confident
numerically supported
logically explained
it can feel authoritative — even when key uncertainties remain unresolved.
For experienced professionals, this is simply one input among many. For less experienced investors, it can feel like definitive guidance. When understanding is limited, confidence can rise faster than insight.
AI as a Tool — Not a Substitute for Expertise
Professionals do not simply ask AI for answers. They interrogate it. They test assumptions. They explore failure scenarios. They refine prompts iteratively. They cross-check conclusions. They recognize when output is incomplete. AI does not replace expertise. It amplifies the expertise already present. AI answers the questions you ask. Professionals know which questions must be asked.
Using AI to Supervise Results and Monitor Value
While AI has limitations in designing and implementing investment strategies, it can be extremely effective in another role: independent oversight. AI is well suited to helping investors monitor outcomes, evaluate consistency, and assess whether the value they are receiving aligns with expectations. Management is about making decisions under uncertainty. Supervision is about evaluating whether those decisions are coherent and aligned. AI is often very good at supervision.
Where AI Can Be Particularly Useful
Investors can use AI to help monitor:
Performance consistency
Is the portfolio behaving as expected?
Are results aligned with stated risk levels?
What factors are driving outcomes?
Strategy discipline
Has the investment approach changed?
Is the portfolio drifting from its mandate?
Are decisions consistent with stated philosophy?
Fee and value alignment
What services support the fee?
How does the cost compare to similar service models?
What capabilities would change if cost were reduced?
Risk exposure
What are the largest risks today?
Where is the portfolio most vulnerable?
How might it behave in stress scenarios?
Communication clarity
Are explanations logically consistent?
Are key assumptions clearly stated?
What questions remain unanswered?
This is governance — not replacement. And governance improves long-term outcomes.
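One of the monitoring questions above, drift from mandate, reduces to a check an investor could ask an AI tool to frame or perform. A minimal sketch with invented holdings and an assumed 5-percentage-point tolerance band (the target weights, balances, and threshold are all hypothetical):

```python
# Hypothetical mandate-drift check. Target weights, holdings, and the
# 5-point tolerance band are invented for this sketch.

target = {"stocks": 0.60, "bonds": 0.30, "cash": 0.10}
current = {"stocks": 68_000, "bonds": 27_000, "cash": 5_000}

total = sum(current.values())
drift = {asset: current[asset] / total - target[asset] for asset in target}

for asset, gap in drift.items():
    flag = "REVIEW" if abs(gap) > 0.05 else "ok"
    print(f"{asset:<7} drift {gap:+.1%}  {flag}")
```

Here the stock sleeve sits 8 points above target and gets flagged, while bonds and cash stay inside the band. The value of the exercise is the question it forces: is the drift a deliberate decision or an unmanaged one?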
Prompts Investors Can Use to Improve AI Guidance
Ask About Assumptions
What assumptions may not hold in real markets?
What frictions are not included in this analysis?
How would taxes change this outcome?
Ask What Could Go Wrong
What are the major failure risks?
When would this perform poorly?
What investor behaviors could reduce success?
Ask About Management Value
What functions does professional management provide here?
What happens if those functions are removed?
How does implementation affect results?
Ask About Costs More Realistically
What capabilities disappear if fees decline?
Are fees funding activities that influence outcomes?
Ask for Counterfactual Insight
What losses might active management prevent?
What risks are being controlled?
Ask for Personalization
How does this change with my tax bracket and risk tolerance?
The Bottom Line
AI is a powerful analytical tool — but it is not a complete financial decision system. It is strongest when helping investors:
understand concepts
evaluate structure
monitor outcomes
supervise consistency
strengthen accountability
It is less reliable when asked to replace professional judgment or real-world implementation. Investment outcomes are not produced by theory alone. They are produced by decisions, discipline, and management over time. The most effective investors will not use AI to replace expertise. They will use AI to:
✔ ask better questions
✔ monitor results more intelligently
✔ strengthen governance
✔ improve decision quality
Used this way, AI becomes what it should be: not a substitute for wisdom — but a powerful partner in oversight.