Why Enterprise AI Projects Fail - and What Fund Managers Can Learn from Deploying AI Specialists
A $27B+ fund manager has Bloomberg PORT, Charles River IMS, APEX fund administration, and a team of portfolio managers. They also have spreadsheets. Not a few spreadsheets maintained by the back office. Spreadsheets that are, functionally, the integration layer between every enterprise system they have paid millions to deploy. If you work in institutional investment management, this is not a surprise. You have seen it. You may be maintaining one right now.
The question worth asking is why. Why does the spreadsheet persist inside organisations with world-class technology budgets and sophisticated IT teams? The answer is not that fund managers are behind the curve on technology. The answer is that monolithic enterprise platforms try to do everything and end up doing nothing well enough for the people who actually use them.
The Monolith Problem
Bloomberg PORT is an excellent analytics tool. Charles River IMS is an excellent order management system. APEX is a capable fund administration platform. Each one is genuinely good at its core function. The problem is that each system owns a slice of truth, and nothing stitches those slices together in real time.
Charles River has your intraday positions. Bloomberg has your analytics and benchmark data. The fund admin has your NAV and reconciled cash. None of them speak to each other in a way that gives a portfolio manager what they actually need: a single, live, trusted view of their book. So the PM builds a spreadsheet. They pull from Charles River, cross-reference Bloomberg, reconcile against APEX, and maintain a model that is always slightly out of date and entirely understood by one person.
The downstream consequences of that spreadsheet are not theoretical. Key person risk: if the PM who built it leaves, the logic goes with them. Fat finger risk: a manual formula change that nobody catches until end of month. No resilience: the file lives on a desktop, not in a version-controlled system. No backward view: what did this portfolio look like on March 14th? Nobody can reconstruct it without the PM walking through their manual notes.
The CIO situation is worse. Aggregating a view across all PM books means collecting individual spreadsheets and manually reconciling them into a single picture. A cross-portfolio risk view that should take seconds takes hours. By the time it is ready, the market has moved.
I have seen this from the inside. I implemented Charles River for a major NZ asset manager. The platform is excellent at what it does. But its remit does not include replacing the PM's spreadsheet. That was never what it was built for, and expecting it to fill that role is why so many implementations disappoint the investment team even when they deliver technically.
Why Traditional Consulting Does Not Fix It
The standard response to the spreadsheet problem is a project. Hire Accenture or Deloitte for a 12-month discovery engagement. Produce a 200-page requirements document. Issue an RFP. Select a vendor. Build for 18 months. By the time the system ships, three things have happened: the requirements have changed because the market has changed, the PM who championed the project has moved to a competitor, and the new system is another silo that the next generation of PMs will work around with a new spreadsheet.
RFP cycles for major enterprise platforms like Aladdin or SimCorp routinely span three to four years from initial engagement to go-live. In that time, your team has grown, your investment strategy has evolved, and the technology landscape has shifted fundamentally. The requirements that drove the original selection are no longer the requirements you have.
The problem is not a lack of technology options. The problem is the delivery model. Humans writing requirements documents are slow. Requirements documents go stale before the ink is dry. Waterfall project delivery does not work for problems that evolve as fast as investment management workflows do. The consulting industry has been selling the same solution to this problem for thirty years and the spreadsheet is still there.
The Specialist Team Model
The alternative is not another monolithic platform. It is a team of purpose-built AI specialists, each one handling a single job exceptionally well, coordinated by an orchestration platform that runs continuously inside your infrastructure.
What does that look like in practice? A reconciliation specialist that pulls Charles River positions, APEX fund admin data, and market pricing every morning and runs three-way reconciliation automatically - flagging genuine breaks for your ops team and clearing routine matches without human intervention. A portfolio query specialist that answers natural-language questions about positions: What is our duration exposure in the NZ government bond portfolio? What are our top five issuers by weight across all mandates? A scenario testing specialist that models rate changes, FX moves, and duration impacts against live portfolio data. An anomaly detection specialist that flags unusual position changes or correlation shifts before they become compliance issues. A report generation specialist that produces your daily and weekly outputs from verified source data, consistently, every time.
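The core of the three-way reconciliation step fits in a few lines. The sketch below is illustrative Python, not the production specialist: the snapshot shapes, security identifiers, and tolerance value are hypothetical stand-ins for the real Charles River, APEX, and pricing feeds.

```python
TOLERANCE = 0.5  # illustrative quantity tolerance before a break is flagged

def three_way_reconcile(*snapshots, tol=TOLERANCE):
    """Compare quantities across position snapshots ({security_id: qty} dicts).

    Returns (matched, breaks): ids whose quantities agree within tolerance
    in every snapshot, and ids that are missing somewhere or disagree.
    """
    all_ids = set().union(*snapshots)
    matched, breaks = [], []
    for sec_id in sorted(all_ids):
        qtys = [snap.get(sec_id) for snap in snapshots]
        # A missing position or an out-of-tolerance spread is a break for ops review.
        if None in qtys or max(qtys) - min(qtys) > tol:
            breaks.append((sec_id, qtys))
        else:
            matched.append(sec_id)
    return matched, breaks
```

Matches within tolerance clear automatically; only genuine breaks reach the ops queue, which is the whole point of the specialist.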
These specialists are autonomous AI agents running on an orchestration platform. They coordinate but operate independently. If the reconciliation agent has a bug, the reporting agent still runs. If the scenario testing agent is updated with a new model, the anomaly detection agent is unaffected. The system is modular in a way that no monolithic platform can be.
This is not a metaphor. We run 10+ AI specialists in production today for our own businesses - document intelligence, data pipelines, lead processing, market analysis. Each one is purpose-built, independently monitored, and has been running 24/7 for months. The model works. The question is how to apply it to institutional investment management at the level of rigour that sector demands.
The Architecture That Makes It Work
Three layers make the specialist model viable at enterprise scale. The first is orchestration: OpenClaw handles multi-agent coordination, model routing, monitoring, and self-healing. When an agent fails, the platform detects it and restarts it. When an agent needs to coordinate with another, the platform manages that communication. The orchestration layer is what separates a production AI system from a demo.
The second layer is intelligence: the AI models that power each specialist. These are interchangeable. Frontier cloud models for complex reasoning tasks where you need the best available capability. Local open-source models running on-premises for air-gapped environments where data cannot leave the building. The platform routes each task to the appropriate model based on the requirements you configure.
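Routing each task to the appropriate model can be expressed as a small policy table. Everything below - the model names, task categories, and policy fields - is a hypothetical illustration of the idea, not OpenClaw's actual configuration.

```python
# Illustrative routing table: task categories mapped to model tiers.
ROUTING_POLICY = {
    "complex_reasoning": {"model": "cloud-frontier", "data_leaves_network": True},
    "position_data":     {"model": "local-llm",      "data_leaves_network": False},
    "report_drafting":   {"model": "cloud-frontier", "data_leaves_network": True},
}

def route(task_type, contains_sensitive_data):
    """Pick a model for a task; sensitive data always stays on-premises."""
    policy = ROUTING_POLICY.get(
        task_type, {"model": "local-llm", "data_leaves_network": False}
    )
    if contains_sensitive_data and policy["data_leaves_network"]:
        return "local-llm"  # override: sensitive data never reaches a cloud model
    return policy["model"]
```

The override is the design choice that matters: data sensitivity trumps task category, so changing compliance posture means editing one table rather than rewriting any specialist.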
The third layer is domain context: the configuration that makes each specialist a specialist in your systems, your data formats, and your workflows. A reconciliation specialist is not useful without knowing the schema of your Charles River API, the format of your APEX SFTP files, and the tolerance thresholds your ops team works to. That configuration is what turns a general AI model into something that does a specific job correctly.
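Domain context is ultimately configuration. A minimal sketch of what a reconciliation specialist's configuration might contain follows; every field name, file pattern, and threshold here is illustrative, not any vendor's actual schema.

```python
# Hypothetical specialist configuration: the domain context that turns a
# general AI model into a reconciliation specialist for one client's systems.
RECON_SPECIALIST_CONFIG = {
    "sources": {
        "charles_river": {"kind": "api",  "snapshot": "intraday_positions"},
        "apex":          {"kind": "sftp", "file_pattern": "POS_*.csv"},
        "pricing":       {"kind": "api",  "snapshot": "eod_marks"},
    },
    "tolerances": {
        "quantity": 0.0,        # position quantities must match exactly
        "market_value_bps": 5,  # value differences within 5bps auto-clear
    },
    "escalation": {
        "channel": "ops_queue",     # where genuine breaks are sent
        "auto_clear_matches": True, # routine matches need no human review
    },
}
```

The model stays generic; the configuration carries everything client-specific, which is why swapping the underlying model does not disturb the specialist's behaviour.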
Everything runs on a dedicated compute node inside the client's network. The platform is model-agnostic. Start with Anthropic's Claude for the best available reasoning capability with zero data retention on their enterprise tier. Switch to local models anytime if compliance requirements change. No vendor lock-in on the intelligence layer, because the orchestration platform does not care which model is underneath.
What This Looks Like in Practice
We structure engagements in three phases, each eight weeks, with a go/no-go gate before proceeding to the next. Phase one is focused: replace one portfolio manager's spreadsheet. Connect to their data sources, build a unified position view, deploy the reconciliation specialist. If it is not better than the spreadsheet by week eight, we have not succeeded. That is the explicit standard. No ambiguity, no moving the goalposts.
Phase two expands the scope. Aggregate across the investment team. Build the CIO view across all PM books. Deploy scenario testing and historical reconstruction. Give the team a backward view they have never had before: What did our portfolio look like on any given date, reconciled across all three source systems?
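The historical reconstruction capability can be sketched simply, assuming each day's reconciled book is archived immutably once reconciliation completes. The store layout and ISO-date keys below are illustrative only.

```python
# Minimal point-in-time store: one immutable snapshot per reconciled day.
_snapshots = {}  # {"YYYY-MM-DD": {security_id: qty}}

def archive(date, positions):
    """Record the reconciled position set for a date (copied: history is immutable)."""
    _snapshots[date] = dict(positions)

def as_of(date):
    """Return the book as of a date: the most recent snapshot on or before it."""
    candidates = [d for d in _snapshots if d <= date]  # ISO strings sort by date
    if not candidates:
        raise LookupError(f"no snapshot on or before {date}")
    return _snapshots[max(candidates)]
```

Because snapshots are written after reconciliation, any historical query returns a view already verified against all source systems, rather than a reconstruction from one PM's notes.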
Phase three is full rollout, performance attribution on verified source data, and hardening. Monitoring, alerting, runbooks, handover documentation. After phase three, the specialists keep running. The engagement is finite. The capability is permanent.
No big-bang implementation. No 18-month wait before anyone can validate whether it works. The engagement structure is designed so that if it is not working, you know at week eight and can stop. In practice, it has not come to that.
The Security Question
Fund managers handle sensitive position data, investor information, and proprietary models. The security architecture is not an afterthought. We built it for that reality from the start.
Three deployment options exist depending on the client's compliance requirements. Fully air-gapped, running local models only, with no data leaving the client's network under any circumstances. Enterprise API mode using Anthropic's Claude on their enterprise tier: SOC 2 Type II, zero data retention, a signed data processing agreement, and a contractual commitment that no client data is used to train their models. Hybrid, where sensitive position data is processed locally and less sensitive analytical tasks use cloud models.
Sensitive data - positions, investor information, proprietary models - stays on-premises regardless of which mode is selected. The NZ Privacy Act 2020 and FMA reporting frameworks are understood natively in the platform configuration. These are not requirements we discover during implementation. They are requirements we built for.
The compliance conversation is one we can have before any data is connected, not after.
The Spreadsheet Is Not the Problem
The spreadsheet persists not because portfolio managers are technologically conservative. It persists because enterprise IT has been solving the wrong problem. The problem is not "build a better platform." The problem is "deploy specialists that replace the spreadsheet workflow," one function at a time, starting with the function that is causing the most pain.
We do not build a system and leave. We deploy a team of AI specialists inside your infrastructure, each one purpose-built for a specific job, all orchestrated by a platform that has been running in production for over a year. The engagement ends. The specialists keep working.
If you want to understand what this looks like for your specific systems and team structure, the full deployment methodology is available on our enterprise intelligence implementation page.
View Deployment Methodology