Performance Attribution for Fund Managers: Fix the Data
If you've tried to implement proper performance attribution at a New Zealand fund manager, you know how it ends. The vendor gets contracted. The integration project starts. Months later, the attribution outputs land on the investment team's desks, and the PMs don't trust them. Not because the methodology is wrong. Because they can see the data quality problems in the source, and they know the output is unreliable. The attribution model gets shelved. The team goes back to gut feel and basic return summaries.
This has happened at multiple New Zealand firms with both FactSet and Bloomberg PORT. The problem isn't new, and the solution isn't a better attribution vendor.
The Real Attribution Problem
Performance attribution is not a difficult mathematical problem. Brinson-Fachler decomposition for equity portfolios has been standard methodology for decades: allocation effect, selection effect, interaction effect, calculated against a benchmark. Duration, curve, and credit attribution for fixed income books follows equally well-established frameworks. The math is solved.
The problem is the data. Specifically, three data problems that compound each other:
- Latency mismatches: Charles River IMS has intraday position data. APEX has T+1 NAV and cash data. Bloomberg PORT has end-of-day analytics. These systems don't share a common timestamp, and the differences matter when you're trying to attribute performance to specific trades and allocations.
- Reconciliation gaps: Because the three systems don't natively reconcile against each other, the positions in the attribution calculation may not match the positions the PM was actually running. A 0.5% position size discrepancy sounds small until it's driving a reported 30 basis point selection effect that the PM knows is wrong.
- The spreadsheet bridge: As we cover in our enterprise intelligence platform work, the PM's spreadsheet is the unofficial integration layer between Charles River, Bloomberg, and APEX. Any attribution calculation that sits downstream of that spreadsheet inherits all of its errors and manual interventions.
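The 0.5% discrepancy above is simple arithmetic once you see where it lands: in Brinson-style attribution, the selection effect scales linearly with the recorded position weight, so a weight break in the source data flows straight into the reported effect. A minimal sketch with illustrative (assumed) numbers:

```python
# Illustrative numbers only (assumed, not from a real portfolio):
# how a position-size break between systems flows straight into
# the reported selection effect.
true_weight     = 0.045   # weight the PM was actually running
recorded_weight = 0.050   # weight in the attribution system (+0.5% break)
excess_return   = 0.06    # security return minus its sector benchmark

# Selection effect for one security: weight x excess return
true_selection     = true_weight * excess_return
reported_selection = recorded_weight * excess_return
distortion_bp = (reported_selection - true_selection) * 10_000

print(f"true selection effect:     {true_selection * 10_000:.1f} bp")
print(f"reported selection effect: {reported_selection * 10_000:.1f} bp")
print(f"distortion from the break: {distortion_bp:.1f} bp")
```

The PM who knows the position was 4.5%, not 5.0%, sees a number they can't reproduce, and trust in the whole report goes with it.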
FactSet's attribution tool is technically capable. Bloomberg PORT's attribution module is technically capable. The reason neither delivered usable output for your firm isn't the tool. It's that both sat on a dirty data foundation and produced dirty output. PMs with strong intuitions about their own portfolios can sense this immediately, even if they can't articulate it precisely in data quality terms.
What a Verified Data Layer Changes
The approach we take with the ShiftCurve Enterprise Intelligence platform starts with the data problem, not the attribution methodology. Before any attribution module is built, we establish a verified, three-way reconciled data layer: intraday positions from Charles River REST API, daily NAV and cash anchors from APEX SFTP, market data and pricing from Bloomberg BLPAPI, and automated reconciliation logic that flags breaks and holds the data pipeline until exceptions are resolved.
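The break-flagging logic can be sketched in a few lines. This is a simplified illustration, not the production pipeline: the source names, security IDs, and in-memory snapshots are all assumed, and the real feed connectivity (Charles River REST, APEX SFTP, Bloomberg BLPAPI) and pipeline-hold behaviour are out of scope here.

```python
# Hypothetical position snapshots from each system, keyed by security ID.
charles_river = {"NZGB-2031": 10_000_000, "FBU.NZ": 250_000}
apex          = {"NZGB-2031": 10_000_000, "FBU.NZ": 248_750}
bloomberg     = {"NZGB-2031": 10_000_000, "FBU.NZ": 250_000}

TOLERANCE = 0.001  # 0.1% relative tolerance before a break is flagged

def find_breaks(sources: dict[str, dict[str, float]], tol: float = TOLERANCE):
    """Flag any security whose quantity disagrees across the source systems."""
    breaks = []
    all_ids = set().union(*(positions.keys() for positions in sources.values()))
    for sec_id in sorted(all_ids):
        quantities = {name: pos.get(sec_id) for name, pos in sources.items()}
        values = [q for q in quantities.values() if q is not None]
        if len(values) < len(sources):
            breaks.append((sec_id, "missing from a source", quantities))
        elif max(values) - min(values) > tol * max(abs(v) for v in values):
            breaks.append((sec_id, "quantity mismatch", quantities))
    return breaks

for sec_id, reason, detail in find_breaks(
    {"charles_river": charles_river, "apex": apex, "bloomberg": bloomberg}
):
    print(f"BREAK {sec_id}: {reason} {detail}")
```

In production the same comparison also covers trade dates, cash, and NAV anchors, and a flagged break holds the downstream attribution run until the exception is resolved.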
This reconciliation layer creates a position history that every stakeholder agrees is correct. Not the PM's spreadsheet version. Not the fund admin's version. A single, auditable record of portfolio state at every point in time, reconciled across all three source systems.
Attribution built on top of that layer is trustworthy. The output lands on a PM's desk and they can verify the position sizes are right, the trade dates are right, and the benchmark data is correct. The analysis is usable.
What the Attribution Engine Calculates
For equity portfolios, we implement full Brinson-Fachler decomposition: allocation effect (did you overweight the right sectors?), selection effect (did you pick the right stocks within each sector?), and interaction effect (the combined impact of allocation and selection decisions). Attribution is available by sector, by security, by manager, and aggregated at the portfolio level, all the way back to day one if you've been running the data layer from inception.
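For readers who want the mechanics, a single-period Brinson-Fachler decomposition fits in a short script. The sector weights and returns below are illustrative, not real data; the multi-period linking used in production is omitted:

```python
# Minimal single-period Brinson-Fachler decomposition over sectors.
# sector: (weight, return) -- illustrative (assumed) numbers.
portfolio = {
    "Financials": (0.40, 0.05),
    "Utilities":  (0.35, 0.02),
    "Materials":  (0.25, -0.01),
}
benchmark = {
    "Financials": (0.30, 0.04),
    "Utilities":  (0.40, 0.02),
    "Materials":  (0.30, 0.00),
}

bench_total = sum(w * r for w, r in benchmark.values())

effects = {}
for sector in benchmark:
    wp, rp = portfolio[sector]
    wb, rb = benchmark[sector]
    allocation  = (wp - wb) * (rb - bench_total)   # over/underweighting sectors
    selection   = wb * (rp - rb)                   # stock picks within a sector
    interaction = (wp - wb) * (rp - rb)            # combined allocation x selection
    effects[sector] = (allocation, selection, interaction)

# Sanity check: the effects sum exactly to the active return.
port_total = sum(w * r for w, r in portfolio.values())
total_active = sum(sum(e) for e in effects.values())
assert abs(total_active - (port_total - bench_total)) < 1e-12
```

The closing assertion is the property that makes the decomposition auditable: allocation, selection, and interaction must sum to the portfolio's return over the benchmark, with nothing left over.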
For fixed income mandates, we calculate duration attribution (how much of performance came from duration positioning?), curve attribution (were you positioned correctly on the yield curve?), and credit attribution (what did credit spread movements contribute?). Currency attribution via the Karnosky-Singer framework is available for portfolios with FX exposure.
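The duration leg reduces, to first order, to duration times yield change. A hedged sketch with assumed numbers, ignoring convexity and non-parallel curve moves:

```python
# First-order duration attribution for a book vs its benchmark,
# assuming a parallel yield move. All numbers are illustrative.
portfolio_duration = 6.2   # years, from the reconciled position data
benchmark_duration = 5.0
yield_change = -0.0025     # yields fell 25 bp over the period

# Price effect of rates to first order: return = -duration * delta_y
portfolio_rate_return = -portfolio_duration * yield_change
benchmark_rate_return = -benchmark_duration * yield_change
duration_effect = portfolio_rate_return - benchmark_rate_return

print(f"duration positioning contributed {duration_effect * 10_000:.1f} bp")
```

Curve and credit attribution follow the same pattern with key-rate durations and spread durations in place of the single duration number.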
The forward view is equally important. With scenario testing integrated into the same data layer, a PM can model how their current attribution profile changes under different rate environments before executing rebalancing trades. That's the analytical capability that turns attribution from a backward-looking report into a forward-looking decision tool.
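As an illustration of that forward view, the same first-order duration arithmetic can be run across assumed rate scenarios before a rebalance. The scenario set and durations are illustrative only; a production scenario engine would reprice with full key-rate and convexity terms:

```python
# Hedged sketch: the active duration effect under assumed parallel
# rate scenarios, evaluated before executing rebalancing trades.
portfolio_duration = 6.2
benchmark_duration = 5.0
scenarios = {"rates +50bp": 0.0050, "unchanged": 0.0, "rates -50bp": -0.0050}

for name, dy in scenarios.items():
    # Active effect of the duration overweight under this scenario
    active_effect = -(portfolio_duration - benchmark_duration) * dy
    print(f"{name}: active duration effect {active_effect * 10_000:+.1f} bp")
```

A PM carrying a 1.2-year duration overweight sees immediately what a 50 bp move in either direction does to the attribution line before committing to the trade.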
Why Bloomberg PORT Wasn't the Answer
Bloomberg PORT is a capable analytics product in the right context. For a fund manager that buys data from Bloomberg and doesn't have Charles River or APEX, it can work well. For New Zealand fund managers with multiple data sources across order management, fund admin, and market data, PORT becomes one more system that doesn't integrate cleanly with the others. The Bloomberg PORT attribution module produces output, but that output is only as clean as the data Bloomberg can see, which typically excludes the position reconciliation data held in APEX and the order-level detail in Charles River.
PORT also requires Bloomberg to be your market data vendor for the attribution to work correctly. If you're pulling pricing from another source for any part of your book, the attribution breaks. For the New Zealand fixed income market in particular, Bloomberg's data coverage has gaps that create attribution artifacts.
Build Timeline and What to Expect
We deliver the data layer and unified position view in eight weeks. That's the foundation everything else builds on. Performance attribution as a dedicated module comes at the twelve to sixteen week mark, after the data layer has been live long enough to accumulate verified history and the investment team has established confidence in the source data. We don't build attribution on top of untested data. That's what leads to the outcomes we described at the start.
If your firm has been through a failed attribution implementation before and you want to understand what doing it correctly looks like, talk to us. We've mapped this architecture specifically against the standard New Zealand fund manager tech stack and can tell you exactly where your current setup is generating data quality problems.