Enterprise Deployment

From concept to production inside your firewall.

Three deployment paths. Phase-gated delivery. Go/no-go at every stage. Here is exactly how we build and deploy an enterprise intelligence platform inside your environment.

Deployment Model

Choose your deployment model.

Every organisation has different security requirements. We support all of them.

Fully Air-Gapped

All AI processing runs locally on dedicated hardware inside your network. Zero external connections.

Enterprise AI API

Recommended

State-of-the-art AI with contractual zero data retention. Maximum capability, enterprise security.

Hybrid

Sensitive data processed locally. Everything else uses cloud AI for maximum speed.

On-Premises Hardware

What sits inside your network.

A dedicated compute node handles orchestration, data pipelines, and dashboard serving.

Data Sources

Charles River IMS

REST API

Fund Administrator

SFTP

Market Data

BLPAPI / B-PIPE

OpenClaw Orchestration

On-Premises Compute Node

Data ingestion - pipeline scheduling - dashboard serving

Reconciliation
AI Agents
Pipelines
Monitoring

Outputs

PM Dashboard

Live position view

CIO Dashboard

Aggregated risk

Automated Reports

Scheduled delivery

Orchestration Only (API model)

Recommended

Mac mini M4 Pro, 24GB RAM, 512GB SSD

Handles data ingestion, pipeline scheduling, dashboard serving. AI inference offloaded to the cloud API.

~$2,800

Local AI Inference

Mac Studio M4 Ultra, 192GB unified memory, 2TB SSD

Runs local LLM models for air-gapped processing. Required for fully air-gapped or hybrid deployments.

~$12,000

Enterprise Server

Rack-mount Linux server with NVIDIA GPU

For organisations requiring standard server hardware. Custom spec - we design to your infrastructure standards.

Custom spec

Orchestration Layer

The engine that runs everything.

OpenClaw is the AI orchestration platform that coordinates data pipelines, AI agents, monitoring, and self-healing workflows.

Multi-Agent Architecture

Different AI agents handle different tasks: data ingestion, reconciliation, analytics, reporting, anomaly detection. Each agent is specialised, tested, and independently monitored.

Multi-Model Routing

Routes each task to the optimal AI model. Complex portfolio analysis goes to frontier models. Routine data formatting uses efficient local models. Cost-optimised without capability compromise.
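The routing idea can be sketched in a few lines of Python. This is a minimal illustration, not OpenClaw's actual routing logic; the model names and the complexity/sensitivity tags are hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical model tiers; a real deployment maps these to actual
# endpoints (cloud API or local inference).
MODEL_TIERS = {
    "frontier": "claude-opus",     # complex portfolio analysis
    "efficient": "claude-sonnet",  # routine orchestration tasks
    "local": "llama-3.3-70b",      # air-gapped sensitive data
}

@dataclass
class Task:
    name: str
    complexity: str   # "high" or "routine"
    sensitive: bool   # must this stay on-premises?

def route(task: Task) -> str:
    """Pick the cheapest model that satisfies the task's constraints."""
    if task.sensitive:
        return MODEL_TIERS["local"]      # data never leaves the network
    if task.complexity == "high":
        return MODEL_TIERS["frontier"]
    return MODEL_TIERS["efficient"]

# Scenario analysis is complex but, in this example, not position-sensitive.
print(route(Task("scenario-analysis", "high", False)))
```

The point of the pattern is that cost optimisation is a constraint check, not a capability compromise: sensitivity always wins, then complexity decides the tier.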

24/7 Autonomous Operation

Reconciliation runs automatically. Anomaly detection fires alerts. Reports generate on schedule. The platform works overnight, weekends, and holidays without human intervention.

Self-Healing Workflows

When a data feed fails or a pipeline breaks, the system detects it, retries with backoff, and alerts the team only if intervention is genuinely needed. Resilience by design.
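The retry-with-backoff behaviour described above can be sketched as follows. This is an illustrative stdlib-only sketch; the function name and alert hook are assumptions, not OpenClaw's internal API.

```python
import random
import time

def run_with_backoff(step, max_attempts=5, base_delay=1.0, alert=print):
    """Run a pipeline step; on failure, retry with exponential backoff
    plus jitter. Alert a human only after all retries are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            if attempt == max_attempts:
                alert(f"intervention needed after {attempt} attempts: {exc}")
                raise
            # Delays of 1s, 2s, 4s, ... with jitter to avoid retry storms
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

A transient SFTP outage is absorbed silently by the retries; only a feed that stays down through every attempt reaches the on-call team.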

In production today powering ShiftCurve's own operations - document intelligence, data pipelines, and enterprise client delivery.

Development Architecture

How we build it.

A modern, proven stack chosen for performance, maintainability, and speed of delivery. The same technologies powering production platforms today.

Next.js 15 + TypeScript

Frontend + API Layer

React-based framework that handles both the dashboard interface and the server-side API routes in a single codebase. Type-safe, fast, and production-grade. The same technology behind the ShiftCurve website and CRM.

Python + FastAPI

Data Ingestion

Data pipelines that connect to Charles River (REST API), fund administrator files (SFTP), and Bloomberg (BLPAPI). Python is the standard language for financial data engineering, with native Bloomberg API support.
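A representative slice of the ingestion layer: parsing a fund-administrator position file (delivered over SFTP) into typed records. The column names here are assumptions for illustration; real administrator file layouts vary. Decimal, not float, is used for prices to avoid rounding drift in reconciliation.

```python
import csv
import io
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass
class Position:
    isin: str
    quantity: Decimal
    nav_price: Decimal
    as_of: date

def parse_admin_file(text: str) -> list[Position]:
    """Parse a fund-administrator position file into typed records.
    Decimal avoids float rounding on NAV prices."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        Position(
            isin=row["isin"],
            quantity=Decimal(row["quantity"]),
            nav_price=Decimal(row["nav_price"]),
            as_of=date.fromisoformat(row["as_of"]),
        )
        for row in reader
    ]

sample = "isin,quantity,nav_price,as_of\nNZGOVT123456,1000,101.25,2025-01-31\n"
positions = parse_admin_file(sample)
print(positions[0].nav_price)  # 101.25
```

Typed records at the ingestion boundary mean malformed files fail loudly at parse time, before bad data reaches the reconciliation layer.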

PostgreSQL + TimescaleDB

Database

Enterprise-grade relational database with a time-series extension purpose-built for financial data. Stores full transaction history, enabling backward reconstruction of portfolio state at any prior date.

OpenClaw

AI Orchestration

Multi-agent AI platform that coordinates data pipelines, schedules reconciliation, manages model routing, and monitors system health. The operational backbone that enables autonomous 24/7 operation.

Anthropic Claude API

AI Inference

Enterprise-grade language models for natural language queries, scenario analysis, anomaly detection, and insight generation. Zero data retention. Swappable for local models if requirements change.

Git + CI/CD

Version Control + Deployment

All code version-controlled in Git (GitHub or your internal GitLab/Azure DevOps). Automated deployment pipeline pushes tested code to the on-premises server. Full audit trail of every change.

Development to deployment pipeline

01

Write

Code written locally using AI-augmented development tools. Architecture defined, reviewed, and validated by senior practitioners.

02

Version

Pushed to Git repository (GitHub or your internal GitLab/Azure DevOps). Full change history, code review, and branch protection.

03

Test

Automated test suite validates data pipeline integrity, reconciliation accuracy, and dashboard rendering before any deployment.

04

Deploy

CI/CD pipeline pushes to the on-premises compute node inside your network. Zero-downtime deployment. Rollback in seconds if needed.

Integration architecture

How data flows from source systems through to the dashboard. Every connection uses standard protocols; no proprietary middleware.

Data Sources

Charles River IMS

REST API

Intraday

Fund Administrator

SFTP

Daily T+1

Bloomberg

BLPAPI / B-PIPE

Real-time

Processing

Python + FastAPI

Data ingestion pipelines

Data Trust Layer

Three-way reconciliation

PostgreSQL + TimescaleDB

Investment Book of Record

Intelligence Layer

AI Analytics

NL queries + scenarios

PM Dashboard

Next.js + TypeScript

CIO Dashboard

Aggregated risk view

Protocols

REST, SFTP, BLPAPI - standard financial services integration. No proprietary middleware.

Reconciliation

Automated three-way check: OMS positions vs fund admin NAV vs market data pricing. Breaks flagged before analytics run.
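The shape of that check can be sketched in Python. This is a simplified illustration, assuming positions keyed by instrument ID; the production check also compares NAV values and prices, not just quantities.

```python
from decimal import Decimal

def three_way_check(oms, admin, market, tolerance=Decimal("0.0001")):
    """Compare OMS positions against fund-admin records and confirm a
    market price exists for each instrument. Returns the breaks to flag
    before any analytics run."""
    breaks = []
    for instrument in sorted(set(oms) | set(admin)):
        q_oms = oms.get(instrument)
        q_admin = admin.get(instrument)
        if q_oms is None or q_admin is None:
            breaks.append((instrument, "missing in one source"))
        elif abs(q_oms - q_admin) > tolerance:
            breaks.append((instrument, f"quantity break: {q_oms} vs {q_admin}"))
        elif instrument not in market:
            breaks.append((instrument, "no market price"))
    return breaks
```

Analytics only run on instruments that pass all three checks, so a stale admin file or a missing Bloomberg price surfaces as a flagged break rather than a silently wrong dashboard number.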

Time-series

Full transaction history stored from day one. Reconstruct portfolio state at any prior date. Enables backward attribution analysis.
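Point-in-time reconstruction is just replaying the transaction log up to the chosen date. In production this is a SQL query over the TimescaleDB transaction table; the minimal Python sketch below shows the principle, with a hypothetical instrument ticker.

```python
from datetime import date

def position_as_of(transactions, instrument, as_of):
    """Rebuild a position on a past date by replaying every transaction
    up to and including that date. transactions: iterable of
    (trade_date, instrument, signed_quantity)."""
    return sum(
        qty for d, ins, qty in transactions
        if ins == instrument and d <= as_of
    )

history = [
    (date(2025, 1, 10), "NZGOVT2031", 1_000),
    (date(2025, 1, 20), "NZGOVT2031", -400),
    (date(2025, 2, 5), "NZGOVT2031", 250),
]
print(position_as_of(history, "NZGOVT2031", date(2025, 1, 31)))  # 600
```

Because the history is append-only, the same replay yields the exact book on any prior date, which is what makes backward attribution analysis possible.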

Model Selection

AI models - your choice, no lock-in.

The platform is model-agnostic. Start with what makes sense, switch anytime.

Claude Opus 4

Anthropic Enterprise

Capability

Frontier reasoning, complex analysis

Data Residency

Zero retention, SOC 2, no training

Use Case

Portfolio analytics, NL queries, scenario testing

Claude Sonnet 4

Anthropic Enterprise

Capability

Fast, efficient for routine tasks

Data Residency

Same guarantees

Use Case

Pipeline orchestration, data formatting, alerts

Llama 3.3 70B

Local (open weights)

Capability

Strong reasoning, fully local

Data Residency

Never leaves premises

Use Case

Air-gapped sensitive data processing

Mistral Large

Local (open weights)

Capability

Good structured output

Data Residency

Never leaves premises

Use Case

Data extraction, report generation

No vendor lock-in. Models are swappable components, not dependencies.

Security and Compliance

Your data. Your rules.

We built the security model for regulated financial services from day one.

On-Premises Only

Portfolio positions and holdings
Investor and client data
Proprietary models and parameters
Natural language queries about portfolios
Trade execution data

Cloud AI - Zero Retention

Dashboard code and UI components
Data pipeline and ETL logic
Documentation and reports
Non-sensitive analytics

Anthropic Enterprise Guarantees

Zero data retention
No model training on client data
SOC 2 Type II certified
Data Processing Agreement included
Annual third-party security audit

NZ Regulatory

NZ Privacy Act 2020 compliant
Native support for FMA reporting frameworks
No customer PII sent to AI layer
Only positions, holdings, and market data

Your Controls

IP allowlisting on API access
Full audit logging of every AI interaction
Your IT team has root access to the node
Encryption at rest (AES-256)
Encryption in transit (TLS 1.3)

Delivery Model

Three phases. Value at every gate.

No big-bang implementation. Each phase delivers a working product you can evaluate before committing to the next.

Phase 1

Replace One PM's Spreadsheet

Weeks 1-8

Connect to data sources, build unified position view, Data Trust Layer for reconciliation, basic NL queries. One PM pilots alongside existing spreadsheet.

Deliverables

Live PM dashboard
Charles River + fund admin + market data connected
Automated daily reconciliation
Phase 2 proposal

Gate Criterion

Is it genuinely better than the spreadsheet?

Phase 2

CIO View and Scenario Engine

Weeks 9-16

Aggregate all PM books, scenario testing (rates, spreads, FX), historical reconstruction, attribution prototype.

Deliverables

CIO aggregated view
Scenario testing engine
NL queries over live data
Historical portfolio reconstruction
Attribution prototype

Gate Criterion

Can the CIO see aggregated risk in real time?

Phase 3

Full Rollout and Attribution

Weeks 17-24

All PMs onboarded, full attribution engine, role-based access, audit trails, hardening.

Deliverables

All PM book views
Full attribution (equity + fixed income + currency)
Role-based access control
Compliance logging
Documentation and training

Gate Criterion

Is it production-ready for the full investment team?

Decision Checklist

What you will need to decide.

A clear checklist so your team knows what is needed and when.

Before Phase 1

AI deployment model: air-gapped, enterprise API, or hybrid?
Hardware: Mac mini, Mac Studio, or rack server?
Pilot PM: who goes first?
Bloomberg API: confirm licence covers programmatic access
Data source access: Charles River API credentials, fund admin SFTP
IT security review of on-premises deployment

Phase 1 Gate - Week 8

PM feedback: is the dashboard better than the spreadsheet?
Reconciliation accuracy: breaks caught vs missed
Go/no-go on Phase 2

Phase 2 Gate - Week 16

CIO assessment: aggregated view meets requirements?
Scenario engine evaluation
Go/no-go on Phase 3

We deploy a team of AI specialists inside your infrastructure.

Each one purpose-built for a specific job, all orchestrated by a platform that has been running in production for over a year. No procurement hoops. No 50-page proposals. Start with a discovery call.