Enterprise Deployment
Three deployment paths. Phase-gated delivery. Go/no-go at every stage. Here is exactly how we build and deploy an enterprise intelligence platform inside your environment.
Deployment Model
Every organisation has different security requirements. We support all of them.
Fully air-gapped: All AI processing runs locally on dedicated hardware inside your network. Zero external connections.
Cloud with zero data retention: State-of-the-art AI with contractual zero data retention. Maximum capability, enterprise security.
Hybrid: Sensitive data processed locally; everything else uses cloud AI for maximum speed.
On-Premises Hardware
A dedicated compute node handles orchestration, data pipelines, and dashboard serving.
Data Sources: Charles River IMS (REST API), Fund Administrator (SFTP), Market Data (BLPAPI / B-PIPE)
On-Premises Compute Node: data ingestion, pipeline scheduling, dashboard serving
Outputs: PM Dashboard (live position view), CIO Dashboard (aggregated risk), Automated Reports (scheduled delivery)
Option 1: Mac mini M4 Pro, 24GB RAM, 512GB SSD (~$2,800). Handles data ingestion, pipeline scheduling, and dashboard serving; AI inference is offloaded to the cloud API.
Option 2: Mac Studio M4 Ultra, 192GB unified memory, 2TB SSD (~$12,000). Runs local LLMs for air-gapped processing; required for fully air-gapped or hybrid deployments.
Option 3: Rack-mount Linux server with NVIDIA GPU (custom spec). For organisations requiring standard server hardware; we design to your infrastructure standards.
Orchestration Layer
OpenClaw is the AI orchestration platform that coordinates data pipelines, AI agents, monitoring, and self-healing workflows.
Specialised agents: Each task type (data ingestion, reconciliation, analytics, reporting, anomaly detection) gets its own agent, tested and independently monitored.
Model routing: Each task is routed to the optimal AI model. Complex portfolio analysis goes to frontier models; routine data formatting uses efficient local models. Cost-optimised without capability compromise.
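For illustration, a minimal routing sketch in Python. The model tiers, task categories, and routing table are placeholders, not the platform's actual configuration:

```python
# Minimal model-routing sketch. The model tiers, task categories, and
# routing table are illustrative placeholders, not the platform's
# actual configuration.
from dataclasses import dataclass

@dataclass
class Task:
    category: str            # e.g. "portfolio_analysis", "data_formatting"
    sensitive: bool = False  # sensitive data must never leave premises

# Hypothetical routing table: task category -> model tier.
ROUTES = {
    "portfolio_analysis": "cloud-frontier",
    "scenario_testing":   "cloud-frontier",
    "data_formatting":    "local-lightweight",
    "alerting":           "cloud-efficient",
}

def route(task: Task) -> str:
    """Pick a model tier for a task; sensitive work stays local."""
    if task.sensitive:
        return "local-reasoning"
    return ROUTES.get(task.category, "cloud-efficient")

print(route(Task("portfolio_analysis")))               # cloud-frontier
print(route(Task("data_formatting", sensitive=True)))  # local-reasoning
```

Note the design choice: sensitive work short-circuits to a local model before the routing table is consulted, which is how a hybrid deployment preserves its data-residency guarantees.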
Autonomous operation: Reconciliation runs automatically, anomaly detection fires alerts, and reports generate on schedule. The platform works overnight, weekends, and holidays without human intervention.
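As a rough sketch of what unattended scheduling looks like, assuming APScheduler (one common choice; the platform's actual scheduler is not named here) and placeholder job bodies and times:

```python
# Unattended-scheduling sketch using APScheduler (one common choice;
# the platform's actual scheduler is not named here). Job bodies and
# times are placeholders.
from apscheduler.schedulers.blocking import BlockingScheduler

def run_reconciliation():
    print("reconciling OMS vs fund admin vs market data...")

def generate_morning_report():
    print("rendering and delivering the scheduled report...")

sched = BlockingScheduler(timezone="UTC")
# Reconcile once the fund administrator's daily T+1 file has landed.
sched.add_job(run_reconciliation, "cron", hour=5, minute=30)
# Morning report every day, weekends and holidays included.
sched.add_job(generate_morning_report, "cron", hour=6, minute=0)
sched.start()  # blocks; runs jobs until the process is stopped
```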
Self-healing: When a data feed fails or a pipeline breaks, the system detects it, retries with backoff, and alerts the team only if intervention is genuinely needed. Resilience by design.
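A minimal sketch of the retry pattern, with illustrative attempt counts and delays (real policies would be tuned per feed):

```python
# Retry-with-backoff sketch. Attempt counts and delays are illustrative;
# real policies would be tuned per feed.
import logging
import random
import time

log = logging.getLogger("pipeline")

def with_backoff(fetch, attempts=5, base_delay=2.0):
    """Run fetch(); on failure, retry with exponential backoff plus
    jitter, and only escalate once every retry is exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception as exc:
            if attempt == attempts:
                log.error("feed still failing after %d attempts: %s",
                          attempts, exc)
                raise  # escalation path: intervention genuinely needed
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 1)
            log.warning("attempt %d failed (%s); retrying in %.1fs",
                        attempt, exc, delay)
            time.sleep(delay)
```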
In production today powering ShiftCurve's own operations - document intelligence, data pipelines, and enterprise client delivery.
Development Architecture
A modern, proven stack chosen for performance, maintainability, and speed of delivery. The same technologies powering production platforms today.
Next.js + TypeScript: React-based framework that handles both the dashboard interface and the server-side API routes in a single codebase. Type-safe, fast, and production-grade. The same technology behind the ShiftCurve website and CRM.
Python + FastAPI: Data pipelines that connect to Charles River (REST API), fund administrator files (SFTP), and Bloomberg (BLPAPI). Python is the standard language for financial data engineering, with native Bloomberg API support.
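To make the SFTP leg concrete, a sketch using paramiko. The hostname, username, key path, and file paths are placeholders; the real endpoint and schedule come from your fund administrator:

```python
# SFTP ingestion sketch using paramiko. Hostname, username, key path,
# and file paths are placeholders; the real endpoint comes from your
# fund administrator.
import paramiko

def fetch_admin_file(remote_path: str, local_path: str) -> None:
    """Pull the fund administrator's daily T+1 file over SFTP."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()  # trust only pre-approved host keys
    client.connect(
        "sftp.fundadmin.example.com",
        username="svc_ingest",
        key_filename="/etc/pipeline/id_ed25519",
    )
    try:
        sftp = client.open_sftp()
        sftp.get(remote_path, local_path)
        sftp.close()
    finally:
        client.close()

fetch_admin_file("/outgoing/positions_t1.csv", "/data/raw/positions_t1.csv")
```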
PostgreSQL + TimescaleDB: Enterprise-grade relational database with a time-series extension purpose-built for financial data. Stores full transaction history, enabling backward reconstruction of portfolio state at any prior date.
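Because every transaction is stored, rebuilding positions as of a prior date is a filtered aggregation. A sketch using psycopg2, with an illustrative schema (the real table and columns will differ):

```python
# Point-in-time reconstruction sketch using psycopg2. The transactions
# table and its columns are illustrative; the real schema will differ.
import psycopg2

AS_OF_POSITIONS = """
    SELECT instrument_id, SUM(quantity) AS position
    FROM transactions
    WHERE executed_at <= %s          -- replay history up to the as-of date
    GROUP BY instrument_id
    HAVING SUM(quantity) <> 0        -- drop positions that net to zero
"""

def positions_as_of(conn, as_of: str) -> dict:
    """Rebuild per-instrument positions as they stood on a prior date."""
    with conn.cursor() as cur:
        cur.execute(AS_OF_POSITIONS, (as_of,))
        return dict(cur.fetchall())

conn = psycopg2.connect("dbname=ibor")  # connection string is a placeholder
print(positions_as_of(conn, "2025-06-30"))
```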
OpenClaw: Multi-agent AI platform that coordinates data pipelines, schedules reconciliation, manages model routing, and monitors system health. The operational backbone that enables autonomous 24/7 operation.
AI models: Enterprise-grade language models for natural language queries, scenario analysis, anomaly detection, and insight generation. Zero data retention. Swappable for local models if requirements change.
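A sketch of a natural-language query call, assuming an OpenAI-compatible chat endpoint (the common interface exposed by cloud providers and local inference servers alike, which is what makes models swappable). The base URL, model id, and prompts are placeholders:

```python
# NL-query sketch against an OpenAI-compatible chat endpoint, the common
# interface exposed by cloud providers and local inference servers alike.
# The base URL, model id, and prompts are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://compute-node.internal:8000/v1",  # local server example
    api_key="not-needed-for-local",
)

def ask(question: str, positions_csv: str) -> str:
    """Answer a portfolio question strictly from supplied position data."""
    resp = client.chat.completions.create(
        model="local-reasoning-model",  # swap for a cloud model id if allowed
        messages=[
            {"role": "system",
             "content": "Answer only from the portfolio data provided."},
            {"role": "user",
             "content": f"{question}\n\nPositions:\n{positions_csv}"},
        ],
    )
    return resp.choices[0].message.content

print(ask("What is our net EUR exposure?", "isin,ccy,qty\nXS123,EUR,100"))
```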
Git + CI/CD: All code version-controlled in Git (GitHub or your internal GitLab/Azure DevOps). Automated deployment pipeline pushes tested code to the on-premises server. Full audit trail of every change.
1. Develop: Code written locally using AI-augmented development tools. Architecture defined, reviewed, and validated by senior practitioners.
2. Commit: Pushed to a Git repository (GitHub or your internal GitLab/Azure DevOps). Full change history, code review, and branch protection.
3. Test: Automated test suite validates data pipeline integrity, reconciliation accuracy, and dashboard rendering before any deployment.
4. Deploy: CI/CD pipeline pushes to the on-premises compute node inside your network. Zero-downtime deployment; rollback in seconds if needed.
How data flows from source systems through to the dashboard. Every connection uses standard protocols, no proprietary middleware.
Data Sources
Charles River IMS: REST API, intraday
Fund Administrator: SFTP, daily T+1
Bloomberg: BLPAPI / B-PIPE, real-time

Processing
Python + FastAPI: data ingestion pipelines
Data Trust Layer: three-way reconciliation
PostgreSQL + TimescaleDB: Investment Book of Record

Intelligence Layer
AI Analytics: NL queries + scenarios
PM Dashboard: Next.js + TypeScript
CIO Dashboard: aggregated risk view
Protocols: REST, SFTP, BLPAPI. Standard financial services integration, no proprietary middleware.
Reconciliation: Automated three-way check of OMS positions vs fund admin NAV vs market data pricing, with breaks flagged before analytics run (a minimal sketch follows below).
Time-series: Full transaction history stored from day one. Reconstruct portfolio state at any prior date, enabling backward attribution analysis.
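To illustrate the three-way check, a minimal sketch. The inputs are per-instrument values from each source, and the tolerance is illustrative; real break thresholds are set per asset class:

```python
# Three-way reconciliation sketch. Inputs are per-instrument market
# values from each source; the tolerance is illustrative, with real
# thresholds set per asset class.
TOLERANCE = 1e-4  # relative tolerance for calling two values a match

def find_breaks(oms: dict, admin: dict, market: dict) -> list:
    """Compare OMS, fund admin, and market data per instrument and
    return every instrument where the three sources disagree."""
    flagged = []
    for instrument in sorted(set(oms) | set(admin) | set(market)):
        values = [src.get(instrument) for src in (oms, admin, market)]
        if None in values:
            flagged.append((instrument, "missing in a source", values))
            continue
        lo, hi = min(values), max(values)
        if hi != lo and (lo == 0 or (hi - lo) / abs(lo) > TOLERANCE):
            flagged.append((instrument, "value mismatch", values))
    return flagged

# A break is raised before any analytics run on the data.
print(find_breaks({"XS123": 100.0}, {"XS123": 100.0}, {"XS123": 100.2}))
```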
Model Selection
The platform is model-agnostic. Start with what makes sense, switch anytime.
Cloud frontier model
Capability: Frontier reasoning, complex analysis
Data Residency: Zero retention, SOC 2, no training on your data
Use Case: Portfolio analytics, NL queries, scenario testing

Cloud efficient model
Capability: Fast, efficient for routine tasks
Data Residency: Same zero-retention guarantees
Use Case: Pipeline orchestration, data formatting, alerts

Local reasoning model
Capability: Strong reasoning, fully local
Data Residency: Never leaves premises
Use Case: Air-gapped sensitive data processing

Local lightweight model
Capability: Good structured output
Data Residency: Never leaves premises
Use Case: Data extraction, report generation
No vendor lock-in. Models are swappable components, not dependencies.
Security and Compliance
We built the security model for regulated financial services from day one.
Delivery Model
No big-bang implementation. Each phase delivers a working product you can evaluate before committing to the next.
Phase 1 (Weeks 1-8)
Deliverables: Connect to data sources, build the unified position view, the Data Trust Layer for reconciliation, and basic NL queries. One PM pilots alongside the existing spreadsheet.
Gate Criterion: “Is it genuinely better than the spreadsheet?”

Phase 2 (Weeks 9-16)
Deliverables: Aggregate all PM books, scenario testing (rates, spreads, FX), historical reconstruction, attribution prototype.
Gate Criterion: “Can the CIO see aggregated risk in real time?”

Phase 3 (Weeks 17-24)
Deliverables: All PMs onboarded, full attribution engine, role-based access, audit trails, hardening.
Gate Criterion: “Production-ready for the full investment team?”
Decision Checklist
A clear checklist so your team knows what is needed and when.
Every component is purpose-built for a specific job, all orchestrated by a platform that has been running in production for over a year. No procurement hoops. No 50-page proposals. Start with a discovery call.