Why you might need more than one CPMS
The assumption built into most EV charging deployments is: one network, one CPMS. In practice, it rarely stays that way.
Fleet operators often inherit chargers from acquisitions. Parking operators want one CPMS for billing and another for grid services. Enterprise campuses run a corporate CPMS alongside a public roaming network. Utilities layer their demand response platform on top of an existing operator backend.
Each of these situations creates the same problem: how do you send the same OCPP stream to two different systems that weren't designed to share it?
The naive solution and why it fails
The obvious approach is to run a TCP-level proxy that duplicates WebSocket frames to multiple backends. This works at the connection layer but breaks immediately at the application layer.
OCPP is a request/response protocol. When a charger sends a StatusNotification, it expects exactly one response with a matching message ID. If you fan out to two backends, you get two responses — and the charger rejects the second one as an unsolicited message, which can trigger error handling or disconnect logic.
You can't proxy OCPP at the TCP level. You have to understand the protocol.
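Concretely, OCPP-J ties every CALLRESULT back to its CALL through a unique message ID. A minimal sketch of why naive duplication fails (frame shapes follow OCPP 1.6-J; the charger-side handling is deliberately simplified):

```python
# An OCPP-J CALL frame: [MessageTypeId=2, unique message id, action, payload]
call = [2, "19223201", "StatusNotification",
        {"connectorId": 1, "errorCode": "NoError", "status": "Available"}]

# Each backend replies with a CALLRESULT: [MessageTypeId=3, same id, payload]
result_a = [3, "19223201", {}]
result_b = [3, "19223201", {}]  # duplicate reply from the second backend

# The charger tracks one outstanding request per message id
pending = {"19223201"}

def on_response(frame):
    # Simplified charger-side handling: a response whose id is no
    # longer pending is unsolicited and gets rejected.
    msg_id = frame[1]
    if msg_id in pending:
        pending.discard(msg_id)
        return "accepted"
    return "rejected: unsolicited"

print(on_response(result_a))  # accepted
print(on_response(result_b))  # rejected: unsolicited
```

The first response consumes the message ID; the second arrives against an empty pending set, which is exactly the unsolicited-message case described above.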
The right architecture
A proper multi-CPMS setup requires an application-layer broker that:
- Maintains one connection per charger — the charger sees a single OCPP endpoint
- Fans out at the message level — each upstream gets a copy with its own message ID sequence
- Aggregates responses — the broker waits for a response from the primary upstream and returns it to the charger; secondary upstreams process asynchronously
- Handles upstream failures gracefully — if a secondary CPMS is down, the primary path isn't affected
This is exactly what EV Cloud's multi-CPMS routing mode implements.
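One way to sketch the message-level fan-out and primary-only response path (illustrative structure and names, not EV Cloud's actual code):

```python
import uuid

class Upstream:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.sent = []  # stand-in for a real WebSocket connection

    def send(self, frame):
        self.sent.append(frame)

class Broker:
    # Fans each charger CALL out to every upstream with a fresh message id,
    # and remembers which (upstream, id) pair maps back to the charger's id.
    def __init__(self, upstreams):
        self.upstreams = upstreams
        self.routes = {}  # (upstream name, upstream msg id) -> charger msg id

    def on_charger_call(self, frame):
        _, charger_id, action, payload = frame
        for up in self.upstreams:
            up_id = uuid.uuid4().hex  # each upstream gets its own id sequence
            self.routes[(up.name, up_id)] = charger_id
            up.send([2, up_id, action, payload])

    def on_upstream_result(self, upstream, frame):
        charger_id = self.routes.pop((upstream.name, frame[1]), None)
        if charger_id is None or upstream.role != "primary":
            return None  # unknown id, or an observer result: discard
        return [3, charger_id, frame[2]]  # only the primary answers the charger

primary = Upstream("primary", "primary")
observer = Upstream("observer", "observer")
broker = Broker([primary, observer])

broker.on_charger_call([2, "42", "Heartbeat", {}])
reply = broker.on_upstream_result(primary, [3, primary.sent[0][1], {}])
print(reply)                                                              # [3, '42', {}]
print(broker.on_upstream_result(observer, [3, observer.sent[0][1], {}]))  # None
```

The charger only ever sees its own message ID come back once, from the primary path, no matter how many upstreams received the fan-out copy.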
Configuration
In EV Cloud, you define upstreams per charger or per network:
{
  "charger_id": "CP-001",
  "upstreams": [
    { "url": "wss://primary.cpms.io/ocpp", "role": "primary" },
    { "url": "wss://analytics.internal/ocpp", "role": "observer" }
  ]
}

primary upstreams participate in the request/response cycle. observer upstreams receive a copy of every message, but their responses are discarded. This is ideal for analytics pipelines, billing systems, or grid services that need visibility but don't control the charger.
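If this configuration is generated programmatically, a small validation step catches role mistakes before they reach a charger. A sketch using the field names from the example above; the specific rules (exactly one primary, secure WebSocket URLs) are assumptions, not documented EV Cloud constraints:

```python
def validate(config):
    # Assumed rules: exactly one primary upstream, wss:// URLs only.
    roles = [u["role"] for u in config["upstreams"]]
    if roles.count("primary") != 1:
        raise ValueError("exactly one primary upstream is required")
    for u in config["upstreams"]:
        if not u["url"].startswith("wss://"):
            raise ValueError(f"{u['url']} must use a secure WebSocket URL")
    return config

validate({
    "charger_id": "CP-001",
    "upstreams": [
        {"url": "wss://primary.cpms.io/ocpp", "role": "primary"},
        {"url": "wss://analytics.internal/ocpp", "role": "observer"},
    ],
})
```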
Operational rules that keep this architecture safe
Multi-backend routing is only useful if the control boundary stays clear.
In production, that usually means:
- one upstream is explicitly authoritative for charger control
- observer systems are prevented from participating in the live request/response path
- response timing and retry behavior are measured on the primary path
- downstream consumers are prepared for eventual consistency, not perfect simultaneity
- rollback is done by changing routing policy, not improvising charger-side changes
If those rules are not explicit, multi-CPMS architecture creates ambiguity instead of resilience.
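The second rule, keeping observers out of the live control path, can be enforced in the broker rather than trusted to each downstream system. An illustrative gate; the action names are real OCPP 1.6 operations, but the policy itself is an example:

```python
# Charger-control actions (from OCPP 1.6) that only the authoritative
# upstream should be allowed to initiate.
CONTROL_ACTIONS = {"RemoteStartTransaction", "RemoteStopTransaction",
                   "ChangeConfiguration", "Reset", "UnlockConnector"}

def allow_upstream_call(role, action):
    # Observer-initiated control requests are dropped at the broker
    # instead of trusting each downstream system to hold back.
    return role == "primary" or action not in CONTROL_ACTIONS

print(allow_upstream_call("primary", "Reset"))   # True
print(allow_upstream_call("observer", "Reset"))  # False
```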
Migration use case
Multi-CPMS is also the cleanest way to migrate between CPMS providers. Route traffic to both old and new systems simultaneously. Verify the new system is handling everything correctly. Cut over by removing the old upstream from configuration: no downtime, no charger reconfiguration, and far lower risk than a hard cutover.
For the right fleet segment, this can compress migration work materially because you change upstream routing policy instead of touching every charger in the field.
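The cutover itself is just a routing-policy change against the configuration shape shown earlier. A hypothetical helper, not a documented EV Cloud migration API:

```python
def cut_over(config, old_url, new_url):
    # Drop the old upstream and promote the new one to primary;
    # the charger-side connection never changes.
    upstreams = [dict(u) for u in config["upstreams"] if u["url"] != old_url]
    for u in upstreams:
        if u["url"] == new_url:
            u["role"] = "primary"
    return {**config, "upstreams": upstreams}

migrating = {
    "charger_id": "CP-001",
    "upstreams": [
        {"url": "wss://old.cpms.io/ocpp", "role": "primary"},
        {"url": "wss://new.cpms.io/ocpp", "role": "observer"},
    ],
}
after = cut_over(migrating, "wss://old.cpms.io/ocpp", "wss://new.cpms.io/ocpp")
print(after["upstreams"])  # [{'url': 'wss://new.cpms.io/ocpp', 'role': 'primary'}]
```

Running the new system as an observer during verification, then promoting it in one policy change, mirrors the no-touch cutover described above.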
Readiness checklist before you run two backends
Before you adopt multi-CPMS orchestration, verify:
- the primary backend role is clearly defined
- observer systems cannot send live charger control responses
- charger event IDs and upstream message tracking are auditable
- timeout and retry behavior on the primary path is measured
- support teams know which system is authoritative during incidents
- rollback is documented per charger group or site wave
If you're evaluating whether this architecture belongs in your vendor shortlist, read How to Evaluate an OCPP Platform. If you're moving toward procurement, pair it with the EV charging software RFP checklist.