Product · January 28, 2026

Multi-CPMS Orchestration: Running Multiple Backends on the Same Network

How to route OCPP traffic to multiple Charge Point Management Systems simultaneously — for redundancy, migration, or multi-tenant deployments.

At a glance

Multi-CPMS orchestration lets operators route one charger fleet's OCPP traffic to more than one backend without reconfiguring hardware. It is one of the safest ways to handle migration, resilience, and parallel analytics.

For CPO platform architects, EV charging operations teams, and migration leads:
  • OCPP must be brokered at the application layer, not simply duplicated at the TCP layer.
  • Primary and observer backend roles are useful for migration and analytics.
  • Multi-backend routing reduces rollout risk during platform changeovers.
  • This architecture is most valuable in brownfield environments with mixed systems.
Yacine El Azrak
Co-founder & CEO
4 min read

Why you might need more than one CPMS

The assumption built into most EV charging deployments is: one network, one CPMS. In practice, it rarely stays that way.

Fleet operators often inherit chargers from acquisitions. Parking operators want one CPMS for billing and another for grid services. Enterprise campuses run a corporate CPMS alongside a public roaming network. Utilities layer their demand response platform on top of an existing operator backend.

Each of these situations creates the same problem: how do you send the same OCPP stream to two different systems that weren't designed to share it?

The naive solution and why it fails

The obvious approach is to run a TCP-level proxy that duplicates WebSocket frames to multiple backends. This works at the connection layer but breaks immediately at the application layer.

OCPP is a request/response protocol. When a charger sends a StatusNotification, it expects exactly one response with a matching message ID. If you fan out to two backends, you get two responses — and the charger rejects the second one as an unsolicited message, which can trigger error handling or disconnect logic.

You can't proxy OCPP at the TCP level. You have to understand the protocol.
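The mismatch shows up in miniature in the sketch below: hypothetical charger-side logic (not any vendor's code) that tracks pending OCPP-J message IDs and treats a second CALLRESULT for the same ID as unsolicited — exactly what a TCP-level fan-out produces.

```python
import json

# OCPP-J framing: CALL = [2, id, action, payload], CALLRESULT = [3, id, payload]
pending = {}  # message IDs awaiting exactly one response


def send_call(msg_id, action, payload):
    """Charger sends a CALL and remembers the ID it expects a reply for."""
    pending[msg_id] = action
    return json.dumps([2, msg_id, action, payload])


def on_frame(raw):
    """Charger-side handling: a CALLRESULT must match a pending call."""
    frame = json.loads(raw)
    if frame[0] == 3:  # CALLRESULT
        msg_id = frame[1]
        if msg_id not in pending:
            # The duplicated frame from a naive TCP fan-out lands here.
            return "unsolicited"
        del pending[msg_id]
        return "accepted"


send_call("42", "StatusNotification", {"status": "Available"})
print(on_frame(json.dumps([3, "42", {}])))  # accepted
print(on_frame(json.dumps([3, "42", {}])))  # unsolicited
```

The second response is not merely redundant; from the charger's point of view it violates the protocol, which is why some firmware reacts with error handling or a disconnect.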

The right architecture

A proper multi-CPMS setup requires an application-layer broker that:

  1. Maintains one connection per charger — the charger sees a single OCPP endpoint
  2. Fans out at the message level — each upstream gets a copy with its own message ID sequence
  3. Aggregates responses — the broker waits for a response from the primary upstream and returns it to the charger; secondary upstreams process asynchronously
  4. Handles upstream failures gracefully — if a secondary CPMS is down, the primary path isn't affected
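Steps 1–3 can be sketched as follows. The `Broker` class, its names, and its shapes are hypothetical illustrations of the technique, not EV Cloud's actual API: a charger CALL is fanned out so that each upstream gets its own message ID, and the mapping back to the charger's ID is recorded for response correlation.

```python
import itertools
import json


class Broker:
    """Minimal sketch of message-level fan-out with per-upstream ID remapping."""

    def __init__(self, upstreams):
        self.upstreams = upstreams        # e.g. {"primary": {...}, "analytics": {...}}
        self.counter = itertools.count(1)
        self.id_map = {}                  # (upstream, upstream_id) -> charger message ID

    def fan_out(self, charger_frame):
        """Copy one charger CALL to every upstream, each with a fresh message ID."""
        _, charger_id, action, payload = json.loads(charger_frame)
        out = {}
        for name in self.upstreams:
            upstream_id = str(next(self.counter))
            self.id_map[(name, upstream_id)] = charger_id
            out[name] = json.dumps([2, upstream_id, action, payload])
        return out


broker = Broker({"primary": {"role": "primary"}, "analytics": {"role": "observer"}})
frames = broker.fan_out(json.dumps([2, "7", "Heartbeat", {}]))
# Each upstream sees a distinct message ID, so its responses can be
# correlated independently and mapped back to the charger's original ID.
```

The ID remapping is the crux: it lets each upstream run its own request/response bookkeeping without ever seeing, or colliding with, another backend's sequence.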

This is exactly what EV Cloud's multi-CPMS routing mode implements.

Configuration

In EV Cloud, you define upstreams per charger or per network:

```json
{
  "charger_id": "CP-001",
  "upstreams": [
    { "url": "wss://primary.cpms.io/ocpp", "role": "primary" },
    { "url": "wss://analytics.internal/ocpp", "role": "observer" }
  ]
}
```

`primary` upstreams participate in the request/response cycle. `observer` upstreams receive a copy of every message, but their responses are discarded. This is ideal for analytics pipelines, billing systems, or grid services that need visibility but don't control the charger.
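The role split can be sketched in a few lines (hostnames are taken from the config example above; the function name and role table are hypothetical): only the primary's response travels back to the charger, while observer responses are dropped.

```python
import json

# Role table derived from the upstream config; keys are hypothetical hostnames.
ROLES = {"primary.cpms.io": "primary", "analytics.internal": "observer"}


def handle_upstream_response(upstream, frame):
    """Return the frame to forward to the charger, or None to discard it."""
    if ROLES.get(upstream) == "primary":
        return frame  # forwarded downstream to the charger
    return None       # observer output is dropped, never reaches the charger


print(handle_upstream_response("primary.cpms.io", json.dumps([3, "42", {}])))
print(handle_upstream_response("analytics.internal", json.dumps([3, "42", {}])))  # None
```

Discarding observer responses is what keeps the charger's one-response invariant intact even with several backends attached.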

Operational rules that keep this architecture safe

Multi-backend routing is only useful if the control boundary stays clear.

In production, that usually means:

  1. one upstream is explicitly authoritative for charger control
  2. observer systems are prevented from participating in the live request/response path
  3. response timing and retry behavior are measured on the primary path
  4. downstream consumers are prepared for eventual consistency, not perfect simultaneity
  5. rollback is done by changing routing policy, not improvising charger-side changes

If those rules are not explicit, multi-CPMS architecture creates ambiguity instead of resilience.

Migration use case

Multi-CPMS is also the cleanest way to migrate between CPMS providers. Route traffic to both old and new systems simultaneously. Verify the new system is handling everything correctly. Cut over by removing the old upstream from configuration — no downtime, no charger reconfiguration, and far less rollout risk.
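Under the config shape shown earlier, the cutover is a pure routing-policy edit. The sketch below (hypothetical URLs and helper, not a real CLI or API) promotes the new backend and drops every other upstream, leaving the charger untouched:

```python
# Migration cutover as a routing-policy change; the charger never sees it.
config = {
    "charger_id": "CP-001",
    "upstreams": [
        {"url": "wss://old.cpms.example/ocpp", "role": "primary"},
        {"url": "wss://new.cpms.example/ocpp", "role": "observer"},
    ],
}


def cut_over(cfg, new_primary_url):
    """Promote the named upstream to primary and remove all others."""
    cfg["upstreams"] = [
        {"url": u["url"], "role": "primary"}
        for u in cfg["upstreams"]
        if u["url"] == new_primary_url
    ]
    return cfg


cut_over(config, "wss://new.cpms.example/ocpp")
print(config["upstreams"])  # [{'url': 'wss://new.cpms.example/ocpp', 'role': 'primary'}]
```

Rollback is the mirror image: reinstate the old upstream in the policy instead of touching chargers in the field.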

For the right fleet segment, this can compress migration work materially because you change upstream routing policy instead of touching every charger in the field.

Readiness checklist before you run two backends

Before you adopt multi-CPMS orchestration, verify:

  • the primary backend role is clearly defined
  • observer systems cannot send live charger control responses
  • charger event IDs and upstream message tracking are auditable
  • timeout and retry behavior on the primary path is measured
  • support teams know which system is authoritative during incidents
  • rollback is documented per charger group or site wave

If you're evaluating whether this architecture belongs in your vendor shortlist, read How to Evaluate an OCPP Platform. If you're moving toward procurement, pair it with the EV charging software RFP checklist.
