A vPAC pilot in 2026 has plenty of products to choose from: ABB SSC600 SW, GE PhasorController, Siemens SIPROTEC V, the LF Energy SEAPATH reference platform that reached v1.0 in February 2025, and a long list of vPAC Alliance member offerings. Each one has its own answer to the same set of engineering questions: what IEC 61850 interface must a virtualised IED expose; how is it configured; how is its time discipline maintained; how is it supervised in operation; how is it patched without taking the substation off-line?

What has been missing is an industry-wide framework that pins those answers down — and that framework is now being written. CIGRE Working Group B5.84 — full title "Recommendations and constraints for development and interfacing of virtual Intelligent Electronic Device implemented in Protection, Automation and Control Systems" — is the body doing the work. It is convened by David Macdonald (GB), with David Madrid and Marcus Stollfuss as co-authors of the framework article published in CIGRE Future Connections on 20 January 2026 and featured again in ELECTRA 345 in April 2026.

CIGRE is not a standards body — it is the International Council on Large Electric Systems, and its working groups produce Technical Brochures, not normative standards. What CIGRE does have is a Category A liaison between Study Committee B5 (Protection and Automation) and IEC TC 57, the technical committee that owns IEC 61850. In practice this means CIGRE SC B5 work — and specifically the work of vIED-relevant working groups like B5.60 (TB 891) and B5.84 — is a direct input channel to IEC TC 57 WG 10 and WG 17. A vIED definition coming out of B5.84 is therefore not a binding rule, but it is the document IEC editors will reach for when the next IEC 61850 part needs to address virtualisation explicitly.

The framework article and the Terms of Reference (TOR) together set out the expected scope of the final Technical Brochure, due in Q1 2028, and the implications for vPAC pilots being commissioned now.

Why CIGRE bothered

Working Group B5.84 did not appear out of nowhere. It is the direct successor to WG B5.60, which was initiated in 2017 and produced Technical Brochure 891, "Protection, Automation and Control Architectures with Functionality Independent of Hardware (FIH)", in April 2023. TB 891 set the conceptual ground: PAC functions can be decoupled from the box they currently live in, and there are two architectural paths to do it — a middleware-based IED with standardised interfaces to application containers, and a server-based Centralised Protection and Control system that aggregates applications on redundant compute.

TB 891 also gave the industry the lifecycle argument that has been quoted in every vPAC pitch since: PAC hardware lifecycles are roughly 10–15 years, while primary plant lifecycles are 40–60 years. You replace the relay multiple times during the life of the breaker. Each replacement is a panel rebuild, a new SCD, a new commissioning campaign. Functionality independent of hardware promises to break that cycle.

What TB 891 deliberately did not do was specify the vIED itself. It described the architectural option; it did not say what the vIED has to look like at its interfaces, how it is configured, how it is supervised, or how it is patched. That is the gap B5.84 is filling.

The framing in the CIGRE article is blunt: a large percentage of the installed base of IEDs is at the end of its useful life or approaching it. Utilities are about to make the next round of replacement decisions. If they make them without a standards-body view of what a vIED is, the industry will get exactly what it got with the first wave of IEC 61850 deployments — six vendor-specific dialects of the same idea.

What the WG actually covers

The TOR, signed by the SC B5 chair and dated 8 January 2024, lists the topics the WG will treat.

%%{init: {
  "theme": "base",
  "themeVariables": {
    "background": "#fafaf7",
    "fontFamily": "Roboto Flex, Inter, system-ui, sans-serif",
    "primaryColor": "#1e1f1d",
    "primaryTextColor": "#fafaf7",
    "primaryBorderColor": "#58c1da",
    "lineColor": "#cbcbc4",
    "mainBkg": "#1e1f1d",
    "nodeTextColor": "#fafaf7",
    "darkTextColor": "#fafaf7",

    "cScale0":  "#1e1f1d", "cScaleLabel0":  "#fafaf7", "cScaleInv0":  "#58c1da",
    "cScale1":  "#d6f0f6", "cScaleLabel1":  "#0d5566", "cScaleInv1":  "#58c1da",
    "cScale2":  "#fdf0c5", "cScaleLabel2":  "#5c3f00", "cScaleInv2":  "#f5b301",
    "cScale3":  "#d1efe1", "cScaleLabel3":  "#0e4f31", "cScaleInv3":  "#2f9e6a",
    "cScale4":  "#fbd6d8", "cScaleLabel4":  "#6e1014", "cScaleInv4":  "#e5484d",
    "cScale5":  "#c3e7ef", "cScaleLabel5":  "#0d4452", "cScaleInv5":  "#2fa9c6",
    "cScale6":  "#dde0e3", "cScaleLabel6":  "#2e3338", "cScaleInv6":  "#6e7681",
    "cScale7":  "#1e1f1d", "cScaleLabel7":  "#fafaf7", "cScaleInv7":  "#58c1da",
    "cScale8":  "#1e1f1d", "cScaleLabel8":  "#fafaf7", "cScaleInv8":  "#58c1da",
    "cScale9":  "#1e1f1d", "cScaleLabel9":  "#fafaf7", "cScaleInv9":  "#58c1da",
    "cScale10": "#1e1f1d", "cScaleLabel10": "#fafaf7", "cScaleInv10": "#58c1da",
    "cScale11": "#1e1f1d", "cScaleLabel11": "#fafaf7", "cScaleInv11": "#58c1da"
  }
}}%%
mindmap
  root((WG B5.84<br/>scope))
    Definition
      Virtual IED · what it is and is not
      Risks, benefits, solutions
    Interface
      IEC 61850 mandatory
      Configuration rules and files
      Time synchronisation
    Lifecycle
      Administration and update
      Supervision of vIED
    Resources
      Communication capacity
      Memory capacity
      CPU capacity
    Host side
      Server requirements for vIED hosting
      Server-to-vIED interface requirements
      Industrial server requirements
      Hypervisor comparison
      Redundancy
      Hardware/middleware/software monitoring
    Future
      Future of vIED concept
      Link to Digital Twin

A few points are worth pulling out of that map.

The IEC 61850 interface is mandatory. Both the TOR and the CIGRE article say it twice: a vIED must expose an IEC 61850 interface and must be configurable using IEC 61850 configuration rules and files. There is no "vendor proprietary equivalent" path. If a product calls itself a vIED and does not consume an SCD, it is something else.
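
That requirement is concrete enough to test today. As a minimal sketch, assuming a hypothetical SCL file handed over by a candidate product: parse the SCD/ICD and confirm it actually declares IEDs and access points, i.e. that the product's configuration story starts from SCL rather than from a proprietary format.

```python
# Sketch: a first-pass SCL check for a candidate vIED. If the vendor
# cannot hand over a file this parser accepts, the product does not meet
# the "configurable from IEC 61850 rules and files" bar. The input is
# hypothetical; only the SCL namespace below is standardised.
import xml.etree.ElementTree as ET

SCL_NS = "http://www.iec.ch/61850/2003/SCL"

def list_ieds(scl_source):
    """Return (IED name, [access point names]) pairs from an SCL file
    path or file-like object."""
    root = ET.parse(scl_source).getroot()
    return [
        (ied.get("name", "?"),
         [ap.get("name", "?")
          for ap in ied.findall(f"{{{SCL_NS}}}AccessPoint")])
        for ied in root.findall(f"{{{SCL_NS}}}IED")
    ]
```

A product whose "SCD" turns out to be an export-only artefact — parseable here but not consumed by its own configuration tool — fails the spirit of the requirement even if it passes this check.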

Both VM and container methods are in. The CIGRE article describes the two principal methods of virtualisation explicitly: hypervisor-based systems where each VM contains its own operating system, and container-based systems where containers run on the host operating system via a container engine. The WG covers both. It does not pick a winner, and from the way the TOR is worded, it is unlikely to. Expect a comparative treatment.

Supervision and update get a chapter each. Two of the listed topics are "administration and update of virtual IED" and "supervision of virtual IED". For anyone with field experience of bulk firmware upgrades across a fleet of bay-level IEDs, this is the section to read first when the brochure drops. A vIED that is easy to deploy in factory acceptance and impossible to maintain in the field is a regression, not progress.

The host platform is in scope — but not the PACS architecture itself. The WG covers the server side: industrial server requirements, hypervisor comparison, redundancy, hardware/middleware/software monitoring. What it does not cover is how the PACS as a whole is architected, and it does not cover server structure for centralised PACS — that one is reserved for WG B5.70.

This is a useful boundary to understand. B5.84 is telling you what a vIED must be at its interfaces and how its host must behave to support it. It is not telling you whether you should build a centralised PAC, a distributed vPAC, or a hybrid. Architecture is somebody else's working group.

What the WG explicitly does not cover

%%{init: {
  "theme": "base",
  "themeVariables": {
    "background": "#fafaf7",
    "fontFamily": "Roboto Flex, Inter, system-ui, sans-serif",
    "primaryColor": "#ffffff",
    "primaryTextColor": "#1e1f1d",
    "primaryBorderColor": "#e2e2dc",
    "lineColor": "#cbcbc4"
  }
}}%%
flowchart LR
    subgraph IN["B5.84 IN scope"]
        direction TB
        I1["vIED definition"]
        I2["IEC 61850 interface"]
        I3["Configuration · time sync"]
        I4["Update · supervision"]
        I5["CPU · memory · network sizing"]
        I6["Host platform requirements"]
        I7["Hypervisor comparison"]
        I1 ~~~ I2 ~~~ I3 ~~~ I4 ~~~ I5 ~~~ I6 ~~~ I7
    end
    subgraph OUT["OUT of scope"]
        direction TB
        O1["PACS architecture choice"]
        O2["Centralised PACS server structure<br/>(reserved for WG B5.70)"]
        O3["Vendor product certification"]
        O4["Choosing VM vs container as policy"]
        O1 ~~~ O2 ~~~ O3 ~~~ O4
    end
    IN -.->|"complements"| OUT

    classDef inItem  fill:#d6f0f6,stroke:#58c1da,color:#0d5566;
    classDef outItem fill:#dde0e3,stroke:#6e7681,color:#2e3338;
    class I1,I2,I3,I4,I5,I6,I7 inItem
    class O1,O2,O3,O4 outItem
    style IN  fill:#eff8fb,stroke:#58c1da,stroke-width:1.5px,color:#0d5566;
    style OUT fill:#f3f3ee,stroke:#6e7681,stroke-width:1.5px,color:#2e3338;

Engineers who have only read the marketing material around vPAC tend to assume the CIGRE framework will eventually tell them which architecture to pick — central server, distributed servers, hybrid with edge boxes. It will not. WG B5.84 takes the architecture as a given and answers a narrower, more useful question: regardless of where you put the vIED, what does the vIED itself owe to the rest of the system?

The TOR also lists "future of vIED concept" and notes a link between the virtual IED and the Digital Twin concept as a topic the WG will discuss. What that linkage will be in the brochure is not yet on the public record.

The two virtualisation methods, side by side

The CIGRE article describes the two methods; mapped onto what is now actually shipping, they look like this:

%%{init: {
  "theme": "base",
  "themeVariables": {
    "background": "#fafaf7",
    "fontFamily": "Roboto Flex, Inter, system-ui, sans-serif",
    "primaryColor": "#ffffff",
    "primaryTextColor": "#1e1f1d",
    "primaryBorderColor": "#e2e2dc",
    "lineColor": "#cbcbc4"
  }
}}%%
flowchart TB
    subgraph H["Hypervisor + VMs"]
        direction TB
        HW1["Industrial server hardware<br/>(IEC 61850-3 / IEEE 1613)"]:::hw
        HV["Hypervisor (KVM / VMware ESXi)"]:::platform
        OS1["Guest OS · vIED-A"]:::runtime
        OS2["Guest OS · vIED-B"]:::runtime
        OS3["Guest OS · vIED-C"]:::runtime
        APP1["Protection app A"]:::app
        APP2["Protection app B"]:::app
        APP3["Protection app C"]:::app
        HW1 --> HV
        HV --> OS1 --> APP1
        HV --> OS2 --> APP2
        HV --> OS3 --> APP3
    end
    H ~~~ C
    subgraph C["Container engine + containers"]
        direction TB
        HW2["Industrial server hardware<br/>(IEC 61850-3 / IEEE 1613)"]:::hw
        OSH["Host OS (Linux, real-time kernel)"]:::platform
        CE["Container engine"]:::platform
        CT1["Container · vIED-A"]:::runtime
        CT2["Container · vIED-B"]:::runtime
        CT3["Container · vIED-C"]:::runtime
        AP1["Protection app A"]:::app
        AP2["Protection app B"]:::app
        AP3["Protection app C"]:::app
        HW2 --> OSH --> CE
        CE --> CT1 --> AP1
        CE --> CT2 --> AP2
        CE --> CT3 --> AP3
    end

    classDef hw       fill:#1e1f1d,stroke:#1e1f1d,color:#fafaf7;
    classDef platform fill:#d6f0f6,stroke:#58c1da,color:#0d5566;
    classDef runtime  fill:#fdf0c5,stroke:#f5b301,color:#5c3f00;
    classDef app      fill:#d1efe1,stroke:#2f9e6a,color:#0e4f31;
    style H fill:#fafaf7,stroke:#e2e2dc,stroke-width:1.5px,color:#1e1f1d;
    style C fill:#fafaf7,stroke:#e2e2dc,stroke-width:1.5px,color:#1e1f1d;

The CIGRE article states the distinction directly: "the difference between them is that the virtual machine contains its own Operating System (OS) whereas containers all run on the host operating system." That distinction changes the answer to almost every question the brochure intends to address:

  • Time synchronisation: in the VM model, each guest OS terminates PTP separately. In the container model, the host OS holds time and the containers inherit it. The discipline you write for one does not apply to the other.
  • Update administration: updating a VM means orchestrating an OS plus an application; updating a container means swapping an image. The risk profile is different.
  • Supervision: what counts as a "vIED is up" signal differs. A container can be up while its application is dead; a VM can be up while its NIC has lost its driver. The WG will need to define the supervision signal at the IEC 61850 level so it does not depend on which method the host uses.
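
The supervision point can be made concrete. A minimal sketch, with hypothetical field names, of the layered "vIED is up" decision a method-neutral brochure would have to imply: runtime liveness alone proves nothing, and only the bus-level heartbeat makes the vIED alive.

```python
# Sketch: a layered vIED health check, assuming (hypothetically) that the
# host exposes the runtime state and that a supervision tool sees the
# vIED's GOOSE heartbeat on the bus. Illustrates why no single layer
# suffices: a running container with a stale heartbeat is a dead vIED.
from dataclasses import dataclass

@dataclass
class ViedStatus:
    runtime_running: bool     # VM/container reported "running" by the host
    last_goose_time: float    # epoch seconds of last GOOSE heartbeat seen
    goose_ttl_s: float = 2.0  # time-allowed-to-live for the heartbeat

def vied_healthy(s: ViedStatus, now: float) -> bool:
    """A vIED is healthy only if every layer under the answer agrees."""
    if not s.runtime_running:            # layer 1: the runtime is up
        return False
    # layer 2: the application is still publishing on the bus
    return (now - s.last_goose_time) <= s.goose_ttl_s
```

Defining the signal at the IEC 61850 level, as the text above suggests the WG must, amounts to standardising layer 2 and leaving layer 1 to the host platform.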

Today, every vendor making vPAC products has its own answer to those questions. The brochure will not prescribe an implementation, but it is expected to define the IEC 61850-level interface and behaviour that a vIED must expose regardless of the underlying virtualisation method.

Real-time, deterministic, and the things the WG cannot pretend away

The CIGRE article identifies "one of the challenges of virtualisation" as "guaranteeing low latency and deterministic performance." For protection that is decisive: IEC 61850-5 puts a 3 ms transfer time on a Type 1A trip message. A virtualised platform either meets it across the worst-case loaded condition, the worst-case background process, the worst-case noisy neighbour on the same socket — or it does not.

Recent academic work has been explicit about the techniques required. Real-time Linux kernels, CPU pinning to dedicate cores to the vIED, NUMA-aware placement, hardware passthrough for NICs handling SV and GOOSE traffic, configuration of the hypervisor scheduler for deterministic latency. TU Delft's published benchmarks of virtualised controllers on a software-defined IEC 61850 substation showed average GOOSE trip times below the 3 ms IEC 61850 transfer-time requirement — but only with the real-time configuration in place, and only at controlled traffic loads. The MDPI evaluation work makes the same caveat: communication latency increases as you add vIEDs and as background traffic grows, and the relationship is not linear.
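
The acceptance arithmetic that follows is simple but worth writing down. A sketch, with a hypothetical function name, of judging measured trip latencies against the 3 ms budget at a high percentile rather than on average, since the average is exactly the figure those caveats warn against:

```python
# Sketch: judging a virtualised host against the IEC 61850-5 Type 1A
# 3 ms transfer-time budget. The acceptance criterion must be a
# worst-case or high-percentile figure measured under load, not an
# idle-bench average. Function name and percentile choice are
# illustrative, not from the TOR.
def meets_type1a(latencies_ms, budget_ms=3.0, percentile=99.99):
    """True if the chosen high percentile of measured trip latencies
    (nearest-rank method) stays inside the budget."""
    if not latencies_ms:
        raise ValueError("no measurements")
    ordered = sorted(latencies_ms)
    k = int(round(percentile / 100 * len(ordered))) - 1
    k = max(0, min(len(ordered) - 1, k))
    return ordered[k] <= budget_ms
```

A run of 99 samples at 1 ms and one at 3.5 ms passes on average (1.025 ms) and fails here, which is the entire point of the percentile criterion.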

This is the hardest thing the WG has to write down without becoming a vendor selection guide. Expect the brochure to specify the supervision and reporting that lets a utility measure whether its host is meeting determinism targets, rather than to mandate a specific Linux kernel version or hypervisor. The TOR phrasing — "communication, memory and CPU capacity evaluation" — supports that read.

The deliverables and the timeline

The TOR pins four milestones to specific quarters; everything else (working-draft start, internal review cadence) carries no date and should not be inferred:

Section          Milestone                             2027 Q4   2028 Q1   2028 Q2
Brochure         Draft TB for SC review                   ✓
Brochure         Final Technical Brochure (critical)                ✓
Communications   Tutorial                                                      ✓
Communications   Webinar                                                       ✓

Q4 2027 for the draft Technical Brochure presented to the Study Committee. Q1 2028 for the final TB. Q2 2028 for the tutorial and webinar. The ELECTRA-published article being discussed here is part of that schedule — the WG's first public position paper, designed to surface the framework while the brochure is still being written.

By CIGRE working-group standards this is a tight schedule. It is also the right one. The vPAC market is moving fast enough that a brochure published towards the end of the decade would describe a museum.

Three engineering decisions to make before the brochure lands

If you are running a vPAC pilot in 2026, three things follow from the framework as it stands.

First, make sure your vIED candidates expose an IEC 61850 interface that is configurable from SCL files (per the TOR's requirement that vIEDs be "configurable based on IEC 61850 configuration rules and files"), not from a vendor-only GUI. Anything commissioned today against a proprietary configuration tool will sit outside that requirement and is a likely candidate for rework once the brochure is published.

Second, decide now whether you are going down the VM route or the container route, and instrument both your supervision and your update process for that choice. The WG will not pick a winner. It will, however, define what a vIED owes the rest of the system in supervision and update; if your pilot already exposes those signals over MMS, you will not need to retrofit when the brochure lands.

Third, pay attention to what the WG is not covering. The architecture of your PACS — central, distributed, hybrid — is not in B5.84's scope. That is your engineering decision; centralised-PACS server structure is referenced in the B5.84 TOR as work belonging to WG B5.70 (a separate working group). Do not wait for B5.84 to tell you whether you should be building a CPC. It will not.

For supervision specifically — the chapter most likely to be retrofitted into existing tooling rather than freshly developed — utilities running live IEC 61850 traffic monitors today have an advantage. A vIED, just like a hardware IED, publishes data on a bus. Tools like Tekvel Park that already supervise GOOSE, SV and MMS traffic in real PACS networks see a vIED as just another publisher; they do not care whether the source is a relay or a VM, as long as the IEC 61850 interface and behaviour are intact. That is the point the WG is enforcing: the vIED's identity to the rest of the substation is its IEC 61850 interface, not its hardware.

Hypervisor-vs-container and the cybersecurity boundary

Two big questions remain, and the brochure will have to answer them.

The first is hypervisor versus container as a recommendation. The CIGRE article presents both methods evenly. The TOR includes "comparison of hypervisors". Real industry practice is split: VMware ESXi and Linux KVM dominate the VM camp, while Kubernetes, k3s, and bespoke container engines are appearing in the container camp. SEAPATH, the LF Energy reference, is KVM-based, and most vendor-shipped vPAC products today are VM-based. If the WG ends up neutral, the brochure will be a useful reference; if it ends up pushing one method, that will reshape vendor roadmaps.

The second is the boundary with cybersecurity. Virtualisation makes attack surfaces smaller in some places (one hardened host instead of twenty exposed devices) and larger in others (a hypervisor compromise is now a substation compromise). The TOR does not lift cybersecurity into a chapter heading, but the topic is unavoidable. The interesting question is whether B5.84 will cross-reference WG B5.66 / SC D2 work on OT cybersecurity, or write its own short statement on what the vIED interface owes a security architecture. The CIGRE article does not give that away.

For engineers who still treat parts of the IEC 61850 stack as optional — Sampled Values especially, where the cost of merging units and process bus has been used as an argument to defer adoption — the WG B5.84 framework is, in passing, an answer. A vIED has no native I/O. It has no copper inputs. The only way it sees the substation is over the IEC 61850 process bus and station bus. Virtualisation makes the full stack mandatory in a way that hardware IEDs never quite did, because a hardware IED could always fall back to a wired CT input. In the vIED world there is no fallback — the process bus IS the input. The savings from removing twenty IEDs only land if the merging-unit infrastructure underneath them is already there.
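
That dependency is worth instrumenting. A minimal sketch, assuming a 9-2LE-style stream (80 samples per cycle at 50 Hz, smpCnt wrapping at 4000), that counts the input samples a vIED never saw — because in this world a gap in the SV stream is a gap in the CT input:

```python
# Sketch: a merging-unit stream continuity check of the kind a vIED host
# depends on. Counts missing samples implied by non-consecutive smpCnt
# values in the order frames were observed. The wrap value 4000 matches
# an 80-samples-per-cycle stream at 50 Hz; other profiles differ.
def sv_missing_samples(smp_counts, wrap=4000):
    """Count the samples implied missing by gaps in smpCnt."""
    missing = 0
    for prev, cur in zip(smp_counts, smp_counts[1:]):
        expected = (prev + 1) % wrap
        if cur != expected:
            missing += (cur - expected) % wrap
    return missing
```

For a hardware relay, the equivalent failure is an open CT circuit; for a vIED, this counter is the only place the failure shows up at all.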

Be careful about waiting two years for the answers. The substations being commissioned now will live for 40 years. The framework is being written for them.


Sources