Benchmark any validator set, including operators not in the public entity registry, against the network average using the same methodology as the public leaderboard.
API Endpoints: This guide uses /api/v2/ethereum/validators/performance-aggregate, /api/v2/ethereum/validators/apy-roi, /api/v2/ethereum/validators/rewards-aggregate, and /api/v2/ethereum/performance-aggregate.
Plan requirement: validators/performance-aggregate with dashboard_id, validators/apy-roi, and validators/rewards-aggregate are available on all paid plans. performance-aggregate (the network baseline) is available on all plans.
When to use this guide: Use this when your validator set is not covered by the public entity registry, or when you need to benchmark an internal subset (e.g. a specific node operator, region, or client configuration) that does not correspond to a public entity.
Configurable evaluation window: All examples below use 30d, but you can change evaluation_window to 24h, 7d, 30d, or 90d depending on your use case. Use consistent windows across all calls in the same comparison. See Evaluation Windows for guidance.
Define your own operator boundaries regardless of public label coverage. Useful for staking protocols, custodians, or infrastructure providers managing validators across multiple withdrawal credentials.
Same Methodology, Private Data
The benchmark methodology (BeaconScore delta, component efficiency, missed reward % of earned) is identical to the public entity leaderboard. You can produce comparable reports without being labeled in the public registry.
Group-Level Sub-Entity Benchmarking
Organize validators into dashboard groups by node, region, client, or team. Benchmark each group independently using group_id, mirroring the sub-entity drill-down available for public entities.
Operator SLA Reporting
Produce performance reports for customers or internal stakeholders using your own segmentation, with network-relative deltas as the baseline.
Create a Validator Dashboard and add the validators that constitute your private entity. Each dashboard can represent one entity; dashboard groups represent sub-entities.
Use dashboard_id for the full private entity set
Use dashboard_id + group_id for sub-entity benchmarking
Unlabeled validators: If your validators appear as Unknown in the public entity list, they are not assigned to any public entity. Using a dashboard is the correct approach for benchmarking these validators. See Validator Tagging for information on label coverage.
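The two selector shapes above can be sketched as request-body fragments. This is a minimal illustration, assuming the `validator` field structure shown in the group benchmarking example later in this guide; the dashboard and group IDs are placeholders.

```python
DASHBOARD_ID = 123  # placeholder: your dashboard's numeric ID

# Entire private entity: select by dashboard_id only.
full_set = {"validator": {"dashboard_id": DASHBOARD_ID}}

# One sub-entity: add group_id to narrow to a single dashboard group.
sub_entity = {"validator": {"dashboard_id": DASHBOARD_ID, "group_id": 1}}
```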
Compare data.combined.apy.total, data.consensus_layer.apy.total, and data.execution_layer.apy.total against a network-level APY query (omit the validator selector). Use the same ±0.25pp thresholds for coloring.
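The ±0.25pp coloring rule can be expressed as a small helper. A minimal sketch, assuming both APY values are already expressed in percent; the function name and emoji flags are illustrative, mirroring the flags used in the group benchmarking loop below.

```python
def apy_flag(entity_apy: float, network_apy: float, threshold_pp: float = 0.25) -> str:
    """Color an APY delta: green above +threshold, red below -threshold, yellow in between."""
    delta_pp = entity_apy - network_apy  # both values in percent
    if delta_pp >= threshold_pp:
        return "🟢"
    if delta_pp <= -threshold_pp:
        return "🔴"
    return "🟡"
```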
Wei values are JSON strings. All reward values from rewards-aggregate are returned as large integer strings. Always cast with int(str(v)) before dividing by 1e18.
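The cast described above can be wrapped in a small conversion helper; the function name is illustrative.

```python
def wei_to_eth(v) -> float:
    """Convert a wei value (JSON string from rewards-aggregate) to ETH."""
    # Cast the string to int first; Python ints handle arbitrary precision,
    # so large wei amounts are not truncated before the division.
    return int(str(v)) / 1e18
```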
Apply the same missed rewards methodology used for public entities:
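A minimal sketch of the "missed reward % of earned" calculation, assuming earned and missed totals arrive as wei strings from rewards-aggregate; the function name and zero-earned guard are assumptions, not part of the documented API.

```python
def missed_pct_of_earned(earned_wei, missed_wei) -> float:
    """Missed rewards as a percentage of earned rewards, from wei-string inputs."""
    earned = int(str(earned_wei))
    missed = int(str(missed_wei))
    if earned == 0:
        return 0.0  # avoid division by zero for empty or newly activated sets
    return missed / earned * 100
```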
To benchmark multiple internal groups in the same way public entity sub-entities are benchmarked, query each group_id independently and compare to the same network baseline:
```python
import time

GROUP_IDS = [1, 2, 3]  # your dashboard group IDs
net_score = network_beaconscore  # from performance-aggregate (network baseline)

for gid in GROUP_IDS:
    perf = post("/api/v2/ethereum/validators/performance-aggregate", {
        "chain": "mainnet",
        "validator": {"dashboard_id": DASHBOARD_ID, "group_id": gid},
        "range": {"evaluation_window": WINDOW},
    })
    score = perf["data"]["beaconscore"]["total"]
    delta = score - net_score
    # Same ±0.25pp band as the public leaderboard coloring
    flag = "🟢" if delta >= 0.0025 else ("🔴" if delta <= -0.0025 else "🟡")
    print(f"Group {gid}: {score*100:.4f}% {flag} {delta*100:+.2f}pp vs network")
    time.sleep(1)  # stay within rate limits
```
Longer windows reduce noise from proposal luck variance (see residual luck factors). Use 30d as the minimum for meaningful comparisons.
Standardize Group Semantics
Keep group-to-infrastructure mappings consistent. Reorganizing groups mid-period makes historical deltas incomparable.
Audit Dashboard Membership
Verify which validators are in each dashboard before publishing benchmark results. Validator exits, activations, or reassignments can silently change group composition.
Separate EL APY Outliers
If your private set shows elevated EL APY, verify it reflects actual MEV outcomes (check proposal count) rather than a group composition issue.
Store private-set vs network deltas over time. A persistent downward drift in delta (even within the yellow band) may indicate gradual infrastructure degradation before it becomes a visible issue.
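One way to sketch this delta tracking: append each run's delta to a local CSV and flag downward drift with a simple least-squares slope over the recent points. The file name, minimum-points cutoff, and linear-fit approach are all assumptions for illustration, not a prescribed method.

```python
import csv
import time
from pathlib import Path

LOG = Path("benchmark_deltas.csv")  # assumed local log file

def record_delta(delta_pp: float, log: Path = LOG) -> None:
    """Append one benchmark run's delta (in pp) with a timestamp."""
    is_new = not log.exists()
    with log.open("a", newline="") as f:
        w = csv.writer(f)
        if is_new:
            w.writerow(["timestamp", "delta_pp"])
        w.writerow([int(time.time()), f"{delta_pp:.4f}"])

def drifting_down(deltas: list[float], min_points: int = 6) -> bool:
    """Least-squares slope over recent deltas; a negative slope signals downward drift."""
    n = len(deltas)
    if n < min_points:
        return False  # too few points for a meaningful trend
    xs = range(n)
    mx, my = (n - 1) / 2, sum(deltas) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, deltas)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope < 0
```

Even deltas that stay inside the yellow band can trend negative; a slope check like this surfaces that before any single run trips the red threshold.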