Overview

Benchmark any validator set, including operators not in the public entity registry, against the network average using the same methodology as the public leaderboard.
API Endpoints: This guide uses /api/v2/ethereum/validators/performance-aggregate, /api/v2/ethereum/validators/apy-roi, /api/v2/ethereum/validators/rewards-aggregate, and /api/v2/ethereum/performance-aggregate.
Plan requirement: validators/performance-aggregate with dashboard_id, validators/apy-roi, and validators/rewards-aggregate are available on all paid plans. performance-aggregate (network baseline) is available on all plans.
When to use this guide: Use this when your validator set is not covered by the public entity registry, or when you need to benchmark an internal subset (e.g. a specific node operator, region, or client configuration) that does not correspond to a public entity.
Configurable evaluation window: All examples below use 30d, but you can change evaluation_window to 24h, 7d, 30d, or 90d depending on your use case. Use consistent windows across all calls in the same comparison. See Evaluation Windows for guidance.

Why Use Dashboards as Private Entities?

Custom Entity Definitions

Define your own operator boundaries regardless of public label coverage. Useful for staking protocols, custodians, or infrastructure providers managing validators across multiple withdrawal credentials.

Same Methodology, Private Data

The benchmark methodology (BeaconScore delta, component efficiency, missed reward % of earned) is identical to the public entity leaderboard. You can produce comparable reports without being labeled in the public registry.

Group-Level Sub-Entity Benchmarking

Organize validators into dashboard groups by node, region, client, or team. Benchmark each group independently using group_id, mirroring the sub-entity drill-down available for public entities.

Operator SLA Reporting

Produce performance reports for customers or internal stakeholders using your own segmentation, with network-relative deltas as the baseline.

Step 1: Define Your Private Entity Set

Create a Validator Dashboard and add the validators that constitute your private entity. Each dashboard can represent one entity; dashboard groups represent sub-entities.
  • Use dashboard_id for the full private entity set
  • Use dashboard_id + group_id for sub-entity benchmarking
For setup, grouping workflows, and validator import methods, see Dashboard as Private Sets.
Unlabeled validators: If your validators appear as Unknown in the public entity list, they are not assigned to any public entity. Using a dashboard is the correct approach for benchmarking these validators. See Validator Tagging for information on label coverage.

Step 2: Fetch Private Entity BeaconScore

Fetch the aggregated BeaconScore for your dashboard:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/performance-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": {
      "dashboard_id": 123
    },
    "range": {
      "evaluation_window": "30d"
    }
  }'
For group-level benchmarking (sub-entity equivalent):
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/performance-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": {
      "dashboard_id": 123,
      "group_id": 456
    },
    "range": {
      "evaluation_window": "30d"
    }
  }'
Extract from response:
  • Total BeaconScore: data.beaconscore.total
  • Attestation efficiency: data.beaconscore.attestation
  • Sync committee efficiency: data.beaconscore.sync_committee
  • Proposal efficiency: data.beaconscore.proposal

Step 3: Fetch Network Baseline

curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/performance-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "range": {
      "evaluation_window": "30d"
    }
  }'
Use data.beaconscore.total as the network baseline. Use the same evaluation_window as Step 2.

Step 4: Compute BeaconScore Delta

delta_total = private_beaconscore - network_beaconscore
d_att  = private_attestation  - network_attestation
d_sync = private_sync         - network_sync_committee
d_prop = private_proposal     - network_proposal

Threshold Reference

  • 🟢 Green: delta >= +0.0025
  • 🟡 Yellow: -0.0025 < delta < +0.0025
  • 🔴 Red: delta <= -0.0025
These thresholds apply to the total delta and all three component deltas.
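The thresholds above can be sketched as a small helper. This is a minimal illustration of the coloring rule, not part of the API; the function name and return values are our own.

```python
def classify_delta(delta: float, threshold: float = 0.0025) -> str:
    """Map a BeaconScore delta (private minus network) to a traffic-light flag.

    The same 0.0025 (0.25pp) threshold applies to the total delta and to
    each component delta (attestation, sync committee, proposal).
    """
    if delta >= threshold:
        return "green"
    if delta <= -threshold:
        return "red"
    return "yellow"
```

Apply it to `delta_total`, `d_att`, `d_sync`, and `d_prop` alike.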

Step 5: Fetch APY / ROI for the Private Set

curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/apy-roi \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": {
      "dashboard_id": 123
    },
    "range": {
      "evaluation_window": "30d"
    }
  }'
Compare data.combined.apy.total, data.consensus_layer.apy.total, and data.execution_layer.apy.total against a network-level APY query (omit the validator selector). Use the same ±0.25pp thresholds for coloring.
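The APY comparison can be sketched as follows. `priv_apy` and `net_apy` are assumed to be the `data.combined.apy.total` values from the private-set call and the network-level call (without the validator selector) respectively; the function itself is illustrative.

```python
def apy_flag(priv_apy: float, net_apy: float) -> str:
    """Color an APY comparison using the same +/-0.25pp band as BeaconScore deltas.

    APY values are fractions (e.g. 0.032 = 3.2%); the delta is converted to
    percentage points before comparing against the band.
    """
    delta_pp = (priv_apy - net_apy) * 100  # percentage points
    if delta_pp >= 0.25:
        return "green"
    if delta_pp <= -0.25:
        return "red"
    return "yellow"
```

The same pattern applies to the CL and EL components if you want per-layer coloring.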

Step 6: Fetch Missed Rewards for the Private Set

curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/rewards-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": {
      "dashboard_id": 123
    },
    "range": {
      "evaluation_window": "30d"
    }
  }'
Wei values are JSON strings. All reward values from rewards-aggregate are returned as large integer strings. Always cast with int(str(v)) before dividing by 1e18.
Apply the same missed rewards methodology used for public entities:
def w(v): return int(str(v)) if v is not None else 0

d    = response["data"]
att  = d["attestation"]
sc   = d["sync_committee"]
prop = d["proposal"]

m_cl_eth    = (w(att["head"]["missed_reward"]) + w(att["source"]["missed_reward"]) +
               w(att["target"]["missed_reward"]) + w(sc["missed_reward"]) +
               w(prop["missed_cl_reward"])) / 1e18
m_el_eth    = w(prop["missed_el_reward"]) / 1e18
m_total_eth = m_cl_eth + m_el_eth
earned_eth  = w(d["total_reward"]) / 1e18    # data.total_reward = gross (pre-penalty)

pct_missed  = m_total_eth / (m_total_eth + earned_eth) * 100

Missed % of Earned Thresholds (lower = better)

  • 🟢 Green: < 0.40%
  • 🟡 Yellow: 0.40% – 0.60%
  • 🔴 Red: > 0.60%

Example: Full Private Entity Benchmark Script (Python)

import requests
import time

API_KEY      = "<YOUR_API_KEY>"
BASE         = "https://beaconcha.in"
HEADERS      = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
WINDOW       = "30d"
DASHBOARD_ID = 123

def post(endpoint, payload):
    r = requests.post(f"{BASE}{endpoint}", headers=HEADERS, json=payload, timeout=30)
    r.raise_for_status()
    return r.json()

def w(v): return int(str(v)) if v is not None else 0

# Step 1: Private entity BeaconScore
perf = post("/api/v2/ethereum/validators/performance-aggregate", {
    "chain": "mainnet",
    "validator": {"dashboard_id": DASHBOARD_ID},
    "range": {"evaluation_window": WINDOW},
})
priv_score = perf["data"]["beaconscore"]["total"]
priv_att   = perf["data"]["beaconscore"]["attestation"]
priv_sync  = perf["data"]["beaconscore"]["sync_committee"]
priv_prop  = perf["data"]["beaconscore"]["proposal"]

# Step 2: Network baseline
net = post("/api/v2/ethereum/performance-aggregate", {
    "chain": "mainnet",
    "range": {"evaluation_window": WINDOW},
})
NET = net["data"]["beaconscore"]

delta = priv_score - NET["total"]
print(f"BeaconScore: {priv_score*100:.4f}%  | Network: {NET['total']*100:.4f}%  | Delta: {delta*100:+.2f}pp")
print(f"  Att:  {priv_att*100:.4f}%  vs {NET['attestation']*100:.4f}%  ({(priv_att-NET['attestation'])*100:+.2f}pp)")
print(f"  Sync: {priv_sync*100:.4f}% vs {NET['sync_committee']*100:.4f}% ({(priv_sync-NET['sync_committee'])*100:+.2f}pp)")
print(f"  Prop: {priv_prop*100:.4f}%  vs {NET['proposal']*100:.4f}%  ({(priv_prop-NET['proposal'])*100:+.2f}pp)")

# Step 3: APY
time.sleep(1)
apy_r = post("/api/v2/ethereum/validators/apy-roi", {
    "chain": "mainnet",
    "validator": {"dashboard_id": DASHBOARD_ID},
    "range": {"evaluation_window": WINDOW},
})
apy    = apy_r["data"]["combined"]["apy"]["total"]
cl_apy = apy_r["data"]["consensus_layer"]["apy"]["total"]
el_apy = apy_r["data"]["execution_layer"]["apy"]["total"]
print(f"\nAPY: {apy*100:.2f}%  (CL: {cl_apy*100:.2f}%, EL: {el_apy*100:.2f}%)")

# Step 4: Missed rewards
time.sleep(1)
rw   = post("/api/v2/ethereum/validators/rewards-aggregate", {
    "chain": "mainnet",
    "validator": {"dashboard_id": DASHBOARD_ID},
    "range": {"evaluation_window": WINDOW},
})
d    = rw["data"]
att  = d["attestation"]
sc   = d["sync_committee"]
prop = d["proposal"]

m_cl_eth    = (w(att["head"]["missed_reward"]) + w(att["source"]["missed_reward"]) +
               w(att["target"]["missed_reward"]) + w(sc["missed_reward"]) +
               w(prop["missed_cl_reward"])) / 1e18
m_el_eth    = w(prop["missed_el_reward"]) / 1e18
m_total_eth = m_cl_eth + m_el_eth
earned_eth  = w(d["total_reward"]) / 1e18
pct_missed  = m_total_eth / (m_total_eth + earned_eth) * 100

flag = "🟢" if pct_missed < 0.40 else ("🔴" if pct_missed > 0.60 else "🟡")
print(f"\nMissed: {m_total_eth:.2f} ETH  {flag} {pct_missed:.3f}% of earned")
print(f"  CL: {m_cl_eth:.2f} ETH  |  EL: {m_el_eth:.2f} ETH")

Multi-Group Benchmarking (Sub-Entity Pattern)

To benchmark multiple internal groups in the same way public entity sub-entities are benchmarked, query each group_id independently and compare to the same network baseline:
GROUP_IDS = [1, 2, 3]  # your dashboard group IDs

net_score = network_beaconscore  # from performance-aggregate

for gid in GROUP_IDS:
    perf = post("/api/v2/ethereum/validators/performance-aggregate", {
        "chain": "mainnet",
        "validator": {"dashboard_id": DASHBOARD_ID, "group_id": gid},
        "range": {"evaluation_window": WINDOW},
    })
    score = perf["data"]["beaconscore"]["total"]
    delta = score - net_score
    flag  = "🟢" if delta >= 0.0025 else ("🔴" if delta <= -0.0025 else "🟡")
    print(f"Group {gid}: {score*100:.4f}%  {flag} {delta*100:+.2f}pp vs network")
    time.sleep(1)

Best Practices

Use 30d or 90d Windows

Longer windows reduce noise from proposal luck variance (see residual luck factors). Use 30d as the minimum for meaningful comparisons.

Standardize Group Semantics

Keep group-to-infrastructure mappings consistent. Reorganizing groups mid-period makes historical deltas incomparable.

Audit Dashboard Membership

Verify which validators are in each dashboard before publishing benchmark results. Validator exits, activations, or reassignments can silently change group composition.
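One lightweight way to catch silent composition changes is to fingerprint the membership list before each report. This is a sketch under the assumption that you can export the validator indices from your own records or the dashboard UI; the hashing scheme is our own, not an API feature.

```python
import hashlib

def membership_fingerprint(validator_indices: list[int]) -> str:
    """Stable digest of a validator set; changes whenever membership changes.

    Sorting first makes the digest order-independent, so only actual
    additions/removals (exits, activations, reassignments) change it.
    """
    canonical = ",".join(str(i) for i in sorted(validator_indices))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

Store the fingerprint alongside each published benchmark; if it differs from the previous run, audit the membership change before comparing the numbers.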

Separate EL APY Outliers

If your private set shows elevated EL APY, verify it reflects actual MEV outcomes (check proposal count) rather than a group composition issue.

Track Delta Drift Over Time

Store private-set vs network deltas over time. A persistent downward drift in delta (even within the yellow band) may indicate gradual infrastructure degradation before it becomes a visible issue.
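Drift detection over stored deltas can be sketched with a simple least-squares slope. This assumes you persist one delta sample per period (e.g. daily); the window length and what slope counts as "drift" are judgment calls, not prescribed by the API.

```python
def delta_trend(deltas: list[float]) -> float:
    """Least-squares slope of a series of BeaconScore deltas.

    A persistently negative slope means the private set is drifting
    below the network baseline, even if every sample is still yellow.
    """
    n = len(deltas)
    if n < 2:
        return 0.0
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(deltas) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, deltas))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0
```

For example, a 30-sample window with a clearly negative slope is worth investigating even when every individual delta sits inside the yellow band.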