Overview

Compare a single staking entity against the network average across BeaconScore, APY, missed rewards, and sub-entity performance.
API Endpoints: This guide uses /api/v2/ethereum/entities, /api/v2/ethereum/performance-aggregate, /api/v2/ethereum/validators/apy-roi, /api/v2/ethereum/validators/rewards-aggregate, /api/v2/ethereum/entity/sub-entities, and /api/v2/ethereum/validators/metadata.
Premium access: entities, entity/sub-entities, validators/apy-roi (with entity selector), validators/rewards-aggregate (with entity selector), and validators/metadata require a Scale or Enterprise plan. performance-aggregate (network baseline) is available on all plans.
Attribution required: If you display BeaconScore publicly, follow the BeaconScore License and License Materials.
Configurable evaluation window: All examples below use 30d, but you can change evaluation_window to 24h, 7d, 30d, or 90d depending on your use case. Use consistent windows across all calls in the same comparison. See Evaluation Windows for guidance.

Why Benchmark vs Network?

Baseline Performance Check

A BeaconScore of 99.5% is only meaningful in context. Comparing to the network average reveals whether that score represents outperformance, peer performance, or lagging performance.

Incident Validation

If an entity’s score drops, compare against the network delta first. A simultaneous network-wide drop indicates an external event; an entity-only drop points to operational issues.

Stakeholder Reporting

Produce clear entity-vs-network deltas for customer SLAs, quarterly reports, or public transparency disclosures.

Sub-Entity Diagnosis

When a parent entity score changes, drill into sub-entities (node operators) to identify which operators are driving the change.
| Metric | Luck-normalized? | Best use |
| --- | --- | --- |
| BeaconScore | Yes (see residual factors) | Performance comparison across entities |
| APY / ROI | No | Absolute return reporting |
| Missed rewards | Yes (% of earned) | Operational efficiency |
For the full BeaconScore methodology, see BeaconScore vs. 3rd Party Metrics.

Step 1: Get the Entity BeaconScore

Fetch the entity list and select your target:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/entities \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "range": { "evaluation_window": "30d" }
  }'
From the response, use the target entity’s beaconscore field (decimal, e.g. 0.9947).
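A minimal sketch of selecting the target entity from the entities response. The response shape and the example record below are assumptions based on the fields described in this guide, not live API output:

```python
# Select the target entity record from a mocked /entities response.
def find_entity(entities_response, name):
    """Return the entity record matching `name`, or None if absent."""
    return next(
        (e for e in entities_response["data"] if e["entity"] == name),
        None,
    )

# Mocked response containing only the fields this guide uses:
resp = {"data": [{"entity": "Lido", "beaconscore": 0.9947}]}
entity = find_entity(resp, "Lido")
print(entity["beaconscore"])  # 0.9947
```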

Step 2: Get the Network Baseline

Fetch the network-wide performance aggregate over the same evaluation window:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/performance-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "range": { "evaluation_window": "30d" }
  }'
Known behavior: This endpoint does not filter by entity. Any entity parameter is silently ignored. The response always represents the full network. This is the intended behavior for computing the baseline.
Use data.beaconscore.total as the network benchmark. Also extract:
| Field | Path | Use |
| --- | --- | --- |
| Total BeaconScore | data.beaconscore.total | Overall delta |
| Attestation efficiency | data.beaconscore.attestation | Component delta |
| Sync committee efficiency | data.beaconscore.sync_committee | Component delta |
| Proposal efficiency | data.beaconscore.proposal | Component delta |
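The baseline extraction can be sketched as follows; the response here is mocked to the shape described above, and the numbers are illustrative, not real network values:

```python
# Mocked performance-aggregate response shaped per the field table above.
net = {"data": {"beaconscore": {
    "total": 0.9951,
    "attestation": 0.9960,
    "sync_committee": 0.9940,
    "proposal": 0.9935,
}}}

NET = net["data"]["beaconscore"]
# Keep the four baseline components needed for the component deltas.
baseline = {k: NET[k] for k in ("total", "attestation", "sync_committee", "proposal")}
print(baseline["total"])  # 0.9951
```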

Step 3: Compute BeaconScore Delta

delta_total = entity_beaconscore - network_beaconscore

Threshold Reference

| 🟢 Green | 🟡 Yellow | 🔴 Red |
| --- | --- | --- |
| delta >= +0.0025 (+0.25pp above network) | -0.0025 < delta < +0.0025 (within ±0.25pp) | delta <= -0.0025 (0.25pp or more below network) |
These same thresholds apply to all three component deltas (attestation, sync committee, proposal).
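The delta computation and threshold coloring above can be sketched as a small helper; the entity and network scores below are example values, not live data:

```python
# Map a BeaconScore delta to the traffic-light bands above (±0.25pp).
def classify(delta, band=0.0025):
    """Return 'green', 'yellow', or 'red' for a score delta."""
    if delta >= band:
        return "green"
    if delta <= -band:
        return "red"
    return "yellow"

entity_beaconscore  = 0.9947   # example values
network_beaconscore = 0.9951
delta_total = entity_beaconscore - network_beaconscore
print(f"{delta_total*100:+.2f}pp -> {classify(delta_total)}")  # -0.04pp -> yellow
```

The same helper applies unchanged to the attestation, sync committee, and proposal component deltas.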

Step 4: Fetch APY and Compare vs Network

Fetch APY for the target entity:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/apy-roi \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": { "entity": "Lido" },
    "range": { "evaluation_window": "30d" }
  }'
Then fetch APY for the network baseline (no entity selector):
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/apy-roi \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "range": { "evaluation_window": "30d" }
  }'
Compare data.combined.apy.total, data.consensus_layer.apy.total, and data.execution_layer.apy.total between entity and network. Use the same ±0.25pp thresholds for coloring.
Interpreting EL APY: A high EL APY vs the network in short windows typically reflects proposal luck (favorable MEV), not operational efficiency. Always check whether EL outperformance aligns with an above-average proposal count before attributing it to operator quality.
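A sketch of the APY comparison using the same ±0.25pp band. The APY values below are illustrative placeholders, not real API output, and "cl"/"el" stand in for the consensus-layer and execution-layer totals:

```python
# Compare entity vs network APY per component with the ±0.25pp band.
BAND = 0.0025

entity_apy  = {"combined": 0.0341, "cl": 0.0302, "el": 0.0039}
network_apy = {"combined": 0.0335, "cl": 0.0301, "el": 0.0034}

results = {}
for comp in ("combined", "cl", "el"):
    d = entity_apy[comp] - network_apy[comp]
    color = "green" if d >= BAND else ("red" if d <= -BAND else "yellow")
    results[comp] = (round(d * 100, 2), color)
    print(f"{comp:<8} {d*100:+.2f}pp {color}")
```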

Step 5: Fetch Missed Rewards and Compute Efficiency

Fetch missed rewards for the entity:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/rewards-aggregate \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": { "entity": "Lido" },
    "range": { "evaluation_window": "30d" }
  }'
Wei values are JSON strings. All reward values are returned as strings representing large integers. Always cast with int(str(v)) before dividing by 1e18.
Compute per-component efficiency and total missed:
def w(v): return int(str(v)) if v is not None else 0

d    = response["data"]
att  = d["attestation"]
sc   = d["sync_committee"]
prop = d["proposal"]

# Component efficiency: earned / (earned + missed) per component
att_earned = w(att["head"]["reward"]) + w(att["source"]["reward"]) + w(att["target"]["reward"])
att_missed = w(att["head"]["missed_reward"]) + w(att["source"]["missed_reward"]) + w(att["target"]["missed_reward"])
att_eff    = att_earned / (att_earned + att_missed)

sync_eff = w(sc["reward"]) / (w(sc["reward"]) + w(sc["missed_reward"]))

prop_cl_earned = w(prop["attestation_inclusion_reward"]) + w(prop["sync_inclusion_reward"]) + w(prop["slashing_inclusion_reward"])
prop_eff       = prop_cl_earned / (prop_cl_earned + w(prop["missed_cl_reward"]))

# Deltas vs network baseline components
d_att  = att_eff  - NET_attestation
d_sync = sync_eff - NET_sync_committee
d_prop = prop_eff - NET_proposal

# Missed totals
m_cl_eth    = (w(att["head"]["missed_reward"]) + w(att["source"]["missed_reward"]) +
               w(att["target"]["missed_reward"]) + w(sc["missed_reward"]) +
               w(prop["missed_cl_reward"])) / 1e18
m_el_eth    = w(prop["missed_el_reward"]) / 1e18
m_total_eth = m_cl_eth + m_el_eth
earned_eth  = w(d["total_reward"]) / 1e18   # gross, not net

pct_missed = m_total_eth / (m_total_eth + earned_eth) * 100

Missed % of Earned Thresholds (lower = better)

| 🟢 Green | 🟡 Yellow | 🔴 Red |
| --- | --- | --- |
| < 0.40% | 0.40% – 0.60% | > 0.60% |
Absolute ETH missed scales with validator count. Use % of earned for fair cross-entity comparisons.
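The thresholds above can be sketched as a helper; the 0.40% and 0.60% boundaries are taken to be inclusive of yellow:

```python
# Traffic-light label for missed rewards as a % of earned (lower is better).
def classify_missed(pct):
    """Return 'green' (<0.40%), 'yellow' (0.40-0.60%), or 'red' (>0.60%)."""
    if pct < 0.40:
        return "green"
    if pct <= 0.60:
        return "yellow"
    return "red"

print(classify_missed(0.35))  # green
```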

Step 6: Drill Into Sub-Entities

When parent performance changes, identify which operators are driving it:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/entity/sub-entities \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "entity": "Lido",
    "range": { "evaluation_window": "30d" },
    "sort_by": "beaconscore",
    "sort_order": "desc"
  }'
Compare each sub-entity’s beaconscore against the network baseline. Sub-entities with delta <= -0.0025 warrant investigation. Sortable fields: beaconscore, net_share, validator_count.

Step 7: Map Validators to Entities (Optional)

If you start from validator indices rather than an entity name, resolve entity assignments first:
curl --request POST \
  --url https://beaconcha.in/api/v2/ethereum/validators/metadata \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "chain": "mainnet",
    "validator": {
      "validator_identifiers": [1, 2, 3]
    },
    "page_size": 10
  }'
The response includes entity and sub_entity per validator. Use these to route validators to the correct entity benchmarking queries.
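The routing step can be sketched as a simple grouping pass. The row shape (validator_index / entity / sub_entity) follows the fields named above; the rows themselves are mocked, not live metadata output:

```python
# Group validator indices by their resolved entity for downstream queries.
from collections import defaultdict

rows = [
    {"validator_index": 1, "entity": "Lido", "sub_entity": "OperatorA"},
    {"validator_index": 2, "entity": "Lido", "sub_entity": "OperatorB"},
    {"validator_index": 3, "entity": "Coinbase", "sub_entity": None},
]

by_entity = defaultdict(list)
for r in rows:
    by_entity[r["entity"]].append(r["validator_index"])

print(dict(by_entity))  # {'Lido': [1, 2], 'Coinbase': [3]}
```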

Example: Full Entity Benchmark Script (Python)

import requests
import time

API_KEY     = "<YOUR_API_KEY>"
BASE        = "https://beaconcha.in"
HEADERS     = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
WINDOW      = "30d"
ENTITY_NAME = "Lido"

def post(endpoint, payload):
    r = requests.post(f"{BASE}{endpoint}", headers=HEADERS, json=payload, timeout=30)
    r.raise_for_status()
    return r.json()

def w(v): return int(str(v)) if v is not None else 0

# Entity BeaconScore
entities = post("/api/v2/ethereum/entities", {
    "chain": "mainnet",
    "range": {"evaluation_window": WINDOW},
})
entity = next((e for e in entities["data"] if e["entity"] == ENTITY_NAME), None)
if not entity:
    raise ValueError(f"Entity not found: {ENTITY_NAME}")

# Network baseline
net = post("/api/v2/ethereum/performance-aggregate", {
    "chain": "mainnet",
    "range": {"evaluation_window": WINDOW},
})
NET = net["data"]["beaconscore"]
delta = entity["beaconscore"] - NET["total"]
print(f"BeaconScore: {entity['beaconscore']*100:.4f}% | Network: {NET['total']*100:.4f}% | Delta: {delta*100:+.2f}pp")

# APY
time.sleep(1)
apy_r = post("/api/v2/ethereum/validators/apy-roi", {
    "chain": "mainnet",
    "validator": {"entity": ENTITY_NAME},
    "range": {"evaluation_window": WINDOW},
})
apy = apy_r["data"]["combined"]["apy"]["total"]
print(f"APY: {apy*100:.2f}%  CL: {apy_r['data']['consensus_layer']['apy']['total']*100:.2f}%  EL: {apy_r['data']['execution_layer']['apy']['total']*100:.2f}%")

# Missed rewards
time.sleep(1)
rw = post("/api/v2/ethereum/validators/rewards-aggregate", {
    "chain": "mainnet",
    "validator": {"entity": ENTITY_NAME},
    "range": {"evaluation_window": WINDOW},
})
d    = rw["data"]
att  = d["attestation"]
sc   = d["sync_committee"]
prop = d["proposal"]

m_cl_eth    = (w(att["head"]["missed_reward"]) + w(att["source"]["missed_reward"]) +
               w(att["target"]["missed_reward"]) + w(sc["missed_reward"]) +
               w(prop["missed_cl_reward"])) / 1e18
m_el_eth    = w(prop["missed_el_reward"]) / 1e18
m_total_eth = m_cl_eth + m_el_eth
earned_eth  = w(d["total_reward"]) / 1e18
pct_missed  = m_total_eth / (m_total_eth + earned_eth) * 100
print(f"Missed: {m_total_eth:.2f} ETH ({pct_missed:.3f}% of earned)")

# Sub-entity drill-down
time.sleep(1)
subs = post("/api/v2/ethereum/entity/sub-entities", {
    "chain": "mainnet",
    "entity": ENTITY_NAME,
    "range": {"evaluation_window": WINDOW},
    "sort_by": "beaconscore",
    "sort_order": "desc",
})
print(f"\nSub-entities for {ENTITY_NAME}:")
for row in subs["data"][:10]:
    sub_delta = row["beaconscore"] - NET["total"]
    flag = "🟢" if sub_delta >= 0.0025 else ("🔴" if sub_delta <= -0.0025 else "🟡")
    print(f"  {flag} {row['sub_entity']:<25} {row['beaconscore']*100:.4f}% ({sub_delta*100:+.2f}pp)")

Best Practices

Use 30d or 90d Windows

Short windows introduce noise from proposal luck. Use 30d minimum for stable benchmarking.

Track Delta History

Store entity-minus-network deltas over time to distinguish persistent underperformance from temporary variance.
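One way to persist the history is to append dated rows to a CSV; the file path and column layout below are illustrative choices, not part of the API:

```python
# Append (date, entity, delta) rows so delta trends can be charted later.
import csv
import datetime

def record_delta(path, entity, delta):
    """Append today's entity-minus-network delta to a CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), entity, f"{delta:.6f}"]
        )
```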

Inspect Sub-Entities

A parent-level score change can mask improvement in some operators and degradation in others.

Separate CL and EL APY

When APY diverges from peers, check whether the gap is in CL APY (operational) or EL APY (proposal luck). CL APY differences are operationally significant; EL APY differences in short windows often are not.

Data Freshness

  • Entity and sub-entity data is precomputed and updated hourly.
  • Validator-to-entity assignments are updated once per day.

For endpoint details, see the Entities and Network sections in the V2 API Docs sidebar.