Overview
Rank all public staking entities on Ethereum and reproduce the metrics published in the monthly beaconcha.in leaderboard.
API Endpoints: This guide uses /api/v2/ethereum/entities, /api/v2/ethereum/validators/apy-roi, and /api/v2/ethereum/validators/rewards-aggregate.
Premium access: entities, apy-roi (with entity selector), and rewards-aggregate (with entity selector) require a Scale or Enterprise plan.
Configurable evaluation window: All examples below use 30d, but you can change evaluation_window to 24h, 7d, 30d, or 90d depending on your use case. Use consistent windows across all calls in the same comparison. See Evaluation Windows for guidance.
Why Compare Entities?
Provider Due Diligence: Evaluate operational quality across staking providers before selecting integrations or partners.
Competitive Analysis: Track how entity rankings shift across windows to detect persistent vs. short-lived changes.
Concentration Monitoring: Measure stake concentration using net_share and validator_count.
Reproduce the Leaderboard: Fetch the exact data behind the @beaconcha_in monthly leaderboard posts using the steps below.
Step 1: Fetch Ranked Entities
Retrieve entities sorted by network share (the order used in the leaderboard):
curl --request POST \
--url https://beaconcha.in/api/v2/ethereum/entities \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"chain": "mainnet",
"range": { "evaluation_window": "30d" },
"sort_by": "net_share",
"sort_order": "desc",
"page_size": 10
}'
Filtering Unknown: The maximum page_size is 10. If Unknown appears on the first page, use paging.next_cursor from the response to fetch the next page and complete your top-10 list of named entities:
curl --request POST \
--url https://beaconcha.in/api/v2/ethereum/entities \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"chain": "mainnet",
"range": { "evaluation_window": "30d" },
"sort_by": "net_share",
"sort_order": "desc",
"page_size": 10,
"cursor": "<next_cursor_from_previous_response>"
}'
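The paging loop simply keeps collecting rows until ten named entities are found. A minimal sketch of that filter, using hypothetical sample pages shaped like the entities response:

```python
def top_named_entities(pages, n=10):
    """Collect the top-n named entities across paged responses,
    skipping the aggregate "Unknown" bucket."""
    named = []
    for page in pages:
        for row in page["data"]:
            if row["entity"] != "Unknown":
                named.append(row)
            if len(named) >= n:
                return named
    return named

# Hypothetical sample pages mimicking the /entities response shape
page1 = {"data": [{"entity": "Lido"}, {"entity": "Unknown"}, {"entity": "Coinbase"}]}
page2 = {"data": [{"entity": "Binance"}]}
print([r["entity"] for r in top_named_entities([page1, page2], n=3)])
# → ['Lido', 'Coinbase', 'Binance']
```

In production, `pages` would be produced lazily by following paging.next_cursor, as the full script at the end of this guide does.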
Each row returns: entity, beaconscore, validator_count, net_share, sub_entity_count.
Sort Options
net_share: Who controls the most stake (leaderboard default)
beaconscore: Who performs best operationally
validator_count: Who operates the most validators
sub_entity_count: Who appears most distributed across operators
Step 2: Fetch APY / ROI per Entity
Query annualized returns for each entity. Run serially with a 1-second sleep between calls to avoid rate limits:
curl --request POST \
--url https://beaconcha.in/api/v2/ethereum/validators/apy-roi \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"chain": "mainnet",
"validator": { "entity": "Lido" },
"range": { "evaluation_window": "30d" }
}'
Extract these fields from the response:
Total APY: data.combined.apy.total
CL APY: data.consensus_layer.apy.total
EL APY: data.execution_layer.apy.total
ROI (30d): data.combined.roi.total
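Assuming the response shape above, extraction is a few dictionary lookups; the numbers below are hypothetical illustrations, not real entity data:

```python
# Hypothetical apy-roi response fragment for one entity
resp = {"data": {
    "combined": {"apy": {"total": 0.031}, "roi": {"total": 0.0026}},
    "consensus_layer": {"apy": {"total": 0.027}},
    "execution_layer": {"apy": {"total": 0.004}},
}}

d = resp["data"]
apy = d["combined"]["apy"]["total"]        # total APY
cl_apy = d["consensus_layer"]["apy"]["total"]
el_apy = d["execution_layer"]["apy"]["total"]
roi = d["combined"]["roi"]["total"]        # ROI over the evaluation window

print(f"APY {apy:.2%} (CL {cl_apy:.2%}, EL {el_apy:.2%}), 30d ROI {roi:.2%}")
```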
APY and BeaconScore measure different things. BeaconScore is a duty-efficiency metric (0-100%): a score of 100% means the validator earned the maximum possible rewards for every assigned duty. APY is a financial return metric expressing annualized yield on staked ETH (e.g., 3%). A validator can have a perfect BeaconScore of 100% and an APY of 3% simultaneously; the two are not comparable on the same scale. Use BeaconScore for operational performance comparison and APY for absolute return reporting. See BeaconScore vs. 3rd Party Metrics.
Step 3: Fetch Missed Rewards per Entity
Query missed reward data per entity. Run serially with a 1-second sleep between calls:
curl --request POST \
--url https://beaconcha.in/api/v2/ethereum/validators/rewards-aggregate \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"chain": "mainnet",
"validator": { "entity": "Lido" },
"range": { "evaluation_window": "30d" }
}'
Key Fields (all values are returned as wei strings; divide by 1e18 for ETH)
All wei values from rewards-aggregate are returned as JSON strings, not numbers. Always cast with int(str(v)) before arithmetic.
Earned gross: data.total_reward (use for the % of earned calculation)
Earned net: data.total (after penalties; do NOT use for the % calculation)
API total missed: data.total_missed (precomputed sanity check)
Att. head missed: data.attestation.head.missed_reward
Att. source missed: data.attestation.source.missed_reward
Att. target missed: data.attestation.target.missed_reward
Att. head earned: data.attestation.head.reward (required for attestation efficiency)
Att. source earned: data.attestation.source.reward
Att. target earned: data.attestation.target.reward
Sync missed: data.sync_committee.missed_reward
Sync earned: data.sync_committee.reward (required for sync efficiency)
Proposal CL missed: data.proposal.missed_cl_reward
Proposal EL missed: data.proposal.missed_el_reward (foregone MEV + tips)
Proposal CL earned: data.proposal.attestation_inclusion_reward + sync_inclusion_reward + slashing_inclusion_reward (required for proposal efficiency)
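A minimal pair of helpers following that casting rule (`w` matches the helper used in the full script at the end of this guide; `wei_to_eth` is an illustrative addition):

```python
def w(v):
    """Cast a wei value returned as a JSON string (or int) to int.
    A missing field (None) is treated as zero."""
    return int(str(v)) if v is not None else 0

def wei_to_eth(v):
    """Convert a wei string to ETH as a float (fine for reporting;
    use decimal.Decimal if you need exact accounting)."""
    return w(v) / 1e18

print(wei_to_eth("32000000000000000000"))  # → 32.0
```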
Missed Rewards Methodology
CL missed = att_head.missed + att_source.missed + att_target.missed
+ sync_committee.missed_reward
+ proposal.missed_cl_reward
EL missed = proposal.missed_el_reward (foregone MEV + tips)
Total missed = CL missed + EL missed
% of earned = total_missed / (total_missed + total_reward_eth) × 100
inactivity_leak_penalty is a penalty on earned rewards, not a missed opportunity. Exclude it from all missed reward totals.
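The methodology above as a small, self-contained function; all input values below are hypothetical wei integers used purely for illustration:

```python
def missed_summary(m_head, m_src, m_tgt, m_sync, m_pcl, m_pel, total_reward):
    """Apply the missed-rewards methodology: inputs are wei ints (already
    cast from the API's string fields); returns (total missed ETH, % of earned).
    inactivity_leak_penalty is deliberately not an input — it is excluded."""
    cl_missed = (m_head + m_src + m_tgt + m_sync + m_pcl) / 1e18
    el_missed = m_pel / 1e18                   # foregone MEV + tips
    total_missed = cl_missed + el_missed
    earned = total_reward / 1e18               # gross, data.total_reward
    pct_of_earned = total_missed / (total_missed + earned) * 100
    return total_missed, pct_of_earned

# Hypothetical wei values for illustration
total, pct = missed_summary(
    m_head=10**16, m_src=10**16, m_tgt=10**16,
    m_sync=0, m_pcl=0, m_pel=2 * 10**16,
    total_reward=10 * 10**18,
)
print(round(total, 2), round(pct, 2))  # → 0.05 0.5
```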
Step 4: Fetch the Network Baseline
To compute “vs average” deltas, fetch the network-wide baseline over the same window:
curl --request POST \
--url https://beaconcha.in/api/v2/ethereum/performance-aggregate \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"chain": "mainnet",
"range": { "evaluation_window": "30d" }
}'
Known behavior: This endpoint does not accept an entity filter. It always returns network-wide data regardless of any entity parameter passed. Use rewards-aggregate component ratios to derive per-entity BeaconScore component approximations.
Extract baseline values:
Total BeaconScore: data.beaconscore.total
Attestation efficiency: data.beaconscore.attestation
Sync committee efficiency: data.beaconscore.sync_committee
Proposal efficiency: data.beaconscore.proposal
Step 5: Compute Derived Metrics
Once you have data from all three entity endpoints and the network baseline, compute the leaderboard metrics:
# BeaconScore delta vs network
delta_total = entity_beaconscore - NET_total
# Per-component efficiency (derived from rewards-aggregate)
# Cast all wei fields: int(str(v))
att_earned = (att_head_reward + att_src_reward + att_tgt_reward) / 1e18
att_missed = (m_head + m_src + m_tgt) / 1e18
att_eff = att_earned / (att_earned + att_missed)
sync_earned = sync_reward / 1e18
sync_missed = m_sync / 1e18
sync_eff = sync_earned / (sync_earned + sync_missed)
prop_cl_earned = (prop_att_incl + prop_sync_incl + prop_slash_incl) / 1e18
prop_cl_missed = m_pcl / 1e18
prop_eff = prop_cl_earned / (prop_cl_earned + prop_cl_missed)
# Deltas vs network
d_att = att_eff - NET_attestation
d_sync = sync_eff - NET_sync_committee
d_prop = prop_eff - NET_proposal
# Missed reward totals
m_cl = att_missed + sync_missed + (m_pcl / 1e18)
m_total = m_cl + (m_pel / 1e18)
earned = earned_gross / 1e18 # data.total_reward
pct_missed = m_total / (m_total + earned) * 100
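The per-component efficiency and delta steps can be sketched as a guarded helper; the wei totals and network baseline value below are hypothetical:

```python
def efficiency(earned_wei, missed_wei):
    """Component efficiency = earned / (earned + missed), guarded against
    a zero denominator (e.g. no sync committee duty in the window)."""
    total = earned_wei + missed_wei
    return earned_wei / total if total > 0 else 0.0

# Hypothetical attestation totals for one entity over 30d
att_earned = 950 * 10**18
att_missed = 50 * 10**18
att_eff = efficiency(att_earned, att_missed)

NET_attestation = 0.993          # hypothetical network baseline
d_att = att_eff - NET_attestation
print(f"att_eff={att_eff:.4f}  vs avg {d_att:+.4f}")
```

The same helper applies unchanged to the sync committee and proposal components.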
“vs Average” Thresholds
Applied consistently to BeaconScore total, all three components, and APY deltas:
🟢 Green: delta >= +0.0025
🟡 Yellow: -0.0025 < delta < +0.0025
🔴 Red: delta <= -0.0025
Missed % of Earned Thresholds (lower = better)
🟢 Green: < 0.40%
🟡 Yellow: 0.40% – 0.60%
🔴 Red: > 0.60%
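Both threshold tables can be encoded as small classifiers; the function names are illustrative, and the boundary handling (yellow inclusive of 0.40% and 0.60%) follows the tables above:

```python
def delta_color(delta):
    """Classify a "vs average" delta (BeaconScore total, components, APY)."""
    if delta >= 0.0025:
        return "🟢"
    if delta <= -0.0025:
        return "🔴"
    return "🟡"

def missed_pct_color(pct):
    """Classify missed rewards as a % of earned (lower is better)."""
    if pct < 0.40:
        return "🟢"
    if pct <= 0.60:
        return "🟡"
    return "🔴"

print(delta_color(0.004), delta_color(-0.001), missed_pct_color(0.55))
# → 🟢 🟡 🟡
```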
Example: Full Leaderboard Script (Python)
import requests
import time

API_KEY = "<YOUR_API_KEY>"
BASE = "https://beaconcha.in"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
WINDOW = "30d"

def post(endpoint, payload):
    r = requests.post(f"{BASE}{endpoint}", headers=HEADERS, json=payload, timeout=30)
    r.raise_for_status()
    return r.json()

def w(v):
    """Cast a wei string/int to int."""
    return int(str(v)) if v is not None else 0

# Step 1: Top 10 named entities by net share
raw = post("/api/v2/ethereum/entities", {
    "chain": "mainnet",
    "range": {"evaluation_window": WINDOW},
    "sort_by": "net_share",
    "sort_order": "desc",
    "page_size": 10,
})
entities = [e for e in raw["data"] if e["entity"] != "Unknown"]

# If Unknown was on the page, fetch more to complete 10
while len(entities) < 10 and "next_cursor" in raw.get("paging", {}):
    raw = post("/api/v2/ethereum/entities", {
        "chain": "mainnet",
        "range": {"evaluation_window": WINDOW},
        "sort_by": "net_share",
        "sort_order": "desc",
        "page_size": 10,
        "cursor": raw["paging"]["next_cursor"],
    })
    for e in raw["data"]:
        if e["entity"] != "Unknown":
            entities.append(e)
        if len(entities) >= 10:
            break
entities = entities[:10]

# Step 4: Network baseline (performance-aggregate is network-wide only)
net = post("/api/v2/ethereum/performance-aggregate", {
    "chain": "mainnet",
    "range": {"evaluation_window": WINDOW},
})
NET = net["data"]["beaconscore"]

# Steps 2 & 3: APY and missed rewards per entity
results = []
for e in entities:
    name = e["entity"]
    try:
        apy_r = post("/api/v2/ethereum/validators/apy-roi", {
            "chain": "mainnet",
            "validator": {"entity": name},
            "range": {"evaluation_window": WINDOW},
        })
        apy = apy_r["data"]["combined"]["apy"]["total"]
        cl_apy = apy_r["data"]["consensus_layer"]["apy"]["total"]
        el_apy = apy_r["data"]["execution_layer"]["apy"]["total"]
        roi = apy_r["data"]["combined"]["roi"]["total"]
    except Exception:
        apy = cl_apy = el_apy = roi = None
    time.sleep(1)
    try:
        rw = post("/api/v2/ethereum/validators/rewards-aggregate", {
            "chain": "mainnet",
            "validator": {"entity": name},
            "range": {"evaluation_window": WINDOW},
        })
        d = rw["data"]
        att = d["attestation"]
        sc = d["sync_committee"]
        prop = d["proposal"]
        # Earned
        att_head_r = w(att["head"]["reward"])
        att_src_r = w(att["source"]["reward"])
        att_tgt_r = w(att["target"]["reward"])
        sync_r = w(sc["reward"])
        prop_cl_e = w(prop["attestation_inclusion_reward"]) + \
                    w(prop["sync_inclusion_reward"]) + \
                    w(prop["slashing_inclusion_reward"])
        earned_gross = w(d["total_reward"])
        # Missed
        m_head = w(att["head"]["missed_reward"])
        m_src = w(att["source"]["missed_reward"])
        m_tgt = w(att["target"]["missed_reward"])
        m_sync = w(sc["missed_reward"])
        m_pcl = w(prop["missed_cl_reward"])
        m_pel = w(prop["missed_el_reward"])
        # Step 5: Compute derived metrics
        att_num = att_head_r + att_src_r + att_tgt_r
        att_den = att_num + m_head + m_src + m_tgt
        att_eff = att_num / att_den if att_den > 0 else 0
        sync_eff = sync_r / (sync_r + m_sync) if (sync_r + m_sync) > 0 else 0
        prop_eff = prop_cl_e / (prop_cl_e + m_pcl) if (prop_cl_e + m_pcl) > 0 else 0
        m_cl_eth = (m_head + m_src + m_tgt + m_sync + m_pcl) / 1e18
        m_total_eth = m_cl_eth + m_pel / 1e18
        earned_eth = earned_gross / 1e18
        pct_missed = m_total_eth / (m_total_eth + earned_eth) * 100
    except Exception:
        att_eff = sync_eff = prop_eff = None
        m_total_eth = m_cl_eth = pct_missed = None
    time.sleep(1)
    delta = e["beaconscore"] - NET["total"]
    results.append({
        "rank": len(results) + 1,
        "entity": name,
        "beaconscore": e["beaconscore"],
        "net_share": e["net_share"],
        "delta": delta,
        "apy": apy, "cl_apy": cl_apy, "el_apy": el_apy,
        "att_eff": att_eff, "sync_eff": sync_eff, "prop_eff": prop_eff,
        "m_total": m_total_eth, "m_cl": m_cl_eth, "pct_missed": pct_missed,
    })

# Print summary
print(f"{'#':<3} {'Entity':<20} {'Score':>10} {'vs Avg':>10} {'APY':>8} {'Missed%':>10}")
for r in results:
    score = f"{r['beaconscore'] * 100:.4f}%"
    delta = f"{r['delta'] * 100:+.2f}pp"
    apy = f"{r['apy'] * 100:.2f}%" if r['apy'] is not None else "/"
    miss = f"{r['pct_missed']:.3f}%" if r['pct_missed'] is not None else "/"
    print(f"{r['rank']:<3} {r['entity']:<20} {score:>10} {delta:>10} {apy:>8} {miss:>10}")
Compare Across Time Windows
Run the same query across windows to identify persistent vs. short-lived performance changes:
24h: Incident detection
7d: Weekly review
30d: Monthly benchmarking (leaderboard default)
90d: Long-term trend analysis
Best Practices
Filter Unknown: Always exclude entity == "Unknown" before ranking. Unknown validators are unlabeled and may skew concentration metrics.
Interpret APY Carefully: EL APY scales with proposal luck in short windows. Use BeaconScore as the primary efficiency metric (see residual luck factors).
Respect Rate Limits: V1 and V2 API calls share one combined rate-limit bucket. Sleep 1 second between serial apy-roi and rewards-aggregate calls. If you receive a 429, read the ratelimit-reset header and wait before retrying.
Store Snapshots: Export ranking snapshots periodically for trend reporting and incident investigation.
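The 429 handling described under Respect Rate Limits can be sketched as follows. The retry-loop shape and default pause are illustrative assumptions; the ratelimit-reset header name comes from the note above and is assumed to carry seconds until the window resets:

```python
import time

def reset_wait(headers, default=1.0):
    """Seconds to wait before retrying, from the ratelimit-reset header
    (assumed to carry seconds until the limit window resets)."""
    try:
        return max(float(headers.get("ratelimit-reset", default)), 0.0)
    except (TypeError, ValueError):
        return default

def post_with_retry(do_post, max_retries=3):
    """Call do_post() until it returns a non-429 response.
    do_post must return an object with .status_code and .headers
    (e.g. a requests.Response)."""
    for _ in range(max_retries):
        resp = do_post()
        if resp.status_code != 429:
            return resp
        time.sleep(reset_wait(resp.headers))
    raise RuntimeError("still rate limited after retries")
```

Wrapping the `post` helper from the full script in `post_with_retry` makes the serial per-entity loop resilient to bursty rate limiting.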
Data Freshness
Entity rankings are precomputed and updated hourly.
Validator-to-entity assignments are updated once per day.
For endpoint details, see the Entities section in the V2 API Docs sidebar.