Compare a single staking entity against the network average across BeaconScore, APY, missed rewards, and sub-entity performance.
API Endpoints: This guide uses /api/v2/ethereum/entities, /api/v2/ethereum/performance-aggregate, /api/v2/ethereum/validators/apy-roi, /api/v2/ethereum/validators/rewards-aggregate, /api/v2/ethereum/entity/sub-entities, and /api/v2/ethereum/validators/metadata.
Premium access: entities, entity/sub-entities, validators/apy-roi (with entity selector), validators/rewards-aggregate (with entity selector), and validators/metadata require a Scale or Enterprise plan. performance-aggregate (network baseline) is available on all plans.
Configurable evaluation window: All examples below use 30d, but you can change evaluation_window to 24h, 7d, 30d, or 90d depending on your use case. Use consistent windows across all calls in the same comparison. See Evaluation Windows for guidance.
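Keeping the window identical across calls can be enforced with a small helper; a minimal sketch, assuming the parameter name `evaluation_window` and a hypothetical entity name:

```python
# Allowed windows per this guide; build_params is a hypothetical helper,
# not part of the API client.
ALLOWED_WINDOWS = {"24h", "7d", "30d", "90d"}

def build_params(evaluation_window="30d", **extra):
    """Return query params, rejecting unsupported windows up front."""
    if evaluation_window not in ALLOWED_WINDOWS:
        raise ValueError(f"unsupported evaluation_window: {evaluation_window}")
    return {"evaluation_window": evaluation_window, **extra}

entity_params = build_params("30d", entity="ExampleEntity")  # entity-scoped calls
network_params = build_params("30d")                         # network baseline call
```

Building both parameter sets from one call site makes it hard to accidentally compare a 7d entity number against a 30d network baseline.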
A BeaconScore of 99.5% is only meaningful in context. Comparing to the network average reveals whether that score represents outperformance, peer performance, or lagging performance.
Incident Validation
If an entity’s score drops, compare against the network delta first. A simultaneous network-wide drop indicates an external event; an entity-only drop points to operational issues.
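The triage rule above can be sketched as a small classifier; the noise threshold here is an assumption, not an API value, and all drops are expressed in percentage points:

```python
def triage(entity_drop_pp: float, network_drop_pp: float, noise_pp: float = 0.1) -> str:
    """Classify a BeaconScore drop: network-wide event vs entity-only issue.

    noise_pp is a hypothetical tolerance for normal window-to-window variance.
    """
    if network_drop_pp > noise_pp:
        # The whole network fell together: likely an external event.
        return "network-wide event"
    if entity_drop_pp > noise_pp:
        # Network held steady while the entity fell: operational issue.
        return "entity-specific"
    return "within noise"
```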
Stakeholder Reporting
Produce clear entity-vs-network deltas for customer SLAs, quarterly reports, or public transparency disclosures.
Sub-Entity Diagnosis
When a parent entity score changes, drill into sub-entities (node operators) to identify which operators are driving the change.
Known behavior: This endpoint does not filter by entity. Any entity parameter is silently ignored. The response always represents the full network. This is the intended behavior for computing the baseline.
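Because the endpoint always returns the full-network aggregate, the baseline only needs to be read once per window. A minimal sketch against a sample payload (only `data.beaconscore.total` is named in this guide; the rest of the payload shape is an assumption):

```python
# Hypothetical /performance-aggregate response, trimmed to the field
# this guide names as the benchmark.
sample_response = {
    "data": {
        "beaconscore": {"total": 0.9942},
    }
}

def network_benchmark(resp: dict) -> float:
    """Extract the network-wide BeaconScore baseline (0-1 scale)."""
    return resp["data"]["beaconscore"]["total"]

baseline = network_benchmark(sample_response)
```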
Use data.beaconscore.total as the network benchmark. Also extract:
Compare data.combined.apy.total, data.consensus_layer.apy.total, and data.execution_layer.apy.total between entity and network. Use the same ±0.25pp thresholds for coloring.
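The ±0.25pp coloring rule can be sketched as a per-component classifier; the APY figures below are hypothetical and expressed in percent:

```python
THRESHOLD_PP = 0.25  # the ±0.25 percentage-point band from this guide

def classify(entity_apy: float, network_apy: float) -> str:
    """Color an APY component: 'above', 'below', or 'in line' with the network."""
    delta = entity_apy - network_apy
    if delta > THRESHOLD_PP:
        return "above"
    if delta < -THRESHOLD_PP:
        return "below"
    return "in line"

# Apply the same rule to each component named in this guide.
entity = {"combined": 4.0, "consensus_layer": 3.4, "execution_layer": 0.6}
network = {"combined": 3.5, "consensus_layer": 3.5, "execution_layer": 1.0}
colors = {k: classify(entity[k], network[k]) for k in entity}
```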
Interpreting EL APY: A high EL APY vs the network in short windows typically reflects proposal luck (favorable MEV), not operational efficiency. Always check whether EL outperformance aligns with an above-average proposal count before attributing it to operator quality.
Wei values are JSON strings. All reward values are returned as strings representing large integers. Always cast with int(str(v)) before dividing by 1e18.
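A minimal conversion helper following the casting rule above (the sample wei value is hypothetical):

```python
WEI_PER_ETH = 10**18

def wei_str_to_eth(v) -> float:
    """Convert a string-encoded wei amount to ETH.

    Casting through int first keeps full integer precision; only the final
    divide produces a float.
    """
    return int(str(v)) / WEI_PER_ETH

eth = wei_str_to_eth("32001500000000000000")  # hypothetical reward value
```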
Compute per-component efficiency and total missed:
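A hedged sketch of that computation against a rewards-aggregate-style payload. The component names and the `earned`/`missed` field names are assumptions, not confirmed response fields; values are wei-encoded strings as described above:

```python
# Hypothetical per-component reward data (string-encoded wei).
components = {
    "attestations": {"earned": "9500000000000000000", "missed": "500000000000000000"},
    "proposals":    {"earned": "2000000000000000000", "missed": "0"},
}

def efficiency(c: dict) -> float:
    """earned / (earned + missed); 1.0 when nothing was at stake."""
    earned = int(str(c["earned"]))
    missed = int(str(c["missed"]))
    total = earned + missed
    return earned / total if total else 1.0

per_component = {name: efficiency(c) for name, c in components.items()}
total_missed_eth = sum(int(str(c["missed"])) for c in components.values()) / 1e18
```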
Compare each sub-entity’s beaconscore against the network baseline. Sub-entities with delta <= -0.0025 warrant investigation. Sortable fields: beaconscore, net_share, validator_count.
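The screen above can be sketched as a filter-and-sort over the sub-entity list; the operator records here are hypothetical, with the delta on the same 0-1 scale as beaconscore:

```python
network_baseline = 0.9940  # from the performance-aggregate call

# Hypothetical sub-entity records with the sortable fields named above.
sub_entities = [
    {"name": "operator-a", "beaconscore": 0.9965, "validator_count": 1200},
    {"name": "operator-b", "beaconscore": 0.9890, "validator_count": 800},
]

# Flag operators trailing the network by 0.25pp or more (delta <= -0.0025).
flagged = [
    s for s in sub_entities
    if s["beaconscore"] - network_baseline <= -0.0025
]
flagged.sort(key=lambda s: s["beaconscore"])  # worst-first for triage
```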
Short windows introduce noise from proposal luck. Use 30d minimum for stable benchmarking.
Track Delta History
Store entity - network deltas over time to distinguish persistent underperformance from temporary variance.
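A minimal sketch of that tracking, assuming one delta recorded per day; the storage choice, retention length, and persistence threshold are all assumptions:

```python
from collections import deque

history = deque(maxlen=90)  # hypothetical: retain ~90 daily deltas

def record(delta: float) -> None:
    history.append(delta)

def persistent_underperformance(threshold: float = -0.0025, min_days: int = 7) -> bool:
    """True only if every one of the last min_days deltas breaches the threshold,
    distinguishing a sustained trend from a single bad window."""
    recent = list(history)[-min_days:]
    return len(recent) >= min_days and all(d <= threshold for d in recent)

for d in [-0.003, -0.004, -0.003, -0.005, -0.003, -0.004, -0.003]:
    record(d)
```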
Inspect Sub-Entities
A parent-level score change can mask improvement in some operators and degradation in others.
Separate CL and EL APY
When APY diverges from peers, check whether the gap is in CL APY (operational) or EL APY (proposal luck). CL APY differences are operationally significant; EL APY differences in short windows often are not.
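The attribution check above can be sketched as follows, reusing the ±0.25pp band from this guide; the layer APY figures are hypothetical percentages:

```python
def attribute_apy_gap(entity: dict, network: dict, tol_pp: float = 0.25) -> list:
    """Return which layers drive an APY gap beyond +/- tol_pp.

    A consensus_layer driver suggests an operational cause; an
    execution_layer driver in a short window often reflects proposal luck.
    """
    return [
        layer
        for layer in ("consensus_layer", "execution_layer")
        if abs(entity[layer] - network[layer]) > tol_pp
    ]

drivers = attribute_apy_gap(
    {"consensus_layer": 3.1, "execution_layer": 1.9},  # hypothetical entity
    {"consensus_layer": 3.0, "execution_layer": 1.0},  # hypothetical network
)
```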