Validator uptime and stability reports for Story (beta)

The STAKEME team has launched a new tool that helps validators clearly demonstrate their real uptime and on-chain performance, and that makes validator evaluation easier for the community and the Story Foundation team during delegation rounds and application reviews.

With this tool, validators can generate public, shareable performance reports that include:

• validator uptime and stability over time

• signed and missed blocks (based on real on-chain data; see the verification sketch below)

• overall performance indicators that reflect how reliably the node is operating

• clear, easy-to-read metrics for validators, delegators, and ecosystem teams

Link: https://trackval.storyscan.app/
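
For anyone who wants to cross-check the numbers, here is a minimal sketch of how signed and missed blocks can be verified independently against on-chain data, assuming a standard CometBFT RPC endpoint (the RPC URL and validator address below are placeholders, and this illustrates the general idea rather than the exact logic behind the reports):

```python
# Illustrative sketch (not necessarily the report generator's exact logic):
# checking whether a validator signed a given block, assuming a standard
# CometBFT RPC endpoint. The RPC URL and validator address are placeholders.
import requests

RPC = "https://your-story-rpc.example"  # placeholder consensus RPC endpoint
VALIDATOR_ADDR = "ABCDEF0123456789ABCDEF0123456789ABCDEF01"  # placeholder hex consensus address

def validator_signed(height: int) -> bool:
    """Return True if the validator's signature appears in the commit for this height."""
    resp = requests.get(f"{RPC}/commit", params={"height": height}, timeout=10)
    sigs = resp.json()["result"]["signed_header"]["commit"]["signatures"]
    return any(
        s.get("validator_address") == VALIDATOR_ADDR and s.get("signature")
        for s in sigs
    )

# Count misses over a small sample of heights.
sample = range(6_400_000, 6_400_100)
missed = sum(not validator_signed(h) for h in sample)
print(f"missed {missed} of {len(sample)} sampled blocks")
```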

Why we built this

Right now, it’s difficult to objectively compare validators or quickly verify claims about uptime and reliability.

Application forms and delegation processes often rely on self-reported data, scattered screenshots from different dashboards, and manual checks.

Instead of sending random screenshots, validators can now simply share one clean report link that shows their actual performance.

How to use it:

  1. Find your validator in the explorer
  2. Click Generate report
  3. Share the report link or a screenshot

Current status

This feature is currently in beta testing, and your feedback is very important to us.

If you notice anything unclear or missing, or if you have ideas for additional metrics that would be useful, please let us know. We plan to actively improve the tool based on community feedback.

Our goal is to support a healthy, fair, and transparent validator ecosystem and make high-quality operators more visible and easier to evaluate. Thank you for your attention.

6 Likes

Any way you can generate a report on a weekly basis and post it on the forum?

2 Likes

We’ll think about how to do this better. :saluting_face:

Thanks for this! Looks helpful

https://trackval.storyscan.app/report/cryptomolot-1768677792172

2 Likes

Hi @STAKEME team :waving_hand:

A proposal: calculate avg uptime in reports only over periods when the validator is in the active set, since the calculation is currently performed over the entire history. For example, if a validator was active only 1 day out of 7 and missed just 5 of 20,000 blocks, its real uptime is about 99.98%, but a 7-day report shows an average of ~14%. I think this significantly harms validators' reputations if a regular user visits the page and sees an uptime of ~15% in the latest reports.
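
To illustrate the difference, here is a minimal sketch of the two calculations with placeholder data structures (not the tool's actual code):

```python
# Illustrative sketch with placeholder data structures (not the tool's actual code):
# averaging uptime only over blocks where the validator was in the active set,
# versus averaging over the whole report window.
from dataclasses import dataclass

@dataclass
class BlockRecord:
    height: int
    in_active_set: bool  # was the validator in the active set at this height?
    signed: bool         # did the validator sign this block?

def uptime_full_window(blocks: list[BlockRecord]) -> float:
    """Average over every block in the window (penalizes time outside the active set)."""
    return sum(b.signed for b in blocks) / len(blocks)

def uptime_active_set_only(blocks: list[BlockRecord]) -> float:
    """Average only over blocks the validator was actually expected to sign."""
    active = [b for b in blocks if b.in_active_set]
    return sum(b.signed for b in active) / len(active) if active else 0.0

# The example above: active ~1 day out of 7, missing 5 of 20,000 expected blocks.
active_day = [BlockRecord(h, True, h >= 5) for h in range(20_000)]        # 5 misses while active
inactive_days = [BlockRecord(h, False, False) for h in range(20_000, 140_000)]
blocks = active_day + inactive_days
print(f"full window:     {uptime_full_window(blocks):.3%}")      # ~14.3%
print(f"active-set only: {uptime_active_set_only(blocks):.3%}")  # ~99.975%
```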

Overall, the UI looks great.

2 Likes

First impression - looks great!

I just have some concerns about accuracy, more specifically the number of scanned blocks. We ran a report using your system, and over the last 6 months it indicated fewer than 4M total blocks, which according to our tooling is much lower than it should be. Our internal result is around 6.5M blocks. This is more or less aligned with what was shared in another topic:

“Story’s original design targeted 20 million IP tokens emitted annually, based on an expected 10,368,000 blocks per year. Engineering optimizations have improved block production to approximately 13,140,000 blocks per year.”
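
For reference, that quoted rate lines up with our internal figure. A rough check, assuming an approximately constant block rate over the period:

```python
# Back-of-the-envelope check of the expected block count over 6 months
# at the annual rate quoted above (real block times vary, so this is approximate).
blocks_per_year = 13_140_000
expected_6_months = blocks_per_year / 2
print(f"~{expected_6_months:,.0f} blocks expected")  # ~6,570,000, close to our ~6.5M internal result
```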

Where might these discrepancies be coming from?

Other than that, I think it’s a great contribution to the ecosystem - thank you!

Thanks for the feedback; we really appreciate you flagging this.

You’re right, there was indeed a discrepancy. The issue came from how block counts were previously estimated: we assumed a fixed block time (for example, ~3 seconds per block) and used it to work backwards over a time range. Because actual block production speed has changed over time, this approach undercounted blocks over longer periods.

This has now been fixed. Instead of assuming a constant block time, we now determine block ranges directly using timestamps, which allows us to accurately identify the correct boundaries regardless of how block times varied historically.
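
For anyone curious, here is a minimal sketch of what timestamp-based boundary resolution can look like, assuming a standard CometBFT RPC endpoint (placeholder URL; this illustrates the approach rather than our exact implementation):

```python
# Illustrative sketch of timestamp-based block-range resolution (placeholder RPC
# endpoint; this shows the approach, not necessarily our exact implementation).
# Rather than assuming a fixed block time, binary-search heights until the block
# header timestamp crosses the start of the reporting window.
from datetime import datetime, timedelta, timezone
import requests

RPC = "https://your-story-rpc.example"  # placeholder CometBFT RPC endpoint

def block_time(height: int) -> datetime:
    """Fetch the block header timestamp at a given height."""
    blk = requests.get(f"{RPC}/block", params={"height": height}, timeout=10).json()
    ts = blk["result"]["block"]["header"]["time"]  # RFC 3339 timestamp
    # Second precision is enough for window boundaries.
    return datetime.strptime(ts[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)

def first_height_at_or_after(target: datetime, low: int, high: int) -> int:
    """Binary search for the first height whose timestamp is >= target."""
    while low < high:
        mid = (low + high) // 2
        if block_time(mid) < target:
            low = mid + 1
        else:
            high = mid
    return low

# Example: resolve the start height of a 7-day window ending now.
latest_height = 6_500_000  # placeholder; in practice taken from the RPC /status endpoint
window_start = datetime.now(timezone.utc) - timedelta(days=7)
start_height = first_height_at_or_after(window_start, 1, latest_height)
print(f"report window starts at height {start_height}")
```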

With this change, the numbers are much more accurate and align better with on-chain data and internal tooling.

You can see an example report generated with the updated logic here:

https://trackval.storyscan.app/report/p2p-org-locked–1769134408077

Thanks again; feedback like this really helps us improve the tooling for the ecosystem.

1 Like

This is a great step forward for the Story validator ecosystem. Tools like this are key to improving transparency and accountability across validators, especially during programs, delegation rounds and reviews.

Having a verifiable on-chain performance report reduces reliance on self-reported data and screenshots, and makes it easier for the community and the Story Foundation to evaluate operators fairly.

From the SenseiNode side, we strongly support initiatives like this and are happy to provide feedback during the beta. Shared standards and open metrics are essential for building a healthy, high-quality validator set over the long term.

1 Like