Scoring
Scoring is a core component of the consensus system in any Hypertensor subnet. It determines how each node is evaluated based on its performance, behavior, and contributions.
Each subnet is responsible for defining its own scoring algorithm, tailored to its unique goals and architecture. There are no hardcoded or on-chain role restrictions — subnets are free to design and implement any node classification or evaluation strategy they choose.
What Can Nodes Be Scored On?
You can design scoring around virtually any metric relevant to your subnet’s use case. For example:
Role-specific behavior
Validator accuracy, timeliness, or data quality
Worker reliability or output correctness
Performance metrics
Latency and uptime
Speed of consensus data submission or attestation
Economic or trust metrics
Delegate stake rate
Stake balance or reward history
Proof-of-Work (PoW) or Proof-of-Useful-Work (PoUW)
Reputation or longevity
Time active in the network
Historical behavior or consistency
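The metrics above can be combined however the subnet chooses. As a minimal illustration, here is a hypothetical weighted composite score; the metric names and weights are examples only, not part of Hypertensor:

```python
# Hypothetical example: combine several normalized per-node metrics
# (each assumed to be in [0, 1]) into a single weighted score.

def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the metrics named in `weights`; missing metrics count as 0."""
    total_weight = sum(weights.values())
    return sum(metrics.get(name, 0.0) * w for name, w in weights.items()) / total_weight

node_metrics = {"uptime": 0.99, "latency": 0.85, "stake_ratio": 0.6}
weights = {"uptime": 2.0, "latency": 1.0, "stake_ratio": 1.0}
score = composite_score(node_metrics, weights)
```

A subnet could just as easily use a nonlinear model or apply penalties; the only requirement is that every node computes the same score from the same inputs.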
💡 The scoring algorithm is executed in the Consensus class — both by the validator and by attesting peers — to ensure consistency and verifiability.
Tools Available for Scoring
Hypertensor gives you powerful primitives for designing decentralized scoring mechanisms:
Decentralized Storage: Use DHT Record Storage to track task results, reveal hashes, validator scores, and more.
P2P RPC Calls: Use `rpc_*` methods to interact with peers, check their state, request data, or verify task completion.
Blockchain Integration: Access on-chain data such as stake amounts, delegate ratios, role registrations, or governance flags.
P2P Proofs: Require peers to upload data (e.g., inference outputs, hashes, proofs) to the DHT or directly to other peers for validation.
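To make the record-storage pattern concrete, here is a minimal in-memory stand-in for DHT record storage. The class and method names are illustrative; a real subnet would use Hypertensor's DHT, but the key/subkey/expiration pattern is the same:

```python
import time

# Illustrative in-memory stand-in for DHT Record Storage.
# Records are addressed by (key, subkey) and expire at an absolute time.

class RecordStore:
    def __init__(self):
        self._records = {}

    def store(self, key: str, subkey: str, value, expiration: float) -> bool:
        """Store a record under (key, subkey) with an absolute expiration timestamp."""
        self._records[(key, subkey)] = (value, expiration)
        return True

    def get(self, key: str, subkey: str):
        """Return the stored value if present and unexpired, else None."""
        entry = self._records.get((key, subkey))
        if entry is None or entry[1] < time.time():
            return None
        return entry[0]

store = RecordStore()
store.store("epoch_42_scores", "peer_abc", {"score": 0.91}, time.time() + 60)
```

In practice, a subnet might key records by epoch and subkey them by peer ID, so every node can enumerate the scores submitted for a given epoch.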
Design Freedom
There are no enforced on-chain roles in Hypertensor. Every node starts as a general-purpose peer, and the subnet itself defines what each node is responsible for and how they are evaluated.
This flexibility allows you to:
Define custom node types (e.g., validators, trainers, relayers)
Apply dynamic or evolving scoring models
Experiment with staking incentives or decentralized governance
🎯 Your scoring system is not just a metric — it's the mechanism of trust that drives rewards, reputation, and responsibility within the subnet.
Example: Commit-Reveal Scoring (Inference Subnet)
The inference subnet built on the Hypertensor template is a strong example of decentralized coordination using a commit-reveal model. This model ensures fairness, verifiability, and resistance to manipulation during scoring phases for both hosters and validators.
Commit-Reveal Workflow
This subnet divides each epoch into phases, where nodes submit commits (cryptographic hashes of data) followed by reveals (the actual data), enabling transparent scoring without early data leaks.
Step-by-Step Process:
Validator Prompt Commit (0-15%)
At the start of the epoch, the elected validator publishes a randomized prompt tensor to the DHT.
This prompt is validated using a Pydantic schema to ensure proper format and tensor structure.
If the validator doesn't submit one by the 10% mark of the epoch, anyone can take over this task.
Hoster Inference & Commit Phase (15-50%)
Each hoster runs inference on the validator’s prompt.
Instead of revealing the output immediately, each hoster commits a hash of the result (e.g., `SHA256(salt + tensor)`).
This hash is stored in the DHT to prevent tampering or early reveals.
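The commit itself is a simple salted hash. A sketch of how a hoster might build it, assuming the tensor has already been serialized to bytes (the serialization format is subnet-specific):

```python
import hashlib
import os

def make_commit(salt: bytes, tensor_bytes: bytes) -> str:
    """Commit = SHA256(salt + tensor), hex-encoded."""
    return hashlib.sha256(salt + tensor_bytes).hexdigest()

salt = os.urandom(16)                     # kept secret until the reveal phase
output = b"serialized inference tensor"   # stand-in for the real tensor bytes
commitment = make_commit(salt, output)    # this hex digest is published to the DHT
```

Because the salt is random and secret, other hosters cannot brute-force the output from the hash, and the committer cannot later change the output without breaking the match.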
Reveal Phase (50-60%)
After the commit phase ends (based on epoch progress), each hoster reveals their output and salt.
The validator also reveals its own score commit from the previous epoch (i.e., the scores it assigned to each hoster).
These reveals are matched against the original commits to verify integrity before scoring.
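Verifying a reveal is the mirror image of creating the commit: recompute the hash from the revealed salt and output, and compare it to the published commitment. A minimal sketch:

```python
import hashlib

def verify_reveal(commitment: str, salt: bytes, tensor_bytes: bytes) -> bool:
    """A reveal is valid only if it reproduces the committed SHA256(salt + tensor) hash."""
    return hashlib.sha256(salt + tensor_bytes).hexdigest() == commitment

# Example: a commit made earlier in the epoch...
commit = hashlib.sha256(b"s3cret" + b"output").hexdigest()
# ...only matches the genuine salt and output pair.
valid = verify_reveal(commit, b"s3cret", b"output")
tampered = verify_reveal(commit, b"s3cret", b"tampered output")
```

Reveals that fail this check are simply excluded from scoring, so a hoster gains nothing by committing to one output and revealing another.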
Scoring (60-100%)
Hosters are scored based on the distance of their output from the mean tensor across all valid hoster reveals.
Validators are scored based on:
The accuracy of their revealed scores
How closely their scores align with the previous validator's on-chain scores, relative to the attestation ratio
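The hoster-scoring rule can be sketched directly: compute the element-wise mean of all valid revealed outputs, then score each hoster by its distance from that mean. The exact distance metric and score mapping here (L2 distance, `1 / (1 + distance)`) are illustrative choices, not prescribed by the subnet:

```python
import math

def score_hosters(reveals: dict[str, list[float]]) -> dict[str, float]:
    """Score each hoster by closeness of its output vector to the
    element-wise mean of all valid reveals (illustrative metric)."""
    n = len(reveals)
    dim = len(next(iter(reveals.values())))
    mean = [sum(vec[i] for vec in reveals.values()) / n for i in range(dim)]
    scores = {}
    for peer, vec in reveals.items():
        dist = math.sqrt(sum((v - m) ** 2 for v, m in zip(vec, mean)))
        scores[peer] = 1.0 / (1.0 + dist)  # closer to the mean -> higher score
    return scores

scores = score_hosters({"a": [1.0, 1.0], "b": [1.0, 1.0], "c": [3.0, 1.0]})
```

Outliers naturally score lower, which penalizes hosters whose outputs diverge from the honest majority.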
Validator Commit Scores (60-100%)
Validators commit a hash of their scores to the DHT, which is revealed in the following epoch.
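The whole timeline can be summarized as a mapping from epoch progress to phase. The boundary values below come from this example subnet's percentages above; they are not fixed by the protocol, and the phase names are illustrative:

```python
def phase_of(progress: float) -> str:
    """Map epoch progress in [0, 1) to the commit-reveal phase
    (boundaries from this example subnet, not protocol-fixed)."""
    if progress < 0.15:
        return "validator_prompt_commit"
    if progress < 0.50:
        return "hoster_inference_commit"
    if progress < 0.60:
        return "reveal"
    return "scoring_and_validator_commit"
```

Every node derives the current phase from the same on-chain epoch clock, so all peers agree on which actions are valid at any moment.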
Predicate Validator Integration
All commit and reveal actions are validated by a PredicateValidator, which:
Enforces phase correctness (e.g., commits and reveals in specific periods of an epoch)
Ensures each record (commit or reveal) is schema-compliant and authenticated
Prevents out-of-order or invalid submissions
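As a sketch of this idea, a record predicate can combine a phase check with a minimal schema check before accepting a record. The field names and the commit window below are assumptions based on this example subnet, not the actual PredicateValidator API:

```python
# Hypothetical predicate in the spirit of the PredicateValidator described
# above: an incoming commit record is accepted only if it arrives during the
# commit phase and matches the expected shape.

def commit_record_predicate(record: dict, epoch_progress: float) -> bool:
    in_commit_phase = 0.15 <= epoch_progress < 0.50
    has_schema = (
        isinstance(record.get("peer_id"), str)
        and isinstance(record.get("commit_hash"), str)
        and len(record.get("commit_hash", "")) == 64  # hex-encoded SHA-256
    )
    return in_commit_phase and has_schema
```

Because every peer runs the same predicate when accepting DHT records, malformed or out-of-phase submissions are rejected network-wide rather than by any single authority.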
Authentication Requirement
Every reveal must be cryptographically linked to its original commit using the same keypair. This ensures:
No node can forge or steal another node’s output
Scores are only assigned to authenticated participants
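The commit-reveal linkage can be sketched as follows. A real subnet would use asymmetric signatures tied to each peer's keypair; this simplified stand-in uses HMAC with a per-peer secret to play that role, so the names and mechanism here are illustrative only:

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    """Stand-in for a keypair signature: HMAC-SHA256 over the record payload."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def reveal_is_linked(commit_sig: str, reveal_sig: str,
                     commit_payload: bytes, reveal_payload: bytes,
                     secret: bytes) -> bool:
    """Accept a reveal only if both records verify under the same key."""
    return (hmac.compare_digest(commit_sig, sign(secret, commit_payload))
            and hmac.compare_digest(reveal_sig, sign(secret, reveal_payload)))
```

If the reveal's signature does not verify under the committer's key, it is rejected, so one peer cannot claim credit for another peer's committed output.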
Why This Matters
This design enables decentralized, trustless coordination and evaluation of AI work — all without central servers or privileged roles.
🔐 The commit-reveal model ensures fairness, resists manipulation, and enables transparent peer scoring at scale.