Who will guard the guards? Who will check the Checkers? This post is the first to look at how the Checker Network incentivizes truthful checking using reputation, tokenomics, and even ideas from magnetism.
- But First, What Is The Checker Network?
- Extracting Truth in the Checker Network
- Spark’s Basic Mechanics
- The Problem with the Current Checker Model
- Checker Network Innovation: Incentivizing Truth
- Protocols of Truth
- Ising Model Checker Game
- Simplified case: no external information
- External Information: Previous Retrieval Score
- Checker Reputation Algorithm
- Likely truthfulness
- Entropy contrast
- Stacking incentives on top of truth
- More posts like this
In this first litepaper, we will introduce how the Checker Network is thinking about incentivizing truthful checks from its network participants: the Checkers. We will also highlight certain parts, including the Checker tokenomics, that will be followed up on in a longer paper.
But First, What Is The Checker Network?
The Checker Network provides verifiable quality of service checks and reputation data on DePIN nodes and networks. Just think of Google Reviews, but for Web3 nodes, and decentralised.
The Checker Network is structured to host Checker “subnets”. You can view Filecoin Spark, which checks the retrieval success rate of Filecoin, as the first Checker subnet. Structuring the Checker Network in this way enables anyone to build Checker subnets just like Filecoin Spark that can measure, verify and improve web3 networks, whilst rewarding builders, checkers and DePIN providers.
Extracting Truth in the Checker Network
The Filecoin Spark protocol has been incredibly successful in providing data retrieval metrics for Filecoin, which are being used to incentivize higher retrieval success rates. However, as the Spark network (and more generally Checker network) scales, and the stakes become higher, there are clear challenges in incentive alignment. The core issue is that Spark retrievability scores rely on honest reporting from each set of checkers.
Space Meridian and CryptoEconLab are teaming up to re-work Checker incentives from the ground up, focusing on incentivizing truthful reporting. The following few sections will focus on the Spark subnet for simplicity but you can imagine how it would apply more generally to other Checker subnets too.
Spark’s Basic Mechanics
While Filecoin's Proof of Space-time provides guarantees that a storage provider (SP) is storing a specific piece of data, the protocol does not provide guarantees to the user that they will be able to retrieve the given file.
The Spark protocol enhances Filecoin by assigning a retrievability score to SPs, which informs how likely one is to obtain the file from the SP upon request. This data helps Filecoin data clients to build trust in which SPs to store (and retrieve) their files with.
In short, Spark works by randomly assigning a set of "Checkers" to attempt to retrieve a particular piece of data from Filecoin. These checkers then report to the Spark protocol whether they received the file or not. These reports are then used to build the retrievability score for the SP.
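Spark's exact aggregation rule isn't spelled out here; as a minimal sketch, assuming the retrievability score is simply the fraction of checker reports that confirmed a successful retrieval (the function name is ours, not Spark's):

```python
def retrievability_score(reports: list[bool]) -> float:
    """Fraction of checker reports confirming a successful retrieval."""
    if not reports:
        return 0.0
    return sum(reports) / len(reports)

# 10 checkers probe an SP; 9 retrieve the file successfully.
reports = [True] * 9 + [False]
print(retrievability_score(reports))  # 0.9
```

In practice the protocol would weigh reports by checker reputation, which is exactly what the rest of this post builds towards.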
The Problem with the Current Checker Model
The dynamics we have just described produce a desirable outcome (of providing an accurate retrievability score) on the assumption that the majority of checker nodes are acting honestly, as well as on the assumption that the SPs cannot identify requests from checker nodes and serve retrievals only to them.
Once external incentives start weighing in, the honesty assumption starts becoming less reliable. These external influences include for example:
- The cost of checking: it is cheaper for the checker to not bother to retrieve the file, and simply provide a response without actually performing the check.
- Collusion: since having a high Spark retrievability score provides significant value for the SP, they can attempt to collude with checkers to improve their scores.
More problems may emerge as a more general Checker network is built, and there may be more complex Checker x “Checkee” interactions.
Checker Network Innovation: Incentivizing Truth
To address these issues, the Checker protocol is being reworked from the ground up, and built around the new paradigm of incentivizing truthful reporting.
In an ideal world, we would have cryptographic proofs available for any kind of activity the Checker Network is checking, such that Checkers cannot lie. Unfortunately, such proofs exist for storing a file but not for retrieving one. The problem is wider than data retrievability: in general, no such proofs are available for "fair exchange" problems.
With the lack of cryptographic proofs, we focus on economic incentivization for truthful checking. Broadly speaking, we are aiming to achieve two requirements in the design of the protocol:
1. It should be economically favorable for a checker to report with the truth (which prevents misalignment from rational checkers),
2. Dishonest reporting should be detectable and penalizable (which would discourage malicious checkers).
Protocols of Truth
We will now briefly go over the two core ideas that aim to tackle the two requirements above: the Ising Model Checker Game, and the Checker Reputation Algorithm.
Ising Model Checker Game
Let us define the setting for the Ising Model Checker Game (IMCG). In a given round, a set of checkers is randomly assigned by the Spark protocol to attempt to retrieve a specific piece of data. Each checker requests the data from the SP, and either successfully retrieves it or not. Each Checker must then report to the Spark protocol whether they successfully retrieved the file.
That is, each Checker, labeled $i$, returns a binary answer $s_i \in \{-1, +1\}$, with a value of $+1$ if the file was retrieved and $-1$ if it wasn't.
The question is then, how do we encourage the answer from each Checker to be truthful?
Simplified case: no external information
Suppose all Checkers have no external information on the piece of data they are retrieving or the SP storing it. That is, they have no a priori opinion on whether this file is likely to be retrievable or not.
In this scenario, truthful reporting can be incentivized by rewarding checkers that are more aligned with other Checkers. For example, we can propose a reward for Checker $i$ of the form

$$R_i = J \sum_{j \neq i} s_i \, s_j,$$

where $J > 0$ is a coupling parameter. Checker $i$ can then obtain higher rewards by ensuring their vote is aligned with the majority of other Checkers. The higher the coupling parameter $J$, the stronger the incentive for alignment of votes.
In the absence of any external information, the best strategy for a Checker to try to reach alignment is to report truthfully. To maximize their reward, Checker $i$ has to guess which outcome ($-1$ or $+1$) will be most likely, and align with it. The only piece of information available to make this guess is the true outcome of their own retrieval request.
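The alignment reward described above can be sketched as follows, with `J` standing for the coupling parameter (the function name and default value are ours, for illustration only):

```python
def ising_reward(i: int, votes: list[int], J: float = 1.0) -> float:
    """Alignment reward for checker i: J * sum over j != i of s_i * s_j."""
    return J * sum(votes[i] * s for j, s in enumerate(votes) if j != i)

# Three checkers report +1 (file retrieved), one reports -1.
votes = [+1, +1, +1, -1]
print(ising_reward(0, votes))  # 1.0  (two agreements, one disagreement)
print(ising_reward(3, votes))  # -3.0 (disagrees with everyone)
```

Note how the lone dissenter earns the lowest reward: with no prior information about the file, aligning with the majority (and hence voting one's true result) is the reward-maximizing play.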
A reader familiar with the physics of magnetism would notice that this reward function takes the shape of the total energy of the infinite-range Ising model, in which the interaction encourages the alignment of atomic spins and, if it is strong enough, leads to magnetization (global alignment of spins).
External Information: Previous Retrieval Score
The above incentive structure is undermined if the Checker has any information on whether the SP is more or less likely to provide the file when requested.
For example, an SP may have an existing retrievability score that the Checkers have access to. If the SP's score is 90%, the Checker knows alignment is more likely to happen in the direction of $s_i = +1$. In this case, even if they were not able to successfully retrieve the file, they may choose to dishonestly say they did, to increase the chances of alignment.
This influence from the previous retrievability score can be countered by introducing an external magnetic field term into the reward,

$$R_i = J \sum_{j \neq i} s_i \, s_j + h \, s_i.$$

This "magnetic field" term explicitly breaks the symmetry of the reward, giving higher reward if alignment happens in the direction of $\operatorname{sign}(h)$.
For any previous retrieval history of a given SP, the Checker protocol's algorithm can calculate a specific magnetic field term that exactly cancels the influence of the retrieval history. This restores, even in the presence of a previous score, truthful reporting as the reward-maximizing strategy for Checkers.
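The protocol's actual cancellation algorithm is deferred to the full paper; as a toy sketch under a strong simplifying assumption (each of the other $n-1$ checkers votes $+1$ independently with probability $p$), the expected alignment pull is $(n-1)(2p-1)$, and a field of the opposite sign neutralizes it:

```python
def expected_reward(s_i: int, p: float, n: int, J: float, h: float) -> float:
    """Expected reward of vote s_i when each of the other n - 1 checkers
    independently votes +1 with probability p, so that
    E[sum over j != i of s_j] = (n - 1) * (2p - 1)."""
    return s_i * (J * (n - 1) * (2 * p - 1) + h)

J, n, p = 1.0, 11, 0.9          # prior retrievability score of 90%
h = -J * (n - 1) * (2 * p - 1)  # field chosen to cancel the prior's pull

# With the cancelling field, the prior alone no longer favors either vote;
# only the checker's actual retrieval result can break the tie.
print(expected_reward(+1, p, n, J, h))  # 0.0
print(expected_reward(-1, p, n, J, h))
```

This independence assumption is ours; the real protocol has to handle correlated votes, which is presumably where the statistical mechanics of the infinite-range Ising model comes in.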
The mathematical details of this algorithm, relying on an application of the statistical mechanics of the infinite range Ising model, will be published soon in a full length paper.
Checker Reputation Algorithm
The Ising Model Checker Game ensures that in each individual round, the optimal strategy for each checker is to reach alignment with other checkers by responding truthfully. The incentives of rational checkers are thus aligned with the network's goals.
This, however, leaves open the question of malicious checkers, who have external motivations for falsifying their reports. The Checker Reputation algorithm detects whether any checker consistently acts "irrationally", so that their impact on reported metrics can be minimized.
Each checker will be assigned a reputation score, which is based on several critical factors, including in particular:
- Likely truthfulness over time
- Entropy contrast
Having this reputation score allows us to tune the rewards a checker node receives, and so can be used to penalize malicious nodes by reducing their rewards. Identifying disreputable nodes also lets us treat their reports as less trustworthy, reducing the impact they have on, for instance, an SP's retrievability score.
Likely truthfulness
Suppose a set of checkers is asked to retrieve a file from an SP in a given round, and 70% of them report having received the file. That still leaves 30% of checkers with the "unlikely" answer of not having received it. Is that minority lying?
This is impossible to know from a single round: it could very well be that the checker received the file but decided to lie, or that they genuinely did not receive it.
The key insight of the likely truthfulness component is that while it is impossible to know in one particular round whether a checker lied about their result, as they build a voting record over time, it becomes possible to form a stronger picture of whether they have been voting truthfully.
Suppose the experiment we have described is repeated 1,000 times. It would be increasingly unlikely for an honest checker to be always in the minority of the vote, 1,000 times in a row. It is in fact possible to compute the likelihood that the checker's voting record arises from honest reporting.
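As an illustration of that likelihood computation (the protocol's actual test is not specified here), one can model an honest checker's minority appearances as a binomial variable and ask how probable a given streak is. The per-round minority probability of 30% matches the example above:

```python
from math import comb

def minority_streak_pvalue(k: int, n: int, m: float = 0.3) -> float:
    """Probability that an honest checker lands in the minority at least k
    times out of n rounds, if each round it does so with probability m."""
    return sum(comb(n, j) * m**j * (1 - m) ** (n - j) for j in range(k, n + 1))

# Being in the minority in every one of 20 rounds is already vanishingly
# unlikely for an honest checker:
print(minority_streak_pvalue(20, 20))  # 0.3 ** 20, roughly 3.5e-11
```

A small p-value here doesn't prove dishonesty in any single round; it says the overall record is very hard to reconcile with honest reporting.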
As the checker builds a voting record, they build a likely truthfulness score, which can be used to steer network incentives. A checker who is unlikely to have been voting truthfully will receive fewer rewards, and will have less impact on an SP's retrievability score.
Entropy contrast
While the likely truthfulness compares the checker's voting record with the actual resulting votes from each round, the Entropy contrast metric compares the voting record with itself to determine a likelihood of the record arising honestly.
The voting record of a checker that is reporting honestly will have a natural level of "noise". That is, sometimes the vote will be "yes" and sometimes "no", with a certain level of disorder, that can be measured. Entropy is a well established measure which can be applied to measure this type of disorder, or apparent "randomness" in an honest checker's record.
A "lazy" checker could, in the extreme case, always vote "no". At least some of the time, this checker would still agree with the majority vote, so the likely truthfulness score from the previous section is not the best measure of laziness.
A voting record of all "no's" would however have an easily detectable low entropy score. Conversely, an entropy score that is very large can signal that the checker is simply voting randomly. More generally, we can compute the entropy of a checker's voting record, and contrast it with a typical expected entropy of an honest checker. Large entropy contrast can signal either laziness, or other dishonest voting strategies.
Stacking incentives on top of truth
These truthful games form the fundamental building blocks of the Checker protocol's economics, and they will be super-powered when coupled with Checker tokenomics.
Powerful truth-incentivizing protocols allow the Checker token to be deployed in an efficient and impactful manner, and rewards to be cut off swiftly when dishonesty is detected. This leads to lean tokenomics where no token emissions are "wasted" and meaningless inflation is avoided.
Stay tuned for our full tokenomics whitepaper coming soon!