Finality—Part 1: Two Generals
Finality is a four-part exploration into the inner workings and technological significance of the IOTA tangle’s many-worlds Nakamoto consensus mechanism. Visit wiki.iota.org to learn more about the IOTA tangle, or continue with the rest of the Finality series.
What’s referred to in distributed computing as the Two Generals’ Problem is a famous allegorical thought experiment with far-reaching implications for the nature of trust in distributed environments.
Imagine that the supreme commander of an army wishes to overthrow an enemy city, and the city’s surrounding topography is such that a simultaneous attack from opposite ends of the city is required for a successful siege.
Accordingly, the supreme commander splits his men into two armies, appoints an independent commanding general to each army, and orders the two generals to deploy their troops to opposite ends of the city without additional instruction.
By the nature of the assignment, the two generals have been tasked with overthrowing the enemy city, but the supreme commander never provided them with a set time for a coordinated attack.
What level of difficulty do the generals face in coming to an agreement? Can’t one of the generals just send a messenger to the other general disclosing a time of attack? Is a distributed agreement even possible?
Because of the terrain, the two generals have no choice but to send messages to one another through the enemy city, leaving their communications open to interception, tampering, and seizure.
Even with the use of an authentication scheme, such as a digital signature, to provably bind a message to its issuer, reception of a message by either general still requires a potentially infinite chain of follow-up messages to confirm agreement on any previous message.
For example, if general A sends a message to general B proposing a time of attack, then general B must send a second message back to general A to confirm agreement or disagreement.
After receiving the second message, however, general A must then send a third message back to general B confirming receipt of general B’s opinion on the first message.
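The regress described above can be sketched in a few lines of Python. The channel model and function names here are illustrative assumptions, not part of any real protocol: each message crossing enemy territory may or may not arrive, and the generals simply alternate acknowledgments.

```python
def attempt_agreement(channel, max_rounds):
    """Generals alternate acknowledgments of the previous message.

    `channel(n)` returns True if the n-th message crosses enemy territory.
    Returns how many messages got through before one was lost.
    """
    for round_no in range(max_rounds):
        if not channel(round_no):
            return round_no  # the chain breaks; the last sender is left uncertain
    return max_rounds  # even now, the sender of the final message awaits an ack

# A channel that swallows the fourth messenger: the exchange stalls after three.
print(attempt_agreement(lambda n: n < 3, max_rounds=10))  # 3
```

However many rounds are allowed, the sender of the most recent message can never distinguish a lost messenger from a withheld reply, which is the crux of the problem.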
What ruleset could the supreme commander have introduced that would guarantee the two generals agree, and can confirm to one another that they agree, on a set time of attack?
For the purposes here, let’s call the challenge of coming to an agreement without a central authority the distributed consensus problem.
As a simplified representation of a computer network, the Two Generals’ Problem serves as a special case of the distributed consensus problem where the number of network participants is fixed at two, all participant identities are known with certainty, and new participants require permission to join the network.
In a more realistic setting, however, a host of additional challenges arise from the need for some network participants, also known as validators, to vote on the collective network opinion.
Ideally, in a modern distributed network, the number of validators is unspecified, validator identities are not necessarily known to one another, and new participants can join the network without permission.
A truly permissionless and secure distributed environment has no barrier to entry or impediment to use, providing a trustworthy open infrastructure for any party wishing to participate.
Fundamentally, however, there is no way of differentiating between information with integrity and disinformation in a permissionless environment without an ungameable set of voting rules for all honest validators to follow.
If a distributed network rewards malicious actors for following alternative strategies, then the network can never be trusted to reach definitive, irreversible agreements.
For instance, if a permissionless network’s voting protocol counts all validator opinions equally, and the cost of generating a new validator identity is cheap, then an adversary may be incentivized to create any number of identities to disrupt consensus.
In this scenario, the network’s voting system is open to a Sybil attack where new identities can be forged to subvert the network.
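A toy tally makes the attack concrete. Under the assumption that the vote weighs every identity equally and that identities are free to create, a single adversary can out-vote an honest majority (the opinions and counts below are purely illustrative):

```python
from collections import Counter

def majority_vote(votes):
    """One identity, one vote: return the most common opinion."""
    return Counter(votes).most_common(1)[0][0]

honest = ["attack"] * 10    # ten honest validators agree
sybils = ["retreat"] * 11   # one adversary forging eleven identities

print(majority_vote(honest))           # attack
print(majority_vote(honest + sybils))  # retreat
```

The outcome flips not because any honest validator changed its mind, but because the cost of minting a deciding bloc of voters was effectively zero.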
It is arguably impossible for any distributed network to be both permissionless and secure without a Sybil protection mechanism to prevent adversaries from distorting perceptions or gaining undue influence.
Currently, the only practical methods of Sybil protection incorporate scarce resources, such as computational work or staked tokens, that are difficult to obtain and impossible to counterfeit.
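Proof of work, the Sybil protection behind Nakamoto consensus, is one such scarce resource: every identity or message must be backed by computation that is expensive to produce yet trivial to verify. A minimal hashcash-style sketch, with the difficulty parameter chosen purely for illustration:

```python
import hashlib

def verify_pow(message: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, no matter how costly the search was."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

def solve_pow(message: bytes, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest has `difficulty` leading zero bits."""
    nonce = 0
    while not verify_pow(message, nonce, difficulty):
        nonce += 1
    return nonce

nonce = solve_pow(b"attack at dawn", difficulty=12)  # ~4096 hashes on average
assert verify_pow(b"attack at dawn", nonce, 12)
```

Forging a thousand identities now costs a thousand times the work, so influence scales with the scarce resource rather than with the number of names an adversary can invent.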
If you’re enjoying the Finality series, you may also be interested in Layer One, a three-part philosophical journey into the emergence and implications of the IOTA distributed ledger technology and its parallel-reality-based data structure.
🤘 Thank you for reading iologica—the blog that strives to challenge you.
✍️ The Finality series was written in large part based on the commentary of Hans Moog, which is publicly available on Twitter and Medium. Please also acknowledge Holger (Phylo) of the IOTA Foundation, unrecognized_User and Jeroen van den Hout of the IOTA Experience Teams, and John for providing valuable assistance in the creation of the Finality series.
The Content of this article is provided for informational purposes only; you should not construe any such information or other material as legal, tax, investment, financial, or other advice. The Content is not to be construed as a recommendation or solicitation to buy or sell any security, financial product, or instrument, or to participate in any particular trading strategy.