Inside the blockchain developer’s mind: Proof-of-burn blockchain consensus

Cointelegraph is following the development of an entirely new blockchain from inception to mainnet and beyond through its series, Inside the Blockchain Developer’s Mind. In previous parts, Andrew Levine of Koinos Group discussed some of the challenges the team has faced since identifying the key issues they intend to solve, and outlined three of the “crises” that are holding back blockchain adoption: upgradeability, scalability and governance. This series is focused on consensus algorithms: part one is about proof-of-work, part two is about proof-of-stake and part three is about proof-of-burn.

In the first article in the series, I explored proof-of-work (PoW) — the OG consensus algorithm — and explained how it works to bootstrap decentralization but also why it is inefficient. In the second article, I explored proof-of-stake (PoS) and how it is good for lowering the operating costs of a decentralized network relative to proof-of-work, but also why it further entrenches miners, requires complex and ethically questionable slashing conditions and fails to prevent “exchange attacks.”

In this article, I will explain the third consensus algorithm that was proposed about a year after proof-of-stake but, for reasons that should become clear, has never actually been implemented as a consensus algorithm on a general purpose blockchain. At least, not until now.

Proof-of-work

As I explained in the first article, from a game-theoretical perspective blockchains are a game in which players compete to validate transactions by grouping them into blocks that match the blocks of transactions being created by other players. Bitcoin (BTC) works by assigning more weight to blocks produced by people who have probably sacrificed more capital, which they “prove” through “work.”

Since these people have already spent their money to acquire hardware and run it to produce blocks, punishing them is simple: in effect, they have already been punished upfront. Proof-of-stake, however, operates in a fundamentally different way that has important game-theoretical consequences.

Proof-of-stake

Instead of forcing block producers to sacrifice capital by acquiring and running hardware in order to earn block rewards, proof-of-stake requires token holders only to sacrifice the liquidity of their capital. The problem is that this decreases network security, because an attacker need only acquire 51% of the platform’s base currency and stake it to take control of the network.

To thwart this attack, PoS systems must implement complicated mechanisms designed to “slash” block rewards from user accounts, which add to the computational overhead of the network, raise legitimate ethical concerns and only work if the attacker fails to acquire 51% of the token supply. Implementing these slashing conditions is by no means trivial, which is why proof-of-stake projects like Solana have, by their own admission, launched with centralized solutions in place, and why other projects like Ethereum 2.0 (Eth2) are taking so long to implement PoS. The typical solution is to give a foundation a large enough stake that it alone has the power to determine who is a malicious actor and slash their rewards.

This is especially problematic in a world with centralized exchanges that offer custodial staking, which means an exchange can find itself in control of over 51% of a given token supply without having incurred any risk, making the cost of an attack de minimis. In fact, this has already happened in recent history on one of the most used blockchains in the world, at one time valued at nearly $2 billion: Steem.

Related: Proof-of-stake vs. proof-of-work: Differences explained

Holy Grail consensus

As I said at the end of my last article, the question I will be exploring here is whether there is a “best-of-both-worlds” solution that delivers the decentralization and security of proof-of-work with the efficiency of proof-of-stake. Today, we are excited to announce the release of our white paper on proof-of-burn, in which we argue that proof-of-burn is exactly that best-of-both-worlds solution.

Iain Stewart proposed proof-of-burn in 2012 — a year after proof-of-stake — as a thought experiment designed to contrast the differences between proof-of-work and proof-of-stake. We believe that he unwittingly discovered the “holy grail” of consensus algorithms that got lost in the sands of time due largely to historical accidents. As Iain Stewart noted:

“I thought it would be interesting to invent a task that is absolutely, nakedly, unambiguously an example of the contrast between the two viewpoints. And yes, there is one: burning the currency!”

The exchange attack

As the former core development team behind the Steem blockchain, we have intimate experience with exchange attacks. That is why mitigating this attack vector was of the utmost importance, and why it inspired blockchain architect Steve Gerbino to explore alternative consensus algorithms in search of a solution that would still give us the performance and efficiency necessary for a high-performance world computer while closing off this line of attack.

Proof-of-burn as a consensus algorithm is remarkably simple, and its unique value is easy to understand. Like proof-of-work, it requires that the cost of attacking the network be paid “upfront.” Like proof-of-stake, it requires no actual hardware to be purchased and run aside from the hardware needed to produce blocks. And like proof-of-work, it thwarts the exchange attack because block producers have already spent their money and are simply trying to earn it back by maintaining a correct ledger.

In order to mount a 51% attack, a malicious actor doesn’t just need to acquire 51% of the token supply; they need to provably dispose of it by acquiring virtual mining hardware. The only way to recoup that loss is by producing blocks on the chain that ultimately wins. It’s a remarkably simple and elegant solution to the problem. There is no need for slashing conditions because the block producer effectively slashed their own stake at the very beginning.
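
To make the economics concrete, here is a minimal sketch of the attacker’s position under proof-of-burn. The numbers, the function names and the assumption that block rewards are shared in proportion to burned weight are all illustrative; they are not drawn from the Koinos implementation.

```python
# Illustrative sketch of the attacker's economics under proof-of-burn.
# All figures and the pro-rata reward assumption are hypothetical.

def attack_burn_tokens(total_supply: float) -> float:
    """Tokens an attacker must burn (destroy) upfront to hold 51% of the
    burned weight, assuming no one else burns more in response."""
    return total_supply * 0.51


def blocks_to_break_even(burned: float, reward_per_block: float,
                         attacker_block_share: float) -> float:
    """Blocks the attacker must produce on the winning chain to recoup the burn."""
    return burned / (reward_per_block * attacker_block_share)


if __name__ == "__main__":
    burn = attack_burn_tokens(total_supply=100_000_000)
    blocks = blocks_to_break_even(burn, reward_per_block=10.0, attacker_block_share=0.51)
    print(f"Upfront, unrecoverable burn: {burn:,.0f} tokens")
    print(f"Blocks of honest production needed to break even: {blocks:,.0f}")
```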

Proof-of-burn

Iain Stewart proposed proof-of-burn for Bitcoin a year before a general purpose blockchain was even conceived of by Vitalik Buterin. Perhaps that is why it has taken this long for people to realize how well these two things work together. General purpose blockchains place a high premium on efficiency while allowing for token economic designs without max supply caps, a requirement for proof-of-burn implementations. Part of the problem might also have been that several innovative concepts, like nonfungible tokens (NFTs) and market makers, and solutions such as upgradeable smart contracts, are extremely beneficial to the implementation and only emerged after the proposal.

NFT miners

Keeping track of which accounts have burned what amounts, and when they burned them, can be a computationally demanding task, and this increased load on the network could be one of the reasons people have avoided this implementation.

Fortunately, nonfungible tokens provide us with a powerful primitive that the system can use to efficiently keep track of all of this information for the purpose of distributing block rewards to valid block producers. The end result is an NFT that effectively functions as a virtual miner, but one that is infinitely and precisely customizable.
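
As a rough illustration of what such a record might hold (the field names here are assumptions made for the example, not the Koinos data model), a miner NFT only needs to remember who burned, how much, when, and how much has been paid back so far:

```python
from dataclasses import dataclass


@dataclass
class MinerNFT:
    """Hypothetical record for a virtual miner minted by burning tokens."""
    owner: str             # account that performed the burn
    burned_amount: int     # tokens destroyed to mint this miner
    burn_timestamp: int    # when the burn occurred
    rewards_paid: int = 0  # block rewards credited to the owner so far

    def outstanding(self) -> int:
        """Tokens still owed before the original burn is recouped (ignoring any premium)."""
        return max(self.burned_amount - self.rewards_paid, 0)
```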

Blockchain developers can precisely regulate the accessibility of their platforms based on how they price their miner NFTs. Pricing the miners high would be like requiring the purchase of ASICs (specialized mining machines) in order to participate in block production. Pricing the miners low would be like allowing anyone to mine on commodity hardware. But the best part is that no actual hardware is required either way.

Since Koinos is all about accessibility, miner NFTs will likely have a low price, which is effectively like having the most GPU- and ASIC-resistant algorithm possible. But this raises the question: “What if you pick the wrong number?” This highlights the importance of modular upgradeability. On Koinos, all business logic is implemented as smart contract modules which are individually upgradeable without a hard fork. This means that if, for example, the price of KOIN were to explode to the degree that the fixed cost of miners was no longer sufficiently accessible, governance could simply vote to lower that cost, and the number would be updated the moment there was consensus.
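
A toy sketch of that kind of governance-gated parameter, purely hypothetical and not how Koinos contract modules are actually written, might look like this:

```python
class MinerPriceModule:
    """Toy model of an upgradeable module holding the fixed miner NFT price."""

    def __init__(self, price: int):
        self.price = price  # cost of one miner NFT, in tokens

    def set_price(self, new_price: int, governance_approved: bool) -> None:
        # In a real system the governance check would be enforced on-chain,
        # not passed in as a boolean flag.
        if not governance_approved:
            raise PermissionError("changing the miner price requires a governance vote")
        self.price = new_price
```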

Centralization resistance

Fixing the cost of miner NFTs is like building the most GPU- and ASIC-resistant algorithm possible because no one can gain an advantage by acquiring specialized hardware. Better yet, it makes the miner NFTs more uniform and therefore easier to sell (more fungible) on a decentralized exchange, meaning that block producers are taking on less risk because they can always liquidate their miners.

The power of proof-of-burn ultimately stems from the fact that we are internalizing the mining hardware to the system. It is virtual hardware, which means that it is infinitely customizable by the system designers to maximize the performance of the network. One consequence of this is that the system can be designed to ensure that the miner will earn back their burn plus some additional tokens — a guarantee that cannot be made by proof-of-work systems.

This customizability also allows us to mitigate 51% attacks by designing the system so that as the demand for miners increases, the payback period gets extended.
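
One hypothetical way to express that relationship, assuming a fixed per-block issuance split pro rata by burned weight (the actual Koinos schedule is a design decision not specified here), is sketched below:

```python
def payback_blocks(my_burn: float, total_burn: float, issuance_per_block: float) -> float:
    """Blocks needed to recoup a burn if each block's issuance is split
    in proportion to every miner's share of the total amount burned."""
    my_share = my_burn / total_burn
    reward_per_block = issuance_per_block * my_share
    return my_burn / reward_per_block  # simplifies to total_burn / issuance_per_block


# As demand for miners grows, total_burn grows and every payback window stretches.
print(payback_blocks(my_burn=1_000, total_burn=100_000, issuance_per_block=10))  # 10000.0
print(payback_blocks(my_burn=1_000, total_burn=200_000, issuance_per_block=10))  # 20000.0
```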

Now, imagine that someone (like an exchange) wants to take over block production. First, they would need to burn more tokens than everyone else combined. Even then, they will have gotten nothing for it. They will need to begin producing blocks on the winning chain to begin to earn back their rewards. During that time, other network participants would be able to see what is happening and respond accordingly. If they feel that the actor is attempting to take control of governance, they can simply purchase more miners, pushing back the payback window for the malicious actor until they “get in line.”

Token economics

Proof-of-burn also has interesting economic properties that separate it from both PoW and PoS. For example, if you were to fix the rate of new token creation (a.k.a. “inflation”), then at a certain point, if too many people were to participate in block production, the token economy would turn deflationary because rewards would be pushed back faster than new tokens were being created. This could provide performance benefits to the network, if necessary.

Too many people producing blocks can negatively impact latency. This deflationary component would serve to dynamically disincentivize excessive block production while also providing the ecosystem with an important economic lever: deflation.
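
A back-of-the-envelope example of that crossover, with entirely made-up numbers: once the tokens burned for new miners outpace the fixed issuance, net supply shrinks.

```python
# Hypothetical figures showing when a fixed-issuance proof-of-burn economy turns deflationary.
new_tokens_per_day = 10_000   # fixed issuance ("inflation") schedule
burn_per_new_miner = 500      # fixed price of a miner NFT
new_miners_per_day = 30       # demand for block production

burned_per_day = burn_per_new_miner * new_miners_per_day   # 15,000 tokens destroyed daily
net_supply_change = new_tokens_per_day - burned_per_day    # -5,000 -> deflationary
print(f"Net daily supply change: {net_supply_change:+,} tokens")
```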

It was my goal with this series to give the reader an insanely deep understanding of the topic of consensus algorithms in a way that was still accessible and, hopefully, interesting. We’ve covered the historical arc of the major consensus algorithms and what I think is the next evolution: proof-of-burn. I hope that you are now equipped to evaluate different consensus implementations for yourself and come to your own conclusions about what is innovative and what is not.

The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Andrew Levine is the CEO of Koinos Group, a team of industry veterans accelerating decentralization through accessible blockchain technology. Their foundational product is Koinos, a fee-less and infinitely upgradeable blockchain with universal language support.


