Ethereum Blog

Understanding Serenity, Part 2: Casper

Vitalik Buterin

Special thanks to Vlad Zamfir for introducing the idea of by-block consensus and convincing me of its merits, alongside many of the other core ideas of Casper, and to Vlad Zamfir and Greg Meredith for their continued work on the protocol.

In the last post in this series, we discussed one of the two flagship feature sets of Serenity: a heightened degree of abstraction that greatly increases the flexibility of the platform and takes a large step in moving Ethereum from “Bitcoin plus Turing-complete” to “general-purpose decentralized computation”. Now, let us turn our attention to the other flagship feature, and the one for which the Serenity milestone was originally created: the Casper proof of stake algorithm.

Consensus By Bet

The keystone mechanism of Casper is the introduction of a fundamentally new philosophy in the field of public economic consensus: the concept of consensus-by-bet. The core idea of consensus-by-bet is simple: the protocol offers opportunities for validators to bet against the protocol on which blocks are going to be finalized. A bet on some block X in this context is a transaction which, by protocol rules, gives the validator a reward of Y coins (which are simply printed to give to the validator out of thin air, hence “against the protocol”) in all universes in which block X was processed but which gives the validator a penalty of Z coins (which are destroyed) in all universes in which block X was not processed.

The validator will wish to make such a bet only if they believe block X is likely enough to be processed in the universe that people care about that the tradeoff is worth it. And then, here’s the economically recursive fun part: the universe that people care about, ie. the state that users’ clients show when users want to know their account balance, the status of their contracts, etc, is itself derived by looking at which blocks people bet on the most. Hence, each validator’s incentive is to bet in the way that they expect others to bet in the future, driving the process toward convergence.

A helpful analogy here is to look at proof of work consensus – a protocol which seems highly unique when viewed by itself, but which can in fact be perfectly modeled as a very specific subset of consensus-by-bet. The argument is as follows. When you are mining on top of a block, you are expending electricity costs E per second in exchange for receiving a chance p per second of generating a block and receiving R coins in all forks containing your block, and zero rewards in all other chains:

Hence, every second, you receive an expected gain of p*R-E on the chain you are mining on, and take a loss of E on all other chains; this can be interpreted as taking a bet at E:p*R-E odds that the chain you are mining on will “win”; for example, if p is 1 in 1 million, R is 25 BTC ~= $10000 USD and E is $0.007, then your gains per second on the winning chain are 0.000001 * 10000 - 0.007 = 0.003, your losses on the losing chain are the electricity cost of 0.007, and so you are betting at 7:3 odds (or 70% probability) that the chain you are mining on will win. Note that proof of work satisfies the requirement of being economically “recursive” in the way described above: users’ clients will calculate their balances by processing the chain that has the most proof of work (ie. bets) behind it.
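To make the arithmetic concrete, here is a minimal sketch of this implied bet, using the illustrative numbers from the example above (p, R and E are those example values, not protocol constants):

    # Proof of work viewed as an implicit bet (illustrative numbers only).
    p = 0.000001   # chance per second of finding a block
    R = 10000.0    # block reward in USD (25 BTC ~= $10000)
    E = 0.007      # electricity cost per second in USD

    gain_if_win = p * R - E    # expected net gain per second on the winning chain
    loss_if_lose = E           # pure electricity loss on every other chain
    implied_prob = loss_if_lose / (gain_if_win + loss_if_lose)

    print(gain_if_win)     # ~0.003
    print(loss_if_lose)    # 0.007
    print(implied_prob)    # ~0.7, ie. betting at 7:3 odds that this chain wins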

Consensus-by-bet can be seen as a framework that encompasses this way of looking at proof of work, and yet also can be adapted to provide an economic game to incentivize convergence for many other classes of consensus protocols. Traditional Byzantine-fault-tolerant consensus protocols, for example, tend to have a concept of “pre-votes” and “pre-commits” before the final “commit” to a particular result; in a consensus-by-bet model, one can make each stage be a bet, so that participants in the later stages will have greater assurance that participants in the earlier stages “really mean it”.

It can also be used to incentivize correct behavior in out-of-band human consensus, if that is needed to overcome extreme circumstances such as a 51% attack. If someone buys up half the coins on a proof-of-stake chain and attacks it, then the community simply needs to coordinate on a patch where clients ignore the attacker’s fork, and the attacker and anyone who plays along with the attacker automatically lose all of their coins. A very ambitious goal would be to generate these forking decisions automatically by online nodes – if done successfully, this would also subsume into the consensus-by-bet framework the underappreciated but important result from traditional fault tolerance research that, under strong synchrony assumptions, even if almost all nodes are trying to attack the system the remaining nodes can still come to consensus.

In the context of consensus-by-bet, different consensus protocols differ in only one way: who is allowed to bet, at what odds and how much? In proof of work, there is only one kind of bet offered: the ability to bet on the chain containing one’s own block at odds E:p*R-E. In generalized consensus-by-bet, we can use a mechanism known as a scoring rule to essentially offer an infinite number of betting opportunities: one infinitesimally small bet at 1:1, one infinitesimally small bet at 1.000001:1, one infinitesimally small bet at 1.000002:1, and so forth.


A scoring rule as an infinite number of bets.

One can still decide exactly how large these infinitesimal marginal bets are at each probability level, but in general this technique allows us to elicit a very precise reading of the probability with which some validator thinks some block is likely to be confirmed. If a validator thinks that a block will be confirmed with probability 90%, then they will accept all of the bets below 9:1 odds and none of the bets above 9:1 odds, and seeing this the protocol can infer the validator’s “opinion” that the chance the block will be confirmed is exactly 90%. In fact, the revelation principle tells us that we may as well ask the validators to supply a signed message containing their “opinion” on the probability that the block will be confirmed directly, and let the protocol calculate the bets on the validator’s behalf.



Thanks to the wonders of calculus, we can actually come up with fairly simple functions to compute a total reward and penalty at each probability level that are mathematically equivalent to summing an infinite set of bets at all probability levels below the validator’s stated confidence. A fairly simple example is s(p) = p/(1-p) and f(p) = (p/(1-p))^2/2 where s computes your reward if the event you are betting on takes place and f computes your penalty if it does not.
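The stated rule is easy to write down directly; a quick calculus check shows that a validator whose true belief is q maximizes the expected payout q*s(p) - (1-q)*f(p) exactly at p = q, so honest reporting is optimal. A minimal sketch:

    def bet_payout(p, block_processed):
        # Example scoring rule from above: reward s(p) = p/(1-p) if the
        # event happens, penalty f(p) = (p/(1-p))**2/2 if it does not.
        odds = p / (1.0 - p)
        return odds if block_processed else -odds ** 2 / 2.0

    print(bet_payout(0.99, True))    # ~99 points gained
    print(bet_payout(0.99, False))   # ~-4900.5 points lost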

A key advantage of the generalized approach to consensus-by-bet is this. In proof of work, the amount of “economic weight” behind a given block increases only linearly with time: if a block has six confirmations, then reverting it only costs miners (in equilibrium) roughly six times the block reward, and if a block has six hundred confirmations then reverting it costs six hundred times the block reward. In generalized consensus-by-bet, the amount of economic weight that validators throw behind a block could increase exponentially: if most of the other validators are willing to bet at 10:1, you might be comfortable sticking your neck out at 20:1, and once almost everyone bets 20:1 you might go for 40:1 or even higher. Hence, a block may well reach a level of “de-facto complete finality”, where validators’ entire deposits are at stake backing that block, in as little as a few minutes, depending on how brave the validators are (and how much the protocol incentivizes them to be).


50000-foot view summary: the blockchain is a prediction market on itself.


Blocks, Chains and Consensus as Tug of War

Another unique component of the way that Casper does things is that rather than consensus being by-chain as is the case with current proof of work protocols, consensus is by-block: the consensus process comes to a decision on the status of the block at each height independently of every other height. This mechanism does introduce some inefficiencies – particularly, a bet must register the validator’s opinion on the block at every height rather than just the head of the chain – but it proves to be much simpler to implement strategies for consensus-by-bet in this model, and it also has the advantage that it is much more friendly to high blockchain speed: theoretically, one can even have a block time that is faster than network propagation with this model, as blocks can be produced independently of each other, though with the obvious proviso that block finalization will still take a while longer.

In by-chain consensus, one can view the consensus process as being a kind of tug-of-war between negative infinity and positive infinity at each fork, where the “status” at the fork represents the number of blocks in the longest chain on the right side minus the number of blocks on the left side:

Clients trying to determine the “correct chain” simply move forward starting from the genesis block, and at each fork go left if the status is negative and right if the status is positive. The economic incentives here are also clear: once the status goes positive, there is a strong economic pressure for it to converge to positive infinity, albeit very slowly. If the status goes negative, there is a strong economic pressure for it to converge to negative infinity.
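A minimal sketch of that client walk (the status and children lookups here are hypothetical stand-ins, not a real client API):

    def head(genesis, status, children):
        # Walk forward from the genesis block; at each fork, go left if the
        # status is negative and right if it is positive.
        block = genesis
        while block in children:
            left, right = children[block]
            block = right if status[block] > 0 else left
        return block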

Incidentally, note that under this framework the core idea behind the GHOST scoring rule becomes a natural generalization – instead of just counting the length of the longest chain toward the status, count every block on each side of the fork:

In by-block consensus, there is once again the tug of war, though this time the “status” is simply an arbitrary number that can be increased or decreased by certain actions connected to the protocol; at every block height, clients process the block if the status is positive and do not process the block if the status is negative. Note that even though proof of work is currently by-chain, it doesn’t have to be: one can easily imagine a protocol where instead of providing a parent block, a block with a valid proof of work solution must provide a +1 or -1 vote on every block height in its history; +1 votes would be rewarded only if the block that was voted on does get processed, and -1 votes would be rewarded only if the block that was voted on does not get processed:

Of course, in proof of work such a design would not work well for one simple reason: if you have to vote on absolutely every previous height, then the amount of voting that needs to be done will increase quadratically with time and fairly quickly grind the system to a halt. With consensus-by-bet, however, because the tug of war can converge to complete finality exponentially, the voting overhead is much more tolerable.

One counterintuitive consequence of this mechanism is the fact that a block can remain unconfirmed even when blocks after that block are completely finalized. This may seem like a large hit in efficiency, as if there is one block whose status is flip-flopping with ten blocks on top of it then each flip would entail recalculating state transitions for an entire ten blocks, but note that in a by-chain model the exact same thing can happen between chains as well, and the by-block version actually provides users with more information: if their transaction was confirmed and finalized in block 20101, and they know that regardless of the contents of block 20100 that transaction will have a certain result, then the result that they care about is finalized even though parts of the history before the result are not. By-chain consensus algorithms can never provide this property.

So how does Casper work anyway?

In any security-deposit-based proof of stake protocol, there is a current set of bonded validators, which is kept track of as part of the state; in order to make a bet or take one of a number of critical actions in the protocol, you must be in the set so that you can be punished if you misbehave. Joining the set of bonded validators and leaving the set of bonded validators are both special transaction types, and critical actions in the protocol such as bets are also transaction types; bets may be transmitted as independent objects through the network, but they can also be included into blocks.

In keeping with Serenity’s spirit of abstraction, all of this is implemented via a Casper contract, which has functions for making bets, joining, withdrawing, and accessing consensus information, and so one can submit bets and take other actions simply by calling the Casper contract with the desired data. The state of the Casper contract looks as follows:

The contract keeps track of the current set of validators, and for each validator it keeps track of six primary things:

  • The return address for the validator’s deposit
  • The current size of the validator’s deposit (note that the bets that the validator makes will increase or decrease this value)
  • The validator’s validation code
  • The sequence number of the most recent bet
  • The hash of the most recent bet
  • The validator’s opinion table

The concept of “validation code” is another abstraction feature in Serenity; whereas other proof of stake protocols require validators to use one specific signature verification algorithm, the Casper implementation in Serenity allows validators to specify a piece of code that accepts a hash and a signature and returns 0 or 1, and before accepting a bet checks the hash of the bet against its signature. The default validation code is an ECDSA verifier, but one can also experiment with other verifiers: multisig, threshold signatures (potentially useful for creating decentralized stake pools!), Lamport signatures, etc.

Every bet must contain a sequence number one higher than the previous bet, and every bet must contain a hash of the previous bet; hence, one can view the series of bets made by a validator as being a kind of “private blockchain”; viewed in that context, the validator’s opinion is essentially the state of that chain. An opinion is a table that describes:

  • What the validator thinks the most likely state root is at any given block height
  • What the validator thinks the most likely block hash is at any given block height (or zero if no block hash is present)
  • How likely the block with that hash is to be finalized

A bet is an object that looks like this:

The key information is the following:

  • The sequence number of the bet
  • The hash of the previous bet
  • A signature
  • A list of updates to the opinion
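As a sketch, such a bet object might look as follows (the field names are illustrative, not the actual Serenity encoding):

    from dataclasses import dataclass, field

    @dataclass
    class Bet:
        seq: int               # sequence number, one higher than the previous bet
        prev_bet_hash: bytes   # hash of the validator's previous bet
        signature: bytes       # checked by the validator's validation code
        # opinion updates: block height -> (probability, block hash, state root)
        updates: dict = field(default_factory=dict)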

The function in the Casper contract that processes a bet has three parts to it. First, it validates the sequence number, previous hash and signature of a bet. Next, it updates the opinion table with any new information supplied by the bet. A bet should generally update a few very recent probabilities, block hashes and state roots, so most of the table will generally be unchanged. Finally, it applies the scoring rule to the opinion: if the opinion says that you believe that a given block has a 99% chance of finalization, and if, in the particular universe that this particular contract is running in, the block was finalized, then you might get 99 points; otherwise you might lose 4900 points.
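Putting the three steps together, a hedged sketch (the validator record, hash_of and the finalized lookup are hypothetical helpers, and bet_payout is the scoring-rule sketch from earlier):

    def process_bet(validator, bet, finalized):
        # 1. Validate sequence number, previous hash and signature.
        assert bet.seq == validator.seq + 1
        assert bet.prev_bet_hash == validator.last_bet_hash
        assert validator.validation_code(hash_of(bet), bet.signature) == 1
        validator.seq = bet.seq
        validator.last_bet_hash = hash_of(bet)

        # 2. Merge the new information into the stored opinion table.
        validator.opinion.update(bet.updates)

        # 3. Apply the scoring rule at every height this universe has decided.
        for height, (prob, block_hash, state_root) in bet.updates.items():
            if height in finalized:
                processed = (finalized[height] == block_hash)
                validator.deposit += bet_payout(prob, processed)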

Note that, because the process of running this function inside the Casper contract takes place as part of the state transition function, this process is fully aware of what every previous block and state root is at least within the context of its own universe; even if, from the point of view of the outside world, the validators proposing and voting on block 20125 have no idea whether or not block 20123 will be finalized, when the validators come around to processing that block they will know – or, perhaps, they might process both universes and only later decide to stick with one. In order to prevent validators from providing different bets to different universes, we have a simple slashing condition: if you make two bets with the same sequence number, or even if you make a bet that you cannot get the Casper contract to process, you lose your entire deposit.

Withdrawing from the validator pool takes two steps. First, one must submit a bet whose maximum height is -1; this automatically ends the chain of bets and starts a four-month countdown timer (20 blocks / 100 seconds on the testnet) before the bettor can recover their funds by calling a third method, withdraw. Withdrawing can be done by anyone, and sends funds back to the same address that sent the original join transaction.

Block proposition

A block contains (i) a number representing the block height, (ii) the proposer address, (iii) a transaction root hash and (iv) a signature. For a block to be valid, the proposer address must be the same as the validator that is scheduled to generate a block for the given height, and the signature must validate when run against the validator’s own validation code. The time to submit a block at height N is determined by T = G + N * 5 where G is the genesis timestamp; hence, a block should ordinarily appear every five seconds.
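In code, the timing and validity rules are simply (hash_of is again a hypothetical helper):

    BLOCK_TIME = 5  # seconds per height

    def proposal_time(genesis_timestamp, height):
        # T = G + N * 5: when the block at height N should appear.
        return genesis_timestamp + height * BLOCK_TIME

    def is_valid(block, scheduled_proposer):
        # Correct proposer, and a signature accepted by their validation code.
        return (block.proposer == scheduled_proposer.address and
                scheduled_proposer.validation_code(hash_of(block), block.signature) == 1)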

An NXT-style random number generator is used to determine who can generate a block at each height; essentially, this involves taking missing block proposers as a source of entropy. The reasoning behind this is that even though this entropy is manipulable, manipulation comes at a high cost: one must sacrifice one’s right to create a block and collect transaction fees in order to manipulate it. If it is deemed absolutely necessary, the cost of manipulation can be increased several orders of magnitude further by replacing the NXT-style RNG with a RANDAO-like protocol.

The Validator Strategy

So how does a validator operate under the Casper protocol? Validators have two primary categories of activity: making blocks and making bets. Making blocks is a process that takes place independently from everything else: validators gather transactions, and when it comes time for them to make a block, they produce one, sign it and send it out to the network. The process for making bets is more complicated. The current default validator strategy in Casper is one that is designed to mimic aspects of traditional Byzantine-fault-tolerant consensus: look at how other validators are betting, take the 33rd percentile, and move a step toward 0 or 1 from there.

To accomplish this, each validator collects and tries to stay as up-to-date as possible on the bets being made by all other validators, and keeps track of the current opinion of each one. If there are no or few opinions on a particular block height from other validators, then it follows an initial algorithm that looks roughly as follows:

  • If the block is not yet present, but the current time is still very close to the time that the block should have been published, bet 0.5
  • If the block is not yet present, but a long time has already passed since the block should have been published, bet 0.3
  • If the block is present, and it arrived on time, bet 0.7
  • If the block is present, but it arrived either far too early or far too late, bet 0.3

Some randomness is added in order to help prevent “stuck” scenarios, but the basic principle remains the same.
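A sketch of this initial algorithm, with the tolerance and the amount of randomness chosen purely for illustration:

    import random

    GRACE = 5  # seconds; illustrative tolerance for "on time"

    def initial_bet(block, now, scheduled_time):
        if block is None:
            # Not yet present: 0.5 while still close to the scheduled time, 0.3 after.
            base = 0.5 if now - scheduled_time < GRACE else 0.3
        else:
            # Present: 0.7 if it arrived on time, 0.3 if far too early or too late.
            on_time = abs(block.arrival_time - scheduled_time) < GRACE
            base = 0.7 if on_time else 0.3
        # Small jitter to help avoid "stuck" scenarios.
        return base + random.uniform(-0.05, 0.05)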

If there are already many opinions on a particular block height from other validators, then we take the following strategy:

  • Let L be the value such that two thirds of validators are betting higher than L. Let M be the median (ie. the value such that half of validators are betting higher than M). Let H be the value such that two thirds of validators are betting lower than H.
  • Let e(x) be a function that makes x more “extreme”, ie. pushes the value away from 0.5 and toward 0 or 1. A simple example is the piecewise function e(x) = 0.5 + x / 2 if x > 0.5 else x / 2.
  • If L > 0.8, bet e(L)
  • If H < 0.2, bet e(H)
  • Otherwise, bet e(M), though limit the result to the range [0.15, 0.85] so that a coalition of less than two thirds of validators cannot force another validator to move their bets too far

Validators are free to choose their own level of risk aversion within the context of this strategy by choosing the shape of e. A function where e(x) = 0.99999 for x > 0.8 could work (and would in fact likely provide the same behavior as Tendermint) but it creates somewhat higher risks and allows hostile validators making up a large portion of the bonded validator set to trick these validators into losing their entire deposit at a low cost (the attack strategy would be to bet 0.9, trick the other validators into betting 0.99999, and then jump back to betting 0.1 and force the system to converge to zero). On the other hand, a function that converges very slowly will incur higher inefficiencies when the system is not under attack, as finality will come more slowly and validators will need to keep betting on each height longer.
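A sketch of the whole strategy, using the simple piecewise e(x) from above (the percentile indexing is a simplification that assumes a reasonably large list of opinions):

    def e(x):
        # Push x away from 0.5 and toward 0 or 1.
        return 0.5 + x / 2 if x > 0.5 else x / 2

    def next_bet(opinions):
        # opinions: the other validators' current probabilities for this height.
        opinions = sorted(opinions)
        n = len(opinions)
        L = opinions[n // 3]        # two thirds of validators bet higher than this
        M = opinions[n // 2]        # the median
        H = opinions[2 * n // 3]    # two thirds of validators bet lower than this
        if L > 0.8:
            return e(L)
        if H < 0.2:
            return e(H)
        # Clamp so a sub-two-thirds coalition cannot drag the bet too far.
        return min(max(e(M), 0.15), 0.85)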

Now, how does a client determine what the current state is? Essentially, the process is as follows. It starts off by downloading all blocks and all bets. It then uses the same algorithm as above to construct its own opinion, but it does not publish it. Instead, it simply looks at each height sequentially, processing a block if its probability is greater than 0.5 and skipping it otherwise; the state after processing all of these blocks is shown as the “current state” of the blockchain. The client can also provide a subjective notion of “finality”: when the opinion at every height up to some k is either above 99.999% or below 0.001%, then the client considers the first k blocks finalized.
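A sketch of that client logic (apply_block and the inputs are hypothetical):

    def client_view(probs, blocks, state, apply_block):
        # Process each height whose probability exceeds 0.5, and count how
        # many leading heights already clear the finality thresholds.
        finalized_upto, still_final = 0, True
        for height, p in enumerate(probs):
            if p > 0.5:
                state = apply_block(state, blocks[height])
            if still_final and (p > 0.99999 or p < 0.00001):
                finalized_upto = height + 1
            else:
                still_final = False
        return state, finalized_upto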

Further Research

There is still quite a bit of research to do for Casper and generalized consensus-by-bet. Particular points include:

  • Coming up with results to show that the system economically incentivizes convergence, even in the presence of some quantity of Byzantine validators
  • Determining optimal validator strategies
  • Making sure that the mechanism for including the bets in blocks is not exploitable
  • Increasing efficiency. Currently, the POC1 simulation can handle ~16 validators running at the same time (up from ~13 a week ago), though ideally we should push this up as much as possible (note that the number of validators the system can handle on a live network should be roughly the square of the performance of the POC, as the POC runs all nodes on the same machine).

The next article in this series will deal with efforts to add a scaffolding for scalability into Serenity, and will likely be released around the same time as POC2.

Vitalik Buterin
https://ethereum.org

Comments

Author Nikolai Mushegian

Posted at 3:31 am December 28, 2015.

Typo?

“this automatically ends the chain of bets and starts a four-month countdown timer (20 blocks on the testnet)”

Should be four minutes? Or many more than 20 blocks?


    Author Simon Janin

    Posted at 1:57 pm December 28, 2015.

    No, I believe this is correct. That means your stake will be bonded for 4 months in the livenet, but only 20 blocks on the testnet.


      Author xeroc

      Posted at 3:45 pm December 28, 2015.

      At least it is very easy to misinterpret.


      Author Nikolai Mushegian

      Posted at 6:27 pm December 28, 2015.

      Yep, I misread it.


Author Simon Janin

Posted at 2:19 pm December 28, 2015.

Very interesting!

Micro suggestion: it seems that you use floating point arithmetic to represent probabilities and bets — since every probability and bet is essentially a fraction, why not use rational numbers instead?

Let m,n be integers with n≠0 and gcd(m,n)=1, then we represent each probability as P:=m/n.

While this has the disadvantage of not being a native type, it has the nice property that bets are uniquely represented and no rounding error occurs. Maybe this could be interesting for slow strategies where you only bet slightly more than the validator pools.

For example, if we want our bet B to be above P, then since P = m/n, we choose B = (10m + 1)/(10n).
This has the property that you can make extremely finely tuned bets. We also don’t lose any precision by dividing, multiplying, adding and subtracting bets.


    Author Vitalik Buterin

    Posted at 4:25 pm December 28, 2015.

    Micro suggestion: it seems that you use floating point arithmetic to
    represent probabilities and bets — since every probability and bet is
    essentially a fraction, why not use rational numbers instead?

    Right now I am using fixed point with a denominator of 10**9; also, in order to pack a probability into one byte, I’m using a custom form of base 2 scientific notation. I think both of those features are too important for efficiency to sacrifice, unfortunately.


Author Dima Starodubcev

Posted at 3:34 pm December 28, 2015.

Do I understand correctly that in order to maximise my earnings as a validator in consensus-by-bet, all I potentially need to do is:
(1) monitor other validators’ ROI and risk ratios, and
(2) copy-paste the validation strategy that seems to have the highest profitability/risk ratio?


    Author Vitalik Buterin

    Posted at 4:37 pm December 28, 2015.

    That definitely is one way of doing it.


      Author Dima Starodubcev

      Posted at 12:17 am December 29, 2015.

      I suppose that for 95% of validators that will be the only way of doing it, due to the complexity of building a proper validation strategy and contract code. Would Eth/Dev provide a kind of recommended, bulletproof validation code before the hardfork happens?


        Author earonesty

        Posted at 5:41 pm March 11, 2016.

        providing good defaults would, of course, be any eth/dev’s goal


Author Aurel IANCU

Posted at 7:33 pm December 28, 2015.

Are the incentives the same for these two operations, “making blocks and making bets”? Can these operations be done on a Raspberry Pi?


Author Carsten Stöcker

Posted at 10:36 am December 29, 2015.

I am wondering if the Serenity PoS already offers the “Currency-agnostic Proof of Stake” feature described in Vitalik’s post on “Abstraction” … As this feature is not mentioned here and it adds another layer of complexity I assume it is coming in a later release of Casper … What is the plan for Currency-agnostic PoS?


    Author Martin Köppelmann

    Posted at 9:39 pm January 19, 2016.

    As far as I understand, Casper will not be affected by this abstraction. That means all betting will be done with the Ether security deposits. However – transaction fees can be paid with any kind of token. By the way – this is already possible today. In principle you can create transactions that include a transfer of any token to the miner and set the gas price to 0. Then you convince a miner to include the tx.


Author Bob McElrath

Posted at 6:54 pm December 29, 2015.

Betting requires fungibility between forks, which is fundamentally impossible. The time horizon of an attack enabled by this non-fungibility is set by the time required to commit funds to be a bonded validator and then withdraw them. You’ve lengthened the time window of an attack by making it 4 months, but I don’t think the attack can be prevented. To put it another way, in order to achieve consensus you must assume consensus on the Casper contract. This is a circular argument. You must use assets external to the system to create consensus. Bitcoin’s proof-of-work energy expenditure is precisely an asset that IS fungible between forks.


    Author Simon Janin

    Posted at 5:30 pm December 30, 2015.

    What is the difference between trusting the Casper contract and trusting the Bitcoin protocol?

    Have you read Vitalik’s article on weak-subjectivity?

    And for your last point, this is indeed a valid concern in my mind. Network connectivity is critical for Casper. But to be fair, such a partition would also be very damaging to Bitcoin, because of doublespends — although arguably less.


      Author Bob McElrath

      Posted at 6:55 pm December 30, 2015.

      The set of validators is Casper contract state. The bitcoin protocol is a protocol whose state is defined by chain tips. Each chain tip in Ethereum and each “finalized block” define a new state of the Casper contract. Two chain tips in general have a different set of validators, so the state of the contract cannot be used to select between the chain tips.

      In order for Casper to work, the set of validators must be defined outside the state of the Ethereum chain tips, otherwise its argument is circular. But if the Casper state is external to the Ethereum state, now you have to define how Casper state comes to consensus, which is now a separate problem. You’re now back to essentially signing blocks by external trusted authorities.


        Author Vitalik Buterin

        Posted at 8:55 am January 6, 2016.

        In order for Casper to work, the set of validators must be defined
        outside the state of the Ethereum chain tips, otherwise its argument is
        circular

        The set of validators for day N can be deterministically determined from the state at day N-1, because of the 24 hour induction period. So, no problems here; the validator strategy simply needs to make sure that it only calculates ahead one day at a time.


          Author Bob McElrath

          Posted at 6:44 pm January 6, 2016.

          That only works if you’ve already determined consensus on the set of validators for day N-1. A long-lived fork (longer than 24 hours) destroys your algorithm because the two forks have different validators, and you can’t guarantee that won’t happen. Hence, you need an external sense of consensus for day N-1, in order to use it to impose consensus on day N.


          Author Vitalik Buterin

          Posted at 4:07 am January 8, 2016.

          OK, I agree that there will need to be special-case logic for handling forks above 24 hours; I can see two ways out:

          (1) Increase the induction time from 1 day to, say, 7 days.
          (2) Just follow the same algorithm that we follow already of favoring the fork that has higher average betting percentages on it, and have everyone converge on that.


          Author Bob McElrath

          Posted at 4:23 am January 8, 2016.

          There is no value of the induction time that removes this problem. It’s a fundamental flaw.
          This doesn’t work exactly because coins aren’t fungible between forks. By induction you can reduce this problem to the choices of the initial validator.

          With one effective validator, this is a centralized system.


          Author Simon Janin

          Posted at 9:05 pm January 8, 2016.

          In the end, it’s a probabilistic system just like proof of work — as long as the likelihood that the network stays partitioned is not 1, convergence is assured eventually.

          Also, incentives to be connected should be high if validators are mostly paid by transaction fees.

          It’s unclear how well a PoW system would handle a week long partition.


          Author Bob McElrath

          Posted at 9:22 pm January 8, 2016.

          This is false.

          Your argument is: I had consensus <N> blocks ago, therefore I can use that consensus to decide on <N+1>, which is true but centralized, and by induction one could have a reorg all the way back to the genesis block, where the genesis creator decides. This is past looking, not forward looking. So I don’t understand your comment #1. Forward looking would require one to bet on one fork over another, which requires fungibility between forks, which is not present.

          A PoW system would handle a week long partition by evaluating the work on each side of the partition and going with the side that had more miners, it’s very clear.

          We do need to develop better metrics for “confirmation” in a PoW system — 6 blocks isn’t enough if there is a partition. BUT I can measure that e.g. I’m on the 75% partition (by seeing a hashrate loss) so therefore I should wait 8 blocks. If I’m on a minority partition nothing should ever be confirmed until the partition was resolved, regardless of how many blocks are created on the minority side.


          Author Shelby Moore III

          Posted at 7:11 pm January 26, 2016.

          Correct. A PoS system (which is what consensus-by-bet reduces to, and it is mathematically incorrect to claim it is a superclass of PoW) has bounded entropy because the resource consumed is finite and, most importantly, is self-referential. Whereas 51% attacking PoW requires ongoing (unbounded and external) consumption of a resource. And allowing Partition tolerance means either Consistency (e.g. fungibility of partitions a.k.a. tips) or Availability must be lost due to the inviolable CAP theorem. It doesn’t matter how much higher maths Greg Meredith uses to obscure the flaw; the CAP theorem will not be violated.

          Due to high computational cost of verifying long-running scripts, Ethereum has a more acute Tragedy of the Commons, because miners that win more of the blocks win more of the rewards (gas, fees) but all miners have to verify all blocks, thus minority (stake or hashrate) miners’ capital is redistributed to majority miners thus centralizing mining over time. Consensus-by-bet and the use of partitions with validators is an attempt to overcome this most fundamental flaw that Ethereum never fixed. But it simply can’t be fixed. The problem is insoluble, because proof by propagation is not offline-provable consensus. Thus any proofs of cheating or any other self-referential proxies prostituting for actual verification, such as this flawed consensus-by-bet proposal, are all doomed.

          Why did it take such smart guys more than a year and $millions in ICO funds to finally realize they pursued a dead end???


          Author Vitalik Buterin

          Posted at 5:32 pm February 16, 2016.

          Due to high computational cost of verifying long-running scripts,
          Ethereum has a more acute Tragedy of the Commons, because miners that
          win more of the blocks win more of the rewards (gas, fees) but all
          miners have to verify all blocks, thus minority (stake or hashrate)
          miners’ capital is redistributed to majority miners thus centralizing
          mining over time.

          Umm, you DO realize that bitcoin, and every other blockchain, currently has the exact same problem?


          Author Shelby Moore III

          Posted at 2:43 pm March 4, 2016.

          Umm, you DO realize that bitcoin, and every other blockchain, currently has the exact same problem?

          Yes of course I do.

          Also, the gas limit quite well prevents “long-running scripts”; I think
          that because of gas limits it’s a fallacy to state that ethereum’s block
          verification issues are qualitatively different from bitcoin’s, as
          both are capped in some metric.

          Gas can’t be sharded (partitioned) without centralizing verification which defeats the scaling advantage of partitioning. You are up river without a paddle.


          Author bruno cecchini

          Posted at 10:18 pm March 8, 2016.

          This problem will be solved this year or next year at worst.


          Author Shelby Moore III

          Posted at 2:59 am March 9, 2016.

          Sorry to inform you, there is no solution that can employ sharding. It is an insoluble problem, and a direction that Casper is wasting time on that can only end up in failure. Transaction receipts (in lieu of every validator validating every shard that shares cross-shard state) create a Prisoner’s dilemma. It doesn’t matter if Greg Meredith has been able to model the self-referential consensus-by-bet with the Pi calculus, because such a process model does not model such economically driven externalities.


          Author bruno cecchini

          Posted at 6:19 am March 9, 2016.

          I’m confident about myself; for sure we are not talking about the same proof of concept.


          Author bruno cecchini

          Posted at 7:09 pm March 10, 2016.

          The biggest problem came from the school of thought of the University of Chicago; the problem of the solution starts from there.


          Author Vitalik Buterin

          Posted at 5:34 pm February 16, 2016.

          by induction one could have a reorg all the way back to the genesis block, and the genesis creator decides

          No, because why would clients accept a new genesis block that is incompatible with the genesis block that they already have? Remember that I am assuming induction in my security model, ie. it is an assumption that a client which logs on was logged on and synced at some point in the last 4 months.


          Author Bob McElrath

          Posted at 10:03 pm February 17, 2016.

          Let me explain this in a different way. Your plan seems to fall for some of the fallacies of distributed computing: https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing in particular that topology doesn’t change and that the network is reliable.

          If you define consensus inductively, there is a time window over which you are stating that a network split absolutely cannot happen. (Sounds like 4 months for you) There is no value for this time window that makes network splits not happen: undersea cables get dredged, wars happen, embargoes happen. People will also leave an ethereum node mining in a closet and forget about it for 4 months. Once the fork extends beyond the window of your validator bonding, you fundamentally have two forks and absolutely no way to distinguish between them. How do you re-join nodes mining inside an embargoed country with those outside? Both sides of the fork will say “why should I trust your validators that I haven’t seen in 4 months?” At this point the network is forked, and irretrievably broken. You also open an attack: mine a 4-month long fork and start advertising it to new nodes joining the network. They will be unable to tell (add a Sybil attack to overwhelm the “legitimate” fork by number of nodes).

          The inductive plan also has a problem with the first induction window, in which you cannot possibly have bonded validators, and you must fall back to centralized selection.

          Stick with PoW, please.


          Author Vitalik Buterin

          Posted at 10:18 pm February 17, 2016.

          Okay fine, I accept in the security model that users should authenticate the current validator set out-of-band when logging on for the first time, or after they leave their node offline for 4 months. I don’t see how this is any more dangerous than downloading software updates out of band, which users currently do just fine even with bitcoin.


          Author Bob McElrath

          Posted at 11:08 pm February 17, 2016.

          Consensus is then entirely defined by what you choose as your out-of-band mechanism.

          If the “out of band” defines a Stiftung Ethereum server, you have a centralized system.

          What if one side of the fork can’t reach the Stiftung Ethereum server?

          Downloading software has a server or provider source, and is by definition centralized. That’s in no way dangerous, but I do choose to trust Canonical software, for instance.


          Author Vitalik Buterin

          Posted at 3:23 pm February 18, 2016.

          Read https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/ .

          The reason why automated decentralized consensus is useful is not because of some misanthropic idea that social consensus is evil and should be avoided in favor of The Mightiest Chain Rules All™; the reason why it’s useful is that even while humans can theoretically agree on every block over an internet forum, such a process would be horrendously inefficient (and if used that frequently, quite corruptible over time), and so we prefer to have an automated process do the work in the normal case. The edge cases where we do fall back to out of band consensus can be limited and carefully circumscribed so that there is a single obvious schelling point about what the answer should be (note: one of my goals would be to maximally automate this process; Lamport’s result that you can achieve consensus with 100%-epsilon byzantine faults in a fully synchronous model is perhaps the most encouraging sign that this is actually doable).

          If the “out of band” defines a Stiftung Ethereum server, you have a centralized system.

          I imagine users would just go look up the relevant hash from whatever block explorers are popular at the time, and check two if they’re paranoid. No need for Stiftung Ethereum involvement. Automatically inspecting hash graphs (cf. Jeff Coleman’s Universal Hash Time) is an even better solution.


          Author earonesty

          Posted at 5:39 pm March 11, 2016.

          Doesn’t PoW checkpointing handle this, reducing the amount of PoW needed to secure the chain?


          Author earonesty

          Posted at 8:04 pm March 8, 2016.

          Alternatively, a hybrid proof-of-work algo can deal with longer forks. 99% casper 1% PoW blocks as checkpoints.


Author Guy Zyskind

Posted at 7:51 pm December 29, 2015.

Great post and superb work. IMHO, what’s lacking is a more formal analysis proving security/stability (does one exist?). Most importantly, were you able to prove the statement that this validator strategy leads to an equilibrium? Is it a strictly dominant equilibrium or – do all equilibria lead to a ‘good’ outcome (i.e., a correct state)?

There are many other similar questions. To name a couple – is the equilibrium stable given some other signaling and collusion, or how strong are the assumptions on the utilities – i.e., what fraction can be totally byzantine.

Some of these questions were pointed out, so I wonder how much progress has been made on the analytical front? It’s awesome that you’re testing this protocol in simulations, but given its complexity it’s really necessary to state the assumptions on the agents and the properties of the consensus they provide in a more formal way.


    Author Simon Janin

    Posted at 4:58 pm December 30, 2015.

    Formal analysis is indeed necessary — but that will take years.
    In the meantime we can run Monte Carlo simulations with X% Byzantine nodes under some assumptions and see the results.

    I actually believe that for biologically influenced systems, truth is in the Monte Carlo.

    How about running a massive collaborative simulation? I already started tweaking the parameters of the PoC (typically network failure rate and number of nodes, among other things) and seeing the outcomes.
    If I have time I’d like to make a Monte Carlo simulator for the PoC that anyone can run on Linux.


Author RJ C

Posted at 8:22 pm December 29, 2015.

I’m not sure I’m seeing how this can be automated for speed and to move quickly as you suggest.

If I’m understanding this correctly, you are talking about creating a prediction market not just on when a block is processed and at what height, but also asking for a bet on what the state will be… that seems fairly difficult to predict accurately and objectively. How is the state to be calculated in consensus, ie. what gives predictors an incentive to compute the transactions at hand? If anyone can give me an explanation, I’m all ears.


    Author RJ C

    Posted at 11:00 pm December 29, 2015.

    Okay, I can visualize it now. The hard part in understanding this is understanding the difference between by-chain (chain head) and by-block consensus.


Author Thiago Martins

Posted at 2:14 pm January 5, 2016.

Some thoughts about Proof-of-Work VS Proof-of-Stake:

https://bytemaster.github.io/article/2016/01/04/The-Benefits-of-Proof-of-Work/

😉


    Author earonesty

    Posted at 8:10 pm March 8, 2016.

    I think the ideal is a hybrid PoS, where PoS is used for 99 blocks and PoW is used for, on average, 1 block out of 100. This allows for a “mining” industry, whose purpose is to secure protocols like Casper from long PoW attacks. It allows the protocol to be 100 times more efficient, while retaining an extraordinarily high degree of assurance. Another way of looking at it is: do we really need to secure against 10000 block fork attacks? No. We only need to secure against reasonable fork lengths. And whatever that reasonable number is becomes the PoW checkpoint (1 out of 100, even 1 out of 1000 maybe (1/8th of an ethereum day)… something like that). Indeed other coins attempting PoS eventually fell back on checkpointing as a way to secure the coin.


      Author Embedded Thought

      Posted at 4:48 am September 14, 2016.

      So, Peercoin? I know that its check-pointing is not protocol enforced, rather an option and easily disabled if one chooses to edit the client. The default client has checkpointing enabled (though this will be removed in v0.6) but one can easily disable it if they choose to do so. Check out the Peercoin Advanced Relay Subnet project. A group in the PPC community made their own client in which they removed the 80 byte op_return size limit per transaction. They were able to store the entire book of ‘Alice in Wonderland’ within one transaction and have it pushed to the main chain.


Author paul firth

Posted at 8:44 pm February 17, 2016.

“if a block has six confirmations, then reverting it only costs miners (in equilibrium) roughly six times the block reward”

That is only true if the attacker has 50% of the hashing power of the network. In reality, the cost is superlinear in the number of blocks, and the exponent is proportional to the attacker’s hashing power.

