Ethereum Blog

Proof of Stake: How I Learned to Love Weak Subjectivity

Vitalik Buterin

Introduction

Proof of stake continues to be one of the most controversial discussions in the cryptocurrency space. Although the idea has many undeniable benefits, including efficiency, a larger security margin and future-proof immunity to hardware centralization concerns, proof of stake algorithms tend to be substantially more complex than proof of work-based alternatives, and there is a large amount of skepticism that proof of stake can work at all, particularly with regard to the supposedly fundamental “nothing at stake” problem. As it turns out, however, the problems are solvable, and one can make a rigorous argument that proof of stake, with all its benefits, can be made to be successful – but at a moderate cost. The purpose of this post will be to explain exactly what this cost is, and how its impact can be minimized.

Economic Sets and Nothing at Stake

First, an introduction. The purpose of a consensus algorithm, in general, is to allow for the secure updating of a state according to some specific state transition rules, where the right to perform the state transitions is distributed among some economic set. An economic set is a set of users which can be given the right to collectively perform transitions via some algorithm, and the important property that the economic set used for consensus needs to have is that it must be securely decentralized – meaning that no single actor, or colluding set of actors, can take up the majority of the set, even if the actor has a fairly large amount of capital and financial incentive. So far, we know of three securely decentralized economic sets, and each economic set corresponds to a set of consensus algorithms:

  • Owners of computing power: standard proof of work, or TaPoW. Note that this comes in specialized-hardware and (hopefully) general-purpose-hardware variants.
  • Stakeholders: all of the many variants of proof of stake
  • A user’s social network: Ripple/Stellar-style consensus

Note that there have been some recent attempts to develop consensus algorithms based on traditional Byzantine fault tolerance theory; however, all such approaches are based on an M-of-N security model, and the concept of “Byzantine fault tolerance” by itself still leaves open the question of which set the N should be sampled from. In most cases, the set used is stakeholders, so we will treat such neo-BFT paradigms as simply being clever subcategories of “proof of stake”.

Proof of work has a nice property that makes it much simpler to design effective algorithms for it: participation in the economic set requires the consumption of a resource external to the system. This means that, when contributing one’s work to the blockchain, a miner must make the choice of which of all possible forks to contribute to (or whether to try to start a new fork), and the different options are mutually exclusive. Double-voting, including double-voting where the second vote is made many years after the first, is unprofitable, since it requires you to split your mining power among the different votes; the dominant strategy is always to put your mining power exclusively on the fork that you think is most likely to win.

With proof of stake, however, the situation is different. Although inclusion into the economic set may be costly (although, as we will see, it is not always), voting is free. This means that “naive proof of stake” algorithms, which simply try to copy proof of work by making every coin a “simulated mining rig” with a certain chance per second of making the account that owns it usable for signing a block, have a fatal flaw: if there are multiple forks, the optimal strategy is to vote on all forks at once. This is the core of “nothing at stake”.

Note that there is one argument for why it might not make sense for a user to vote on one fork in a proof-of-stake environment: “altruism-prime”. Altruism-prime is essentially the combination of actual altruism (on the part of users or software developers), expressed both as a direct concern for the welfare of others and the network and a psychological moral disincentive against doing something that is obviously evil (double-voting), as well as the “fake altruism” that occurs because holders of coins have a desire not to see the value of their coins go down.

Unfortunately, altruism-prime cannot be relied on exclusively, because the value of coins arising from protocol integrity is a public good and will thus be undersupplied (eg. if there are 1000 stakeholders, and each of their activity has a 1% chance of being “pivotal” in contributing to a successful attack that will knock coin value down to zero, then each stakeholder will accept a bribe equal to only 1% of their holdings). In the case of a distribution equivalent to the Ethereum genesis block, depending on how you estimate the probability of each user being pivotal, the required quantity of bribes would be equal to somewhere between 0.3% and 8.6% of total stake (or even less if an attack is nonfatal to the currency). However, altruism-prime is still an important concept that algorithm designers should keep in mind, so as to take maximal advantage of it in cases where it works well.
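
The undersupply argument above can be sketched numerically. This is a toy model, not a real estimate: each stakeholder only internalizes the expected loss in the (rare) event that their own vote is pivotal, so the minimum bribe they rationally accept is the pivotality probability times the fraction of coin value the attack destroys.

```python
def min_bribe_fraction(p_pivotal, value_loss_fraction=1.0):
    """Smallest bribe, as a fraction of a stakeholder's holdings, that a
    rational stakeholder accepts to help an attack with pivotality
    `p_pivotal` that destroys `value_loss_fraction` of coin value.
    (Illustrative model only.)"""
    return p_pivotal * value_loss_fraction

# The example from the text: 1000 stakeholders, each with a 1% chance of
# being pivotal in a fatal attack -- a bribe of 1% of holdings suffices.
assert min_bribe_fraction(0.01) == 0.01
```

The point of the model is that the bribe is orders of magnitude smaller than the damage: the other 99% of the loss falls on everyone else.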

Short and Long Range

If we focus our attention specifically on short-range forks – forks lasting less than some number of blocks, perhaps 3000 – then there actually is a solution to the nothing at stake problem: security deposits. In order to be eligible to receive a reward for voting on a block, the user must put down a security deposit, and if the user is caught voting on multiple forks, then a proof of that double-vote can be put into the original chain, taking the reward away. Hence, voting for only a single fork once again becomes the dominant strategy.
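
A minimal sketch of the deposit-and-slash rule just described. The names and data structures here are invented for illustration; a real protocol would verify signatures and include the fraud proof on-chain.

```python
deposits = {"alice": 1000}   # signer -> security deposit at stake
votes = {}                   # (signer, height) -> block hash signed

def vote(signer, height, block_hash):
    """Record a vote; slash the deposit if this is a double-vote, i.e. a
    second, conflicting signature at the same height."""
    key = (signer, height)
    if key in votes and votes[key] != block_hash:
        # Double-vote detected: the pair of signatures is the proof that
        # goes into the original chain, and the deposit is destroyed.
        slashed = deposits.pop(signer, 0)
        return ("slashed", slashed)
    votes[key] = block_hash
    return ("ok", deposits.get(signer, 0))
```

Since the deposit dwarfs any single block reward, signing only one fork per height is the dominant strategy for as long as the deposit is locked.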

Another set of strategies, called “Slasher 2.0” (in contrast to Slasher 1.0, the original security deposit-based proof of stake algorithm), involves simply penalizing voters that vote on the wrong fork, not voters that double-vote. This makes analysis substantially simpler, as it removes the need to pre-select voters many blocks in advance to prevent probabilistic double-voting strategies, although it does have the cost that users may be unwilling to sign anything if there are two alternatives of a block at a given height. If we want to give users the option to sign in such circumstances, a variant of logarithmic scoring rules can be used (see here for more detailed investigation). For the purposes of this discussion, Slasher 1.0 and Slasher 2.0 have identical properties.

The reason why this only works for short-range forks is simple: the user has to have the right to withdraw the security deposit eventually, and once the deposit is withdrawn there is no longer any incentive not to vote on a long-range fork starting far back in time using those coins. One class of strategies that attempt to deal with this is making the deposit permanent, but these approaches have a problem of their own: unless the value of a coin constantly grows so as to continually admit new signers, the consensus set ends up ossifying into a sort of permanent nobility. Given that one of the main ideological grievances that has led to cryptocurrency’s popularity is precisely the fact that centralization tends to ossify into nobilities that retain permanent power, copying such a property will likely be unacceptable to most users, at least for blockchains that are meant to be permanent. A nobility model may well be precisely the correct approach for special-purpose ephemeral blockchains that are meant to die quickly (eg. one might imagine such a blockchain existing for a round of a blockchain-based game).

One class of approaches to solving the problem is to combine the Slasher mechanism described above for short-range forks with a backup, transactions-as-proof-of-stake (TaPoS), for long-range forks. TaPoS essentially works by counting transaction fees as part of a block’s “score” (and requiring every transaction to include some bytes of a recent block hash to make transactions not trivially transferable), the theory being that a successful attack fork must spend a large quantity of fees catching up. However, this hybrid approach has a fundamental flaw: if we assume that the probability of an attack succeeding is near-zero, then every signer has an incentive to offer a service of re-signing all of their transactions onto a new blockchain in exchange for a small fee; hence, a zero probability of attacks succeeding is not game-theoretically stable. Does every user setting up their own node.js webapp to accept bribes sound unrealistic? Well, if so, there’s a much easier way of doing it: sell old, no-longer-used, private keys on the black market. Even without black markets, a proof of stake system would forever be under the threat of the individuals that originally participated in the pre-sale and had a share of genesis block issuance eventually finding each other and coming together to launch a fork.
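
The TaPoS scoring idea can be sketched as follows. The representation is illustrative (hashes as plain strings, no signature checking): a transaction only counts toward a chain's score if the recent block hash it committed to is actually on that chain.

```python
def tapos_score(chain_hashes, transactions):
    """TaPoS-style chain score: sum of fees of transactions whose embedded
    block-hash reference lies on this chain. `chain_hashes` is the set of
    block hashes on the chain; each transaction is a
    (fee, referenced_block_hash) pair. (Toy sketch.)"""
    return sum(fee for fee, ref in transactions if ref in chain_hashes)

# "h9" lives on a different fork, so its fee does not count here:
txs = [(5, "h1"), (3, "h2"), (7, "h9")]
assert tapos_score({"h1", "h2", "h3"}, txs) == 8
```

The re-signing attack in the paragraph above works precisely because holders of old keys can cheaply produce new signatures committing to the attacker's fork, inflating its score.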

Because of all the arguments above, we can safely conclude that this threat of an attacker building up a fork from arbitrarily long range is unfortunately fundamental, and in all non-degenerate implementations the issue is fatal to a proof of stake algorithm’s success in the proof of work security model. However, we can get around this fundamental barrier with a slight, but nevertheless fundamental, change in the security model.

Weak Subjectivity

Although there are many ways to categorize consensus algorithms, the division that we will focus on for the rest of this discussion is the following. First, we will provide the two most common paradigms today:

  • Objective: a new node coming onto the network with no knowledge except (i) the protocol definition and (ii) the set of all blocks and other “important” messages that have been published can independently come to the exact same conclusion as the rest of the network on the current state.
  • Subjective: the system has stable states where different nodes come to different conclusions, and a large amount of social information (ie. reputation) is required in order to participate.

Systems that use social networks as their consensus set (eg. Ripple) are all necessarily subjective; a new node that knows nothing but the protocol and the data can be convinced by an attacker that their 100000 nodes are trustworthy, and without reputation there is no way to deal with that attack. Proof of work, on the other hand, is objective: the current state is always the state that contains the highest expected amount of proof of work.

Now, for proof of stake, we will add a third paradigm:

  • Weakly subjective: a new node coming onto the network with no knowledge except (i) the protocol definition, (ii) the set of all blocks and other “important” messages that have been published and (iii) a state from less than N blocks ago that is known to be valid can independently come to the exact same conclusion as the rest of the network on the current state, unless there is an attacker that permanently has more than X percent control over the consensus set.

Under this model, we can clearly see how proof of stake works perfectly fine: we simply forbid nodes from reverting more than N blocks, and set N to be the security deposit length. That is to say, if state S has been valid and has become an ancestor of at least N valid states, then from that point on no state S’ which is not a descendant of S can be valid. Long-range attacks are no longer a problem, for the trivial reason that we have simply said that long-range forks are invalid as part of the protocol definition. This rule clearly is weakly subjective, with the added bonus that X = 100% (ie. no attack can cause permanent disruption unless it lasts more than N blocks).
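
The "max revert N blocks" rule is simple enough to state in a few lines of code. This is a sketch under an illustrative representation (chains as lists of block hashes from genesis onward), not a real fork-choice implementation:

```python
N = 3000  # security deposit length, as suggested in the text

def revert_depth(chain, fork):
    """Number of trailing blocks of `chain` that switching to `fork`
    would revert (length of chain minus the shared prefix)."""
    shared = 0
    for a, b in zip(chain, fork):
        if a != b:
            break
        shared += 1
    return len(chain) - shared

def admissible(chain, fork, n=N):
    """The weak-subjectivity rule: a fork is invalid by definition if it
    would revert more than n blocks, no matter how long it is."""
    return revert_depth(chain, fork) <= n
```

Note that the rule never compares the forks' lengths or scores for deep forks: a long-range fork is simply rejected outright, which is exactly why X = 100% here.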

Another weakly subjective scoring method is exponential subjective scoring, defined as follows:

  1. Every state S maintains a “score” and a “gravity”
  2. score(genesis) = 0, gravity(genesis) = 1
  3. score(block) = score(block.parent) + weight(block) * gravity(block.parent), where weight(block) is usually 1, though more advanced weight functions can also be used (eg. in Bitcoin, weight(block) = block.difficulty can work well)
  4. If a node sees a new block B' with B as parent, then if n is the length of the longest chain of descendants from B at that time, gravity(B') = gravity(B) * 0.99 ^ n (note that values other than 0.99 can also be used).
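
The four rules above can be sketched directly, using weight(block) = 1 and the 0.99 decay factor from the text (both of which the text marks as adjustable):

```python
DECAY = 0.99  # gravity decay per descendant already seen (rule 4)

class Block:
    def __init__(self, parent=None):
        self.parent = parent
        if parent is None:        # rule 2: the genesis block
            self.score = 0.0
            self.gravity = 1.0

def receive_block(block, descendants_of_parent_seen):
    """Apply rules 3 and 4 when a node first hears about `block`, whose
    parent already has `descendants_of_parent_seen` blocks on top of it
    from this node's point of view."""
    parent = block.parent
    # Rule 4: the later a fork arrives, the less gravity it carries.
    block.gravity = parent.gravity * DECAY ** descendants_of_parent_seen
    # Rule 3, with weight(block) = 1.
    block.score = parent.score + 1 * parent.gravity
    return block
```

A block arriving on time (n = 0) keeps full gravity, while a competing block arriving after, say, 5 descendants gets gravity 0.99^5, so every chain built on the latecomer scores lower from then on.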

Essentially, we explicitly penalize forks that come later. ESS has the property that, unlike more naive approaches at subjectivity, it mostly avoids permanent network splits; if the time between the first node on the network hearing about block B and the last node on the network hearing about block B is an interval of k blocks, then a fork is unsustainable unless the lengths of the two forks remain forever within roughly k percent of each other (if that is the case, then the differing gravities of the forks will ensure that half of the network will forever see one fork as higher-scoring and the other half will support the other fork). Hence, ESS is weakly subjective with X roughly corresponding to how close to a 50/50 network split the attacker can create (eg. if the attacker can create a 70/30 split, then X = 0.29).

In general, the “max revert N blocks” rule is superior and less complex, but ESS may prove to make more sense in situations where users are fine with high degrees of subjectivity (ie. N being small) in exchange for a rapid ascent to very high degrees of security (ie. immune to a 99% attack after N blocks).

Consequences

So what would a world powered by weakly subjective consensus look like? First of all, nodes that are always online would be fine; in those cases weak subjectivity is by definition equivalent to objectivity. Nodes that pop online once in a while, or at least once every N blocks, would also be fine, because they would be able to constantly get an updated state of the network. However, new nodes joining the network, and nodes that appear online after a very long time, would not have the consensus algorithm reliably protecting them. Fortunately, for them, the solution is simple: the first time they sign up, and every time they stay offline for a very very long time, they need only get a recent block hash from a friend, a blockchain explorer, or simply their software provider, and paste it into their blockchain client as a “checkpoint”. They will then be able to securely update their view of the current state from there.

This security assumption, the idea of “getting a block hash from a friend”, may seem unrigorous to many; Bitcoin developers often make the point that if the solution to long-range attacks is some alternative deciding mechanism X, then the security of the blockchain ultimately depends on X, and so the algorithm is in reality no more secure than using X directly – implying that most X, including our social-consensus-driven approach, are insecure.

However, this logic ignores why consensus algorithms exist in the first place. Consensus is a social process, and human beings are fairly good at engaging in consensus on our own without any help from algorithms; perhaps the best example is the Rai stones, where a tribe in Yap essentially maintained a blockchain recording changes to the ownership of stones (used as a Bitcoin-like zero-intrinsic-value asset) as part of its collective memory. The reason why consensus algorithms are needed is, quite simply, because humans do not have infinite computational power, and prefer to rely on software agents to maintain consensus for us. Software agents are very smart, in the sense that they can maintain consensus on extremely large states with extremely complex rulesets with perfect precision, but they are also very ignorant, in the sense that they have very little social information, and the challenge of consensus algorithms is that of creating an algorithm that requires as little input of social information as possible.

Weak subjectivity is exactly the correct solution. It solves the long-range problems with proof of stake by relying on human-driven social information, but leaves to a consensus algorithm the role of increasing the speed of consensus from many weeks to twelve seconds and of allowing the use of highly complex rulesets and a large state. The role of human-driven consensus is relegated to maintaining consensus on block hashes over long periods of time, something which people are perfectly good at. A hypothetical oppressive government which is powerful enough to actually cause confusion over the true value of a block hash from one year ago would also be powerful enough to overpower any proof of work algorithm, or cause confusion about the rules of blockchain protocol.

Note that we do not need to fix N; theoretically, we can come up with an algorithm that allows users to keep their deposits locked down for longer than N blocks, and users can then take advantage of those deposits to get a much more fine-grained reading of their security level. For example, if a user has not logged in since T blocks ago, and 23% of deposits have term length greater than T, then the user can come up with their own subjective scoring function that ignores signatures with newer deposits, and thereby be secure against attacks with up to 11.5% of total stake. An increasing interest rate curve can be used to incentivize longer-term deposits over shorter ones, or for simplicity we can just rely on altruism-prime.
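
The fine-grained reading in the paragraph above can be made concrete. This is a toy calculation under the text's assumption that an attacker must control a majority of the deposits the user still trusts, so the tolerated attacker share is half the fraction of stake locked for longer than the user's offline period:

```python
def tolerated_attack_fraction(deposits, offline_blocks):
    """`deposits` is a list of (stake_fraction, term_length) pairs whose
    stake fractions sum to 1.0; returns the share of total stake an
    attacker can hold while the returning user stays secure.
    (Illustrative model only.)"""
    long_term = sum(f for f, term in deposits if term > offline_blocks)
    return long_term / 2

# The example from the text: 23% of stake has term length greater than T,
# so a user offline for T blocks is secure against up to 11.5% of stake.
deposits = [(0.23, 200_000), (0.77, 10_000)]
assert abs(tolerated_attack_fraction(deposits, 100_000) - 0.115) < 1e-12
```

A user who has been offline only briefly trusts all deposits and so enjoys the full 50% bound; the guarantee degrades gracefully with offline time.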

Marginal Cost: The Other Objection

One objection to long-term deposits is that it incentivizes users to keep their capital locked up, which is inefficient, the exact same problem as proof of work. However, there are four counterpoints to this.

First, marginal cost is not total cost, and the ratio of total cost divided by marginal cost is much less for proof of stake than proof of work. A user will likely experience close to no pain from locking up 50% of their capital for a few months, a slight amount of pain from locking up 70%, but would find locking up more than 85% intolerable without a large reward. Additionally, different users have very different preferences for how willing they are to lock up capital. Because of these two factors put together, regardless of what the equilibrium interest rate ends up being, the vast majority of the capital will be locked up at far below marginal cost.

Second, locking up capital is a private cost, but also a public good. The presence of locked up capital means that there is less money supply available for transactional purposes, and so the value of the currency will increase, redistributing the capital to everyone else, creating a social benefit. Third, security deposits are a very safe store of value, so (i) they substitute the use of money as a personal crisis insurance tool, and (ii) many users will be able to take out loans in the same currency collateralized by the security deposit. Finally, because proof of stake can actually take away deposits for misbehaving, and not just rewards, it is capable of achieving a level of security much higher than the level of rewards, whereas in the case of proof of work the level of security can only equal the level of rewards. There is no way for a proof of work protocol to destroy misbehaving miners’ ASICs.

Fortunately, there is a way to test those assumptions: launch a proof of stake coin with a stake reward of 1%, 2%, 3%, etc per year, and see just how large a percentage of coins become deposits in each case. Users will not act against their own interests, so we can simply use the quantity of funds spent on consensus as a proxy for how much inefficiency the consensus algorithm introduces; if proof of stake has a reasonable level of security at a much lower reward level than proof of work, then we know that proof of stake is a more efficient consensus mechanism, and we can use the levels of participation at different reward levels to get an accurate idea of the ratio between total cost and marginal cost. Ultimately, it may take years to get an exact idea of just how large the capital lockup costs are.

Altogether, we now know for certain that (i) proof of stake algorithms can be made secure, and weak subjectivity is both sufficient and necessary as a fundamental change in the security model to sidestep nothing-at-stake concerns to accomplish this goal, and (ii) there are substantial economic reasons to believe that proof of stake actually is much more economically efficient than proof of work. Proof of stake is not an unknown; the past six months of formalization and research have determined exactly where the strengths and weaknesses lie, at least to as large an extent as with proof of work, where mining centralization uncertainties may well forever abound. Now, it’s simply a matter of standardizing the algorithms, and giving blockchain developers the choice.

Vitalik Buterin
https://ethereum.org

Comments

Author Simon de la Rouviere

Posted at 3:59 pm November 25, 2014.

Does this in any way affect autonomous agents? They won’t know where to find “friends” who can give them extra information?


    Author ChuckOne

    Posted at 5:41 pm November 25, 2014.

    Interesting question indeed. In the end human beings need to define the trust lines between them e.g. via certificates etc. One part of the answer would be identifying classes of agents.

    Btw. nice to have some wording for the consensus mechanism of Nxt now: “weakly subjective consensus”. Thanks Vitalik.


    Author Vitalik Buterin

    Posted at 3:27 pm November 29, 2014.

    The creator of the autonomous agent will provide a trusted hash at the start. From that point on, the autonomous agent will just need to take care to be online at least once every security deposit interval. Security deposit intervals can probably be set as long as 12 months.


Author Bill White

Posted at 10:27 pm November 25, 2014.

This was a very helpful article indeed. I’ve been thinking about these PoS issues lately and your analysis and terminology make things much clearer.

By the way, there seem to be bugs in some of the figures. It looks like 0.9 + 0.1 = 0.5 (explicitly in the second figure and implicitly in the third and fourth).


Author lmm

Posted at 10:29 pm November 25, 2014.

“EV = 0.9 + 0.1 = 0.5” must be some new kind of maths 😛


    Author Currency Forward

    Posted at 5:51 am January 7, 2015.

    what he meant to say is “split the vote, then 0.9*1/2+0.1*1/2=0.45+0.05=0.5”


Author psztorc

Posted at 1:02 am November 26, 2014.

But haven’t you now given your friend a direct *reason* to lie to you about what the hash is? You are not simply “trusting” your friend to be honest, you are hoping he will act against his self-interest.

And how many friends do you think will run full nodes at all?

Do you plan to respond to my comments concerning the decentralized exchange created by PoW? Or to my responses to zack in the comments section ( of http://www.truthcoin.info/blog/pow-and-mining/ )?


    Author Joshua Davis

    Posted at 9:05 am November 27, 2014.

    I don’t find your argument convincing at all. Are you saying that it is in the realm of my friend’s self interest that he lie to me? What type of friend is that? This matter of how many friends run full nodes to me seems to be a criticism that doesn’t do much to convince me that what Vitalik is saying here won’t work. A friend of 20 of my friends would still be a reliable source so long as that party had a solid reputation among my 20 friends.

    Finally I want to repeat what Daniel said because to me this is the key to consensus. Blockchains are a social convention that go beyond software. Consensus is a human construct not a technological one and the software is there merely to aid humans, not be fully automatic. It’s the combination of the human factor of trusting a reliable party who can convey to me the state of the network with the software factor of the algorithm doing what I cannot do, which is index the state of the network and allow me to know if any one transaction is valid or not, that creates consensus.

    Great job Vitalik! I am reading the tendermint whitepaper now and I hope to gain at least a superficial understanding of what you are sharing in this post with a bit more effort and consideration!


      Author psztorc

      Posted at 9:54 pm November 28, 2014.

      One would hope it would be obvious by now: If you’re going to assume that everyone can always find “the real block hash” from a trusted 3rd party, then there’s no reason to use blockchain technology at all. Satoshi’s rules for invalidating block hashes would be superfluous.

      RE: full nodes. The claim is “they need only get a recent block hash from a friend”, and it demands proof (at a minimum) that [1] such a person will exist when I want to access the network (personally, my friends/family do not run servers or keep their computers on at night), [2] that they will choose to provide me with information on-demand, and [3] that no one will interrupt, intercept, or attack this process (by DoSing or hacking my friend).

      Your second paragraph is nonsensical because the stated purpose behind Blockchain technology is to avoid trusting 3rd parties.


        Author Martin Köppelmann

        Posted at 9:04 am November 29, 2014.

        “Your second paragraph is nonsensical because the stated purpose behind Blockchain technology is to avoid trusting 3rd parties.”
        To some degree you are always trusting 3rd parties. Have you written the code of BitcoinQT? Yes? Ok, have you compiled it for yourself? Yes, ok, have you checked the code of the compiler? How can you trust your hardware? You always have to trust others to some degree.

        With the proposed PoS version you just need to find a single valid hash from the last year. That is really something completely different from “always find ‘the real block hash’ from a trusted 3rd party”. Or to paraphrase your words:
        “if I need to trust the hardware of my computer and the C compiler in the first place, then this whole Blockchain technology is pointless”


          Author psztorc

          Posted at 11:10 pm November 29, 2014.

          This is the logical fallacy of equivocation, of the concepts of “trust” and “personal verification”.

          I haven’t “personally” written the code of BitcoinQT. I didn’t “personally” create-or-verify The Universe, or the Laws of Physics/Chemistry/Computer-Networking, but I’m comfortable relying on those.

          There is overwhelming evidence which I can “trust” without “personally verifying” it. I don’t “personally” understand the proof of Fermat’s last theorem, but from what I *did* “personally learn” about the-way-humans-behave, I know that, if there were an error with the proof, several individuals would have independently publicized the error, and the error would start appearing in discussions everywhere.

          The same is true for *relevant* errors in the compiler or the hardware (errors that would affect my life somehow)…if they existed, with just a little time they would appear publically. In a way, I “trust” these things because I have “personally verified” that there hasn’t been an internet/media explosion concerning problems with these things. I also understand evolution and the nature of economic competition, where life forms and competing-agents seek out flaws in their rivals for exploitation.

          All it takes to verify that my Bitcoin version works is a little time, and indeed most people wait a very long time to upgrade their Bitcoin software, if they upgrade at all ( https://bitcoinfoundation.org/2014/09/bitnodes-project-2014-q3-report-the-state-of-bitcoin-p2p-network/ ). Attacks do not take place on Bitcoin upgrades, precisely because it is so difficult to get away with one. If it were easier to try, people would “trust” less and “personally verify” more. People might never upgrade at all.

          It is an intellectual cop-out to say that because someone doesn’t personally know everything, they are doomed to a lifetime of “faith” or “subjectivity”. This is the pseudoscientific Postmodernism-nonsense which (thankfully) has long been dead!


          Author Martin Köppelmann

          Posted at 11:25 pm November 29, 2014.

          “I know that, if there were an error with the proof, several individuals would have independently publicized the error, and the error would start appearing in discussions everywhere.”

          The same is true for an alternative blockchain. If somehow overnight a blockchain would appear that forks from the common one from a really long time ago, this news would be spread. The “trust” or “personal verification” required by Vitalik’s proposal is no bigger than downloading the right bitcoin-qt version (there are for sure forks that start at a different genesis block, or have different checkpoints). (Yes – Satoshi used to add checkpoints from time to time to Bitcoin-QT, so this is really not different from Bitcoin at all…)

          Note that this only holds true for “no revert” times like 2-12 months. Less than 24h like NXT is really a different story.


          Author psztorc

          Posted at 11:40 pm November 29, 2014.

          I’m afraid it still isn’t the same. Someone can make several forks branch out from each block in realtime. How to reach consensus on which single fork is the true fork? Without Satoshi’s rules it is a he-said-she-said, Sybil attack free-for-all.


          Author Martin Köppelmann

          Posted at 11:52 pm November 29, 2014.

          “Someone can make several forks branch out from each block in realtime. How to reach consensus on which single fork is the true fork?” I am not sure what you mean. Of course – if you can make one fork you can make n forks. But they all start before the reorg time limit. (We are still talking about long range attacks from, let’s say, the initial shareholders.) If you mean short range attacks you should explain how you expect them to work.
          So my understanding is that the only thing that is indeed important is that a new user needs to download a version with a correct checkpoint within the last year. But this is to me a similar “personal verification” task to making sure that the page I download BitcoinQT from is not hacked.


          Author psztorc

          Posted at 12:27 am November 30, 2014.

          What if there are 1,000,000 possible “versions” (hashes) for your friend to choose from at any given time? These are easy to make in parallel because they do not require PoW, as many others have explained.


          Author Martin Köppelmann

          Posted at 1:00 am November 30, 2014.

          I think you have a misconception here. Yes – an attacker (or attacker group) can create 1,000,000 possible “versions” of a blockchain that all start at a point in time where they control the large majority of all coins. And indeed if someone publishes old private keys or the initial stakeholders collude this can happen.

          However, all these 1,000,000 chains are provably different from the real chain if you know only a single block from the real chain after the block where the attacker holds the majority of the keys.


          Author psztorc

          Posted at 2:17 am November 30, 2014.

          This line of reasoning is specifically to refute your previous comparison (Fermat vs BitcoinQT). Proving a single chain false is not the same as proving another chain is true. However, the “absence of evidence” that “my BitcoinQT is defective” IS evidence that any such defect is absent.

          user

          Author Martin Köppelmann

          Posted at 2:30 am November 30, 2014.

          The same holds true for PoS and “weak subjectivity”. If you consider a chain to be the true chain, you can do so because of the “absence of evidence”: there is no other chain that is the same as yours but different in the last n blocks.

          user

          Author psztorc

          Posted at 3:23 am November 30, 2014.

          The multiple problems of PoS are separate from what we were talking about: your incorrect conflation of “trust” and “personal verification”. You’re trying to hop away from that mistake back into the larger issues.

          Even the idea that a tiny security deposit can discourage a large double-spend against an exchange is far-fetched. Then, we have missed-signers and PoW-baiting, and much more. Plenty to talk about, but the conversation needs to stay organized.

          user

          Author Mark Pey

          Posted at 7:41 am December 22, 2015.

          It all seems like overkill, solution looking for a problem stuff. Why not pick a social network consensus group, decide that you do trust enough of them not to collude, and be done with it. I’m thinking of non-market participants like Microsoft, IBM, PwC, etc for example running Ripple or Stellar nodes. Lock down to 25 or so in that category, can’t flood with new validating nodes, consensus is arrived at, and off you go, no unnatural acts required.

          user

          Author psztorc

          Posted at 8:21 am December 22, 2015.

          I’m thinking of non-market participants like Microsoft, IBM, PwC, etc for example running Ripple or Stellar nodes.

          You are describing banking.

          user

          Author Vitalik Buterin

          Posted at 1:39 pm November 30, 2014.

          > How to reach consensus on which single fork is the true fork?

          The true fork is the fork that has existed the whole time, and didn’t just suddenly appear out of nowhere long after the fact. A few IQ points of plain old human collective intelligence is enough to determine which one that is.

          user

          Author Vitalik Buterin

          Posted at 1:36 pm November 30, 2014.

          So, there are two potential long-range attack vectors against weak subjective consensus:

          1. Some large effort somehow convinces everyone on the internet that the correct checkpoint at block N – 10000 is C’, when it was actually C.

          This imo is absurd. Large global-scale campaigns to convince people of an untrue statement can sometimes succeed, but all practical instances of that are either (i) under governments whose subjects lack internet access or aren’t used to getting their info from internet sources, which is a hopeless case for blockchain tech anyway, or (ii) where there is plenty of subjectivity and room for error involved. The value of a 32-byte hash from a particular block number a few months ago is as clear as night and day.

          2. An attacker targets one node specifically, trying to convince it that the checkpoint at block N – 10000 is C’ instead of C, while the rest of the world knows that it is actually C.

          I propose several possible solutions to that:

          1. Use the same trust-by-default-if-no-media-explosion mechanism for checkpoints that you do for software versions: when you go back online and download a checkpoint, look for one which has been published by some reliable sources and has been around for more than two weeks.
          2. Have your client keep track of a few dozen nodes which had a large stake in the system last time it logged on, and then take a poll of them (nodes are motivated to answer honestly because (i) altruism-prime, and (ii) if they answer dishonestly then you will remember that and later ignore them or do the same to them, ie. the iterated prisoner’s dilemma argument; (i) is more important). If there is substantial discord between the different sources, then the user would be alerted that nodes are probably lying to them and they should go out and check a few forums and see what the real hash is.
          3. Check a few block explorers. Block explorers are motivated to tell the truth because if they don’t then there will be a media explosion and they will lose market share and thus ad revenue. Also, the media explosion effect applies to them as well, so you can trust them in the same way that you trust software providers, and they will at most be able to cheat a few people (realistically, 0 people if you check more than one)
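          A minimal sketch of how solutions 2 and 3 could be combined on the client side (all names here are hypothetical; this is just the cross-checking logic, not any real client’s API):

```python
from collections import Counter

def resolve_checkpoint(sources, query, min_agreement=0.8):
    """Poll several independent sources (staked nodes, block explorers,
    software providers) for the checkpoint hash. Accept the majority
    answer only if agreement is overwhelming; otherwise alert the user
    to verify manually (e.g. on forums)."""
    answers = [query(src) for src in sources]
    (best_hash, votes), = Counter(answers).most_common(1)
    if votes / len(answers) >= min_agreement:
        return best_hash
    raise RuntimeError("Checkpoint sources disagree; verify manually")

# Toy run: three honest sources and one liar give 75% agreement,
# below the 80% threshold, so the user is told to double-check.
query = lambda src: "deadbeef" if src == "evil" else "c3ab77f2"
try:
    resolve_checkpoint(["a", "b", "c", "evil"], query)
except RuntimeError as e:
    print(e)
```

          With only honest sources the majority answer is returned directly; the threshold matters only when the answers diverge.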

          The mistake you’re making is confusing “trust” with “irrational trust”. Most people’s trust that multiple large reputable organizations are not going to collude to screw them over by actually lying about such a black-and-white fact as a 32 byte hash from a few months ago is very much rational, motivated by an understanding of (i) economics, (ii) moral psychology and (iii) historical empirical data. Deception happens on the margin, when agents with weak morality realize that they can get away with some slight malfeasance that they can justify to themselves and to others as actually being OK if you interpret your ethical principles and rules somewhat creatively (see Dan Ariely’s “The Honest Truth About Dishonesty”).

          Additionally, note that attackers don’t just have the ability to freely create new forks at will. They still need the cooperation of >25% of stake; the only thing that changes is that you lose the deposit incentive not to double-vote and all you have left is altruism-prime and social incentives.

          Sound complex? Sure, multi-heuristic approaches are. But you have to have a really really high amount of complexity overhead before it actually becomes worse than the billions of dollars of expenditure that will be annually wasted by proof of work, and all this doesn’t even begin to reach 1% of that.

          user

          Author psztorc

          Posted at 6:26 am December 1, 2014.

          I did not realize that you were limiting “weak subjectivity” to only preventing multi-year reorganizations. I agree that those are very unlikely…I would even say that they are not a problem at all (and so need no “solution”…Gavin proposed similar ideas that are much more tractable than subjectivity, several **years** ago [ https://bitcointalk.org/index.php?topic=78403.msg874553#msg874553 ]). I frequently talk about PoS attacks in this “whole chain rewrite” extreme (as I’ve done on my blog), but the point is that attackers can build ‘catch-up chains’ to reverse transactions that took place >1 hour ago (which they “can’t” do in PoW).

          I doubt the security deposit will discourage attempts to conduct multimillion-dollar double-spends (unless the security deposit is itself millions of dollars, which has its own problems). But perhaps this isn’t the right forum to reopen the entire case against PoS.

          user

          Author Vitalik Buterin

          Posted at 9:31 am December 1, 2014.

          So, Gavin’s idea is basically a version of TaPoS, and that’s economically exploitable – specifically, if an attacker purchases private keys of genesis holders on the black market, then the attacker can simulate block signing _and_ transaction activity.

          Yeah, the plan is to make the deposit length at least a few months, and my preference is a year. I would totally not support subjectivity at the multi-hour timescales that NXT and Bitshares are currently going with. And the idea is precisely to make security deposits very large; a large part of the whole point of PoS is that via deposits the security margin can be made larger than the reward, which is not the case with PoW, hence why PoS is cheaper (I completely forgot to add that argument to my post for some reason).

          user

          Author psztorc

          Posted at 5:22 pm December 2, 2014.

          Very interesting. If the security deposit were greater than the sum of all transactions in a block, one would be unable to profitably doublespend (unless they could control or destroy the network in some other way from that point forward).

          You might then limit the “sum of transactions per block”, as with fast blocks individuals might just make several consecutive transactions, and the limit would also stop the deposit from needing to be several million dollars.

          Don’t you think the security deposits will have to be not only “very large”, but “very very large”, nearly millions of dollars? I think you should consider writing your next blog post about the implications of this. If someone steals my signing key, I lose my millions…all to gain a small block reward. You have reduced the marginal benefit (and therefore the marginal cost, as you contrived), but how many individuals will still want to act as signers, and how variable will this number be? Are you worried that this will scale in the opposite sense: that it can’t work while the chain is small/obscure? How might you get it off the ground?

          user

          Author Vitalik Buterin

          Posted at 5:55 pm December 2, 2014.

          So first of all, I really do recommend you talk to Vlad; I invited you to the CCRG slack and he’s frequently active there. He spends even more time thinking about security deposit theory than I do.

          Your suggestion of making the security deposit greater than the sum of all tx volume in a block is a very good idea, but unfortunately it’s not quite sufficient exactly as you stated it, since an attacker can use the same deposits for N blocks in a row and get them slashed only once but double-spend all txs in those blocks; what you want for _perfect_ security is a deposit equal to the sum of all tx volume during a max reorg period (or a maximum of having the deposit be 50% of all coins, since if you double-spend the same coins twice the first double-spend double-spends the second double-spend). But in practice I think a lower margin should be fine.
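          The deposit-sizing rule just described can be written down directly (a sketch; the numbers are purely illustrative):

```python
def required_deposit(volume_per_block, max_reorg_blocks, total_supply):
    """Deposit needed for 'perfect' security: cover all transaction
    volume an attacker could double-spend within the max reorg window,
    capped at half the coin supply (double-spending the same coins a
    second time just double-spends the first double-spend)."""
    return min(volume_per_block * max_reorg_blocks, total_supply / 2)

# Illustrative: 10k coins of volume per block, a 1000-block reorg
# limit, 1M total coins -> the 50%-of-supply cap binds.
print(required_deposit(10_000, 1_000, 1_000_000))  # 500000.0
```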

          If you want Bitcoin-equivalent security, then the security deposits that you will need in Bitcoin are a total of $70 million (we’ll make it $250 million because money is much more liquid and quick to obtain, so the community has less of a chance to respond) – 5% of the supply of total BTC, and equivalent to half a week worth of transaction volume.

          So for a signer’s profit you have the formula:

          signing_profit(d) = (r – h) * d

          Where r is the rate of return for signing, h is the annualized probability of getting hacked, and d is the deposit length. Now, suppose that the interest rate in that currency (equal to the global prevailing interest rate plus a risk premium for the currency’s volatility) is i. A signer will sign if:

          signing_profit(d) > i * d

          (r – h) * d > i * d

          Hence, the algorithm will need to set:

          r > i + h

          Note that this formula assumes zero risk aversion; if we assume a logarithmic utility model, then this only holds for depositing small amounts; for depositing half of one’s wealth we would have:

          signing_utility(d) = ((1 – h) * U((1 + r/2) * d) + h * U(d / 2)) – U(d)

          = (1 – h) * U((1 + r/2) * d) – (1 – h) * U(d) + h * U(d / 2) – h * U(d)
          = (1 – h) * ln(1 + r/2) – h * ln(2)
          ~= r * 0.5 – h * ln(2)

          So we need:

          r * 0.5 > i * 0.5 + h * ln(2)
          r > i + h * 1.386

          So, slightly worse, but still, “the market will find a balance”. Note that:

          1. People are willing to invest in Bitcoin-denominated investments like Huobi even though those are vulnerable to hacks (I’m pretty sure they have a “caveat emptor” policy there; anything else would be too vulnerable to scams) which have interest rates at ~15%, so we know i + h < 0.15 and thus (i + h) * 1.386 < 0.21.
          2. We could cynically assume that Dunning-Kruger reduces people's _expectation_ of h to below the _actual value_ of h. However, if we're analyzing this from a welfare economics perspective then this doesn't really matter since the difference between the two will be accounted positively as a reduced cost to the system and negatively as a negative externality, so we can ignore this.
          3. h is not the probability of being hacked at all, it's actually the marginal increased probability of being hacked if you are signing versus keeping funds in secure cold storage. Given that my algos (slasher ghost, etc) are designed to be multisig-friendly, the only delta involved is the multisig cold storage vs multisig hot storage distinction, which is probably much smaller than the total probability of being hacked in single-sig environments.
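          The break-even conditions derived above are easy to evaluate numerically (a sketch; the values of i and h are illustrative, not measured):

```python
import math

def required_return(i, h, log_utility=False):
    """Minimum signing return r that makes depositing worthwhile:
    r > i + h for a risk-neutral signer, or r > i + 2*ln(2)*h
    (= i + h*1.386) for a log-utility signer depositing half
    their wealth."""
    return i + (2 * math.log(2) * h if log_utility else h)

# e.g. a 10% interest rate and a 5% annualized hack probability:
print(f"{required_return(0.10, 0.05):.3f}")                    # 0.150
print(f"{required_return(0.10, 0.05, log_utility=True):.3f}")  # 0.169
```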

          For your point on the algo not working when the chain is small-scale, I think that's resolvable. In fact, note that chains that are small-scale tend to have a high speculation-to-volume ratio, therefore a high value-to-volume ratio, so a volume-indexed PoS algo will actually need to spend less on security.

        user

        Author Vitalik Buterin

        Posted at 3:25 pm November 29, 2014.

        > One would hope it would be obvious by now: If you’re going to assume that everyone can always find “the real block hash” from a trusted 3rd party, then there’s no reason to use blockchain technology at all. Satoshi’s rules for invalidating block hashes would be superfluous.

        Well, except for the whole bounded rationality argument I made in my post…

        > [1] such a person will exist when I want to access the network (personally, my friends/family do not run servers or keep their computers on at night)

        Fine then, trust your software provider whom you are trusting already.

        > [2] that they will choose to provide me with information on-demand

        Providing a 32-byte hash is close to zero-cost.

        > [3] that no one will interrupt, intercept, or attack this process (by DoSing or hacking my friend).

        The same argument can be applied to the process of yourself downloading the blockchain software.

        Reply
          user

          Author psztorc

          Posted at 11:15 pm November 29, 2014.

          > Well, except for the whole bounded rationality argument I made in my post…

          It seems pointless to point out that ‘arguments exist’ when instead one could just ‘argue’, right?

          > Fine then, trust your software provider whom you are trusting already.

          You are wrong about that. I do not “trust” my software provider (in the sense that you use it here for the friend’s hash, of “trust without confirming evidence”) and I don’t have to (see my response to Martin).

          > Providing a 32-byte hash is close to zero-cost.

          Power, hardware, computer/connection must be on, opportunity cost of lying, threats/bribery, …

          > The same argument can be applied to the process of yourself downloading the blockchain software.

          No, it can’t (see below). I can wait for 6 months to have “everyone else” check the blockchain software, and download that version, and then I can choose to never download another version again. It is *because* executing a double-spend this way (via a corrupt version) has such a high cost/benefit ratio that no one attempts to attack this way, and, therefore, it appears secure. People are aiming for a different weakest link now; they’ll change their aim if you give them a different chain.

          user

          Author Joshua Davis

          Posted at 11:10 pm October 22, 2015.

          Also, with IPFS you simply ask the network for a file by specifying its hash, if I am not mistaken, so it seems that IPFS would establish file hashes as an incredibly reliable way of getting access to the real file.

user

Author Dan Rizzatz

Posted at 1:28 am November 26, 2014.

I understand some of these words.

Reply
    user

    Author Alex Millar

    Posted at 1:40 am January 1, 2016.

    You make a good point. If regular people can’t understand what secures proof of stake, then we’ll have a much harder time trusting platforms that run on it.

    Reply
      user

      Author Cath Thomas

      Posted at 9:48 am February 8, 2016.

      Do regular people understand SSL? TCP/IP? Has this prevented them from using them? No, at some point we trust others to understand other things that we do not. There’s nothing wrong with that.

      Reply
        user

        Author earonesty

        Posted at 5:32 pm March 5, 2016.

        I understand both TCP/IP and SSL, and I helped work on them. I kind of get elliptic curve cryptography…. but right now PoS and PoW are not really as formally analyzed. Cryptocurrencies live at the intersection of computer science and game theory… and are inherently harder to analyze as a result.

        Reply
user

Author chepurnoy

Posted at 1:32 am November 26, 2014.

We in Consensus Research are investigating multibranch forging & its consequences. We’ve just published the first paper on the multibranch model & the simulation tool we’ve developed (it will be open-sourced within the next few days). The next paper will be on a Nothing-at-Stake formal definition / simulation / requirements & consequence estimations. Join the discussion please! https://nxtforum.org/consensus-research/multibranch-forging-approach/

Reply
user

Author gubatron

Posted at 4:38 am November 26, 2014.

that vitalik sure has time to blog very long posts. hopefully this is part of the focus necessary to deliver ethereum.

Reply
    user

    Author Gritt N. Auld

    Posted at 8:57 am November 26, 2014.

    Can people not have time to themselves?

    Reply
      user

      Author gubatron

      Posted at 6:04 pm November 26, 2014.

      not that much if you want to change the world and when counterparty is not fucking around.

      Reply
    user

    Author crainbf

    Posted at 2:52 pm December 11, 2014.

    Figuring out the most efficient blockchain architecture is essential for deploying a system that survives in the long term.

    Reply
user

Author Vitalik Buterin

Posted at 4:53 pm November 26, 2014.

And Bitshares. And Tendermint. And our Slasher prototypes since September. I have not invented anything new here, it’s simply a formalization explaining why revert limits are legitimate. However, I will note that other algos seem to have revert limits in the 1-24 hour range, whereas I’m targeting a revert limit of 1-12 months.

Reply
    user

    Author jaekwon

    Posted at 5:36 am November 27, 2014.

    Except Tendermint, which is targeting a revert limit of around 12 months. It’s already implemented, BTW. Feel free to use it. You’re concerned about scaling issues with the number of validators, but my initial simulations show that it’s fine on a proper gossip network implementation.

    Reply
      user

      Author Benjamin_Bit

      Posted at 7:37 am November 27, 2014.

      Not sampling (random or otherwise) seems undesirable to me. What if people disappear, so that the network falls short of a quorum? Wouldn’t that be an absorbing state?

      Reply
        user

        Author jaekwon

        Posted at 9:49 am November 27, 2014.

        It’s not an absorbing state in the sense that if these validators come back online, consensus continues. If the validators disappeared because they’re kamikaze Byzantine or because their private keys were lost, then yes it would be an absorbing state. But consider a network split scenario where two continents are disconnected from each other. If neither side has a sufficient quorum, they *should not* be committing any blocks.

        You might counter that a fair random sampling of validators could ensure that even in the case of a network split, the sample is representative enough that the chances of a sufficient quorum being in the sample is vanishingly small. Perhaps, as long as you can construct a random number generator that is sufficiently resistant to “grind attacks”. In any case, this solution would still suffer from the halting problem in the case where the network falls short of a quorum. So while sampling would help with cutting down on bandwidth, there’s no real need if you’re satisfied with the performance characteristics of Tendermint (1 minute blocks, 5000 validators, 10Mbit/s is my estimate, & better on faster networks).

        Sampling also introduces complications with incentive alignment. If only a small sample is required to commit a block, the penalty of forking the block chain goes down, perhaps to zero. So one quorum of a sample may commit a block H, but what prevents another quorum of another sample from committing an alternative block at H? You might counter that this problem is solved by requiring more block confirmations before a transaction is committed. But that doesn’t provide any benefits over requiring a quorum of the whole network to validate the next block.

        Finally, there is a tradeoff between the quorum size needed to commit a block, and the maximum guaranteed penalty of forking the block chain. So if you’re not comfortable with a 34% quorum disappearing leading to a halt in consensus, then you can bump that up to say 45%, but then you only need potentially as little as 10% of duplicitous signers to fork the block chain. This is a tradeoff worth considering, especially since only duplicitous signing actually generates hard evidence. More information here: http://goo.gl/TxOJzC

        I think the best thing to do is to get over our fear of absorbing (halting) states in consensus, and recognize that in practice, in fatal scenarios we humans can reboot consensus as necessary. I strongly suspect that with the right incentives (e.g. incentivizing validators to explicitly “sign off” vs timing them out implicitly, incentivizing hackers to announce a successful hack of a validator), a consensus network based on Tendermint will operate smoothly with sufficient market cap, decentralized coin distribution, and reasonable validator server allocation distribution. There may be hiccups along the way, but the network will get stronger over time as byzantine validators get pruned away.

        Reply
          user

          Author Vitalik Buterin

          Posted at 3:43 pm November 29, 2014.

          > Sampling also introduces complications with incentive alignment. If only a small sample is required to commit a block, the penalty of forking the block chain goes down, perhaps to zero. So one quorum of a sample may commit a block H, but what prevents another quorum of another sample from committing an alternative block at H

          Right, so there are two solutions to that:

          * Slasher 1.0 approach: pre-select the subset many blocks beforehand (see original post at https://blog.ethereum.org/2014/01/15/slasher-a-punitive-proof-of-stake-algorithm/ )
          * Slasher 2.0 approach: penalize not double-voting, but voting on the wrong chain, ie. voting for A penalizes you in B, and voting for B penalizes you in A.

          > Perhaps, as long as you can construct a random number generator that is sufficiently resistant to “grind attacks”

          The NXT approach (basically, generationSeed(genesis) = 0, generationSeed(block) = hash(generationSeed(block.parent) + blockmaker_pubkey) ) seems to work pretty well. The only way for a blockmaker to influence the seed result is to drop out, and that is costly. Low-influence functions are the other category that is worth researching.
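          The seed chain described above can be sketched in a few lines (SHA-256 stands in here for NXT’s actual hash function; the keys are placeholders):

```python
import hashlib

def next_seed(parent_seed: bytes, blockmaker_pubkey: bytes) -> bytes:
    """generationSeed(block) = hash(generationSeed(parent) + pubkey).
    A block maker's only lever over the seed is to not make a block
    at all, which costs them the block reward."""
    return hashlib.sha256(parent_seed + blockmaker_pubkey).digest()

# generationSeed(genesis) = 0; each block folds in its maker's key.
seed = bytes(32)
for pubkey in [b"alice-pub", b"bob-pub", b"carol-pub"]:
    seed = next_seed(seed, pubkey)
print(seed.hex())
```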

          user

          Author jaekwon

          Posted at 9:22 pm November 29, 2014.

          Well, what I mean by incentive-alignment is that you want the guaranteed penalty of a double-spend attack to be rather large. Even assuming a perfect random oracle used for sampling, if only a few validators are required to commit a block, then only a few coins are “at stake”. So while sampling works well assuming that most nodes are honest and only a fraction (e.g. <1/2 or <1/3) are Byzantine, what we should be doing for cryptocurrency protocols is *not* assume honesty and *still* guarantee that large amounts are “at stake” in the event of double-spending. What happens when validators are slightly incentivized to run an alternative protocol that is revised to collude to double-spend large amounts whenever profitable? Wouldn’t validators switch to this colluding strategy?

          I wrote a post about this, which became the intro to the revised whitepaper: http://goo.gl/86I0Yn

        user

        Author Vitalik Buterin

        Posted at 3:37 pm November 29, 2014.

        > What if people disappear, so that the network falls short of a quorum? Wouldn’t that be an absorbing state?

        I suppose that:

        1. That is still a problem with sampling algos, though to a lesser extent, because you can get lucky during intervals; e.g. if 50% participation is required with 30 signatories sampled, and 33% are online, then about 1 in 25 blocks will process.
        2. It’s not really likely to happen, and if it does happen it will happen slowly, so you can detect it and remove inactive validators from the quorum
        3. You can penalize non-voting, so you know that voters are committed to stay
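        The “1 in 25” figure in point 1 can be checked with a direct binomial calculation (a sketch):

```python
from math import comb, ceil

def block_pass_probability(sampled=30, online=1/3, quorum=0.5):
    """Chance that at least `quorum` of the `sampled` signatories are
    online (each independently with probability `online`), letting
    the block process during that interval."""
    need = ceil(quorum * sampled)
    return sum(comb(sampled, k) * online**k * (1 - online)**(sampled - k)
               for k in range(need, sampled + 1))

p = block_pass_probability()
print(p)  # roughly 0.04, i.e. about 1 in 25 blocks
```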

        Reply
      user

      Author Vitalik Buterin

      Posted at 3:32 pm November 29, 2014.

      > Tendermint doesn’t sample, it requires all bonded validators to participate. And it’s not based on cryptographic m-of-n threshold signatures either so bond amounts can be variable. Why sample when you can have block commit times in the order of 1 minute for 5000 validators on 10Mbit connections, and where internet bandwidth increases every year?

      Well, I would argue that requiring everyone to participate is a degenerate case of sampling, which is a legitimate usage for the purpose of that particular paragraph.

      Now, why sample non-degenerately? Because in Python an ECC verification takes 0.04s, so 5000 of them will take 200 seconds, which is longer than a block time. You pretty much have to choose between centralization (requiring a minimum deposit size, leaving the system open only to the rich) and making your system unfriendly to nodes that do not have very high computing power.

      Reply
        user

        Author jaekwon

        Posted at 7:20 pm November 29, 2014.

        Use C bindings to batch verify Ed25519. Commodity server/desktop/laptops are sufficient.

        Reply
          user

          Author Vitalik Buterin

          Posted at 7:42 pm November 29, 2014.

          1. That’s not particularly nice to light clients who are trying to verify a header chain off of phones or IoT devices.
          2. If you rely on batch verification and a particular signature algorithm you lose out on many opportunities for generalization, both in terms of what sig algorithm you’re using and what access policy an account has. You really don’t want an opportunity to trigger a large security deposit loss to be hidden behind a single-sig. So that will increase the load by maybe 2-5x between the extra signatures and the unique work of verifying each one.
          3. 64 bytes per sig * 5000 users = 320k per minute = 500MB per day is actually quite a lot imo. Even Bitcoin right now processes a _total_ of 50 MB per day of data.
          4. I think 5000 users is substantially understated; if this goes another couple of orders of magnitude mainstream 100000 might be more reasonable. There are at least 100000 Bitcoin users, and you can bet the majority of them would be interested in stake-voting with at least some part of their coins, and if you discourage them from doing so you risk the development of stake pools. Centralized stake pools are obviously very bad, and decentralized stake pools (ie. ethereum contracts that randomly choose a subset of their participants to determine the contract’s vote on each block) just get you right back to where you started.

          5. I like my block times at 5-15s, not 60s; once you go beyond money decentralized application developers demand that kind of speed imo.

          The basic principle that a good blockchain protocol should have is that the block headers should have O(1) (or at most O(log(n))) verification complexity; requiring every signer to sign every block is O(n). That’s basically why I like the slasher 2.0 solution (penalize voting on other chains, not double-voting) more.

          user

          Author jaekwon

          Posted at 9:04 pm November 29, 2014.

          1. Light devices can use sampling // challenge response for verifying payments. The amounts are typically smaller, plus each device would use different samples, and it doesn’t contribute to consensus so the risk of sampling is minimal.

          2. Use whatever signature scheme you want for transactions, ed25519 can be limited to block validation.

          3. Not all of those signatures need to be stored, even for secure onboarding of new nodes with the same guaranteed security. More on that later.

          4. I don’t think 100,000 validators is even that much more desirable than 1000, and certainly the incentives for validation don’t need to be as strong as Bitcoin’s today. You just need to ensure that there’s enough incentive for enough coins to be held in bond. In any case, 100,000 validators sounds feasible in the not-too-distant future, but even if it isn’t, you can shard the blockchain into many and have either pegging or floating exchange rates. The latter sounds best, with regionally local validators.

          5. 15s or 60s, they’re both inconvenient for real time applications. There are several potential solutions. You could have the next X proposers sign transactions as a promise to include them in the next proposal. You could shard the blockchain into smaller regional consensus groups, or wait until fiber becomes more common (consensus is faster with faster internet). You could add an extension to the protocol to allow one validator to be part of a 2-2 multisig contract: if they attempt a double-spend, their bonds are destroyed. Lots of details omitted.

          6. It’s O(n) for online validators, and as mentioned previously, much less for storage or onboarding new nodes. All you need is a modification to the validation contract that says, a commit-vote for chain X is a promise that all previous commit-votes had been on this chain, then most signatures can be pruned away without loss of security.

          There are schemes involving noninteractive multisignatures that could work to condense all those commits into one signature. The BLS multisignature scheme looks promising in this regard, though there are subtle nuances in managing performance. I haven’t considered it further because I don’t see a practical need for it. Though it’s still technically O(n), it would be much shorter in bytes. You can have O(1) verification of a block with cryptographic threshold signatures, but this requires bond amounts to be multiples of the same unit && probably requires regeneration of key shares when participation changes — pretty clunky. If you have O(1) block verification with a constant number of signatures (e.g. no cryptographic aggregation) then the scheme is probably not secure, esp. in terms of incentive alignment against double-spending. Consensus requires consensus.

user

Author Andreas Pauley

Posted at 11:03 pm December 7, 2014.

Given the positive conclusion that PoS can be made secure and is more economically efficient than PoW, is Ethereum considering using some version of PoS?
Based on my understanding of the white paper, Ethereum will be using PoW in the form of random contract execution. Why did that approach win over PoS?

Reply
user

Author Mathias Bucher

Posted at 8:47 am December 29, 2014.

Excellent work, Vitalik.

When it comes to this: “Fortunately, for them, the solution is simple: the first time they sign up, and every time they stay offline for a very very long time, they need only get a recent block hash from a friend, a blockchain explorer, or simply their software provider, and paste it into their blockchain client as a “checkpoint”. They will then be able to securely update their view of the current state from there.”

Suggestion: just add a smart jury mechanism to the algorithm instead (at least as an option), and the criticism of “centralization” is gone.
Just as an afterthought: it might also reduce the likelihood of impersonation, or at least the effort required to keep that in check.

    Author bshanks

    Posted at 7:18 pm October 24, 2015.

    What is a “smart jury”?

Author Mathias Bucher

Posted at 8:56 am December 29, 2014.

Daniel:
The required automation is a function of (among other things):
* size of the network
* sophistication of participants
* regulatory requirements (watch out for this when crypto 2.0 goes mainstream)

If automation leads to the same final state (at least probabilistically) and adds an economically bearable cost (judging this is admittedly subjective to a degree), then automation is preferable to requiring an active human choice.

Author paul firth

Posted at 10:53 am September 7, 2015.

I know this is a bit late considering the age of this article, but I’d like to question your initial assumption that the class of problems called ‘nothing at stake’ is the root problem with proof of stake.

I suggest that the root problem with POS is that since producing a block has no ongoing cost, neither does attacking the chain. Once an attacker has acquired the relevant current (or historical) stake, they are free to produce forks or attack in any other way they see fit.

Simply put, POS has a constant attack cost, vs POW where the cost is linear in the number of blocks produced.
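The constant-versus-linear claim above can be stated as a toy cost model. All quantities here are hypothetical and chosen only to make the asymptotics concrete; this is an illustration of the commenter’s argument, not a measured comparison.

```python
def pow_attack_cost(k_blocks, cost_per_block):
    """In POW, producing each attack block burns real resources,
    so cost grows linearly with the number of blocks produced."""
    return k_blocks * cost_per_block

def pos_attack_cost(stake_acquisition_cost, k_blocks):
    """In POS, once the relevant (current or historical) stake is
    acquired, signing extra fork blocks is marginally free --
    the cost is constant in the number of blocks produced."""
    return stake_acquisition_cost

# Doubling the attack length doubles the POW cost...
assert pow_attack_cost(200, 5) == 2 * pow_attack_cost(100, 5)
# ...but leaves the POS cost unchanged.
assert pos_attack_cost(1000, 100) == pos_attack_cost(1000, 10**6)
```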

Author earonesty

Posted at 5:14 pm February 11, 2016.

This “rely on a friend” approach is similar to Peercoin’s “the developer is your friend” solution. I think Ethereum would be best served by moving to hybrid POW/POS (https://eprint.iacr.org/2014/452.pdf) to reduce fees and costs, and to stabilize the network later. Or possibly by moving to hybrid POB/POS (proof of burn/proof of stake) with a similar “random signer” algorithm. POW has the problem of centralization, POS has the problem of botnets, POB has the problem of monopoly. Hybrid systems seem more stable to me. An alternative to the proof of activity paper (above) is to use “proof of burn” blocks as checkpoints every 100 blocks. This mitigates proof of burn’s issues while stabilizing POS.

Author earonesty

Posted at 5:30 pm March 5, 2016.

Proof of Burn solves “nothing at stake”… the burn is at stake. And if you include a protocol for disallowing burn reuse (burns have to be older than 100 blocks but not older than 500 blocks, for example), there’s a big cost to failing to mine blocks… and a big reward for being an honest actor. I think this is cleaner than Slasher, since it doesn’t require actual checkpointing, and it mirrors proof of work, a reasonably tested protocol, for security.
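The burn-reuse window described above is easy to sketch. This is a minimal illustration of the commenter’s example numbers, not any deployed protocol: a burn counts toward block-production eligibility only while its age is inside a fixed window.

```python
# Window bounds taken from the example in the comment above.
MIN_AGE = 100
MAX_AGE = 500

def burn_is_eligible(burn_height, current_height):
    """Return True if a burn recorded at `burn_height` may be used
    to produce a block at `current_height`. Burns that are too fresh
    cannot be used yet; burns past MAX_AGE expire, preventing reuse."""
    age = current_height - burn_height
    return MIN_AGE <= age <= MAX_AGE

assert not burn_is_eligible(1000, 1050)  # too fresh: age 50 < 100
assert burn_is_eligible(1000, 1200)      # inside the window: age 200
assert not burn_is_eligible(1000, 1600)  # expired: age 600 > 500
```

The expiry bound is what makes failing to mine costly: a burn that goes unused inside its window is simply wasted.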
