Ethereum Blog

The Subjectivity / Exploitability Tradeoff

Vitalik Buterin

One of the issues inherent in many kinds of consensus architectures is that although they can be made to be robust against attackers or collusions up to a certain size, if an attacker gets large enough they are still, fundamentally, exploitable. If attackers in a proof of work system have less than 25% of mining power and everyone else is non-colluding and rational, then we can show that proof of work is secure; however, if an attacker is large enough that they can actually succeed, then the attack costs nothing – and other miners actually have the incentive to go along with the attack. SchellingCoin, as we saw, is vulnerable to a so-called P + epsilon attack in the presence of an attacker willing to commit to bribing a large enough amount, and is itself capturable by a majority-controlling attacker in much the same style as proof of work.

One question that we may want to ask is, can we do better than this? Particularly if a pseudonymous cryptocurrency like Bitcoin succeeds, and arguably even if it does not, there doubtlessly exists some shadowy venture capital industry willing to put up the billions of dollars needed to launch such attacks if they can be sure that they can quickly earn a profit from executing them. Hence, what we would like to have is cryptoeconomic mechanisms that are not just stable, in the sense that there is a large margin of minimum “size” that an attacker needs to have, but also unexploitable – although we can never measure and account for all of the extrinsic ways that one can profit from attacking a protocol, we want to at the very least be sure that the protocol presents no intrinsic profit potential from an attack, and ideally a maximally high intrinsic cost.

For some kinds of protocols, there is such a possibility; for example, with proof of stake we can punish double-signing, and even if a hostile fork succeeds the participants in the fork would still lose their deposits (note that to properly accomplish this we need to add an explicit rule that forks that refuse to include evidence of double-signing for some time are to be considered invalid). Unfortunately, for SchellingCoin-style mechanisms as they currently are, there is no such possibility. There is no way to cryptographically tell the difference between a SchellingCoin instance that votes for the temperature in San Francisco being 4000000000’C because it actually is that hot, and an instance that votes for such a temperature because the attacker committed to bribe people to vote that way. Voting-based DAOs, lacking an equivalent of shareholder regulation, are vulnerable to attacks where 51% of participants collude to take all of the DAO’s assets for themselves. So what can we do?

Between Truth and Lies

One of the key properties that all of these mechanisms have is that they can be described as being objective: the protocol’s operation and consensus can be maintained at all times using solely nodes knowing nothing but the full set of data that has been published and the rules of the protocol itself. There is no additional “external information” (eg. recent block hashes from block explorers, details about specific forking events, knowledge of external facts, reputation, etc) that is required in order to deal with the protocol securely. This is in contrast to what we will describe as subjective mechanisms – mechanisms where external information is required to securely interact with them.

When there exist multiple levels of the cryptoeconomic application stack, each level can be objective or subjective separately: Codius allows for subjectively determined scoring of oracles for smart contract validation on top of objective blockchains (as each individual user must decide for themselves whether or not a particular oracle is trustworthy), and Ripple’s decentralized exchange provides objective execution on top of an ultimately subjective blockchain. In general, however, cryptoeconomic protocols so far tend to try to be objective where possible.

Objectivity has often been hailed as one of the primary features of Bitcoin, and indeed it has many benefits. However, at the same time it is also a curse. The fundamental problem is this: as soon as you try to introduce something extra-cryptoeconomic, whether real-world currency prices, temperatures, events, reputation, or even time, from the outside world into the cryptoeconomic world, you are trying to create a link where before there was absolutely none. To see how this is an issue, consider the following two scenarios:

  • The truth is B, and most participants are honestly following the standard protocol through which the contract discovers that the truth is B, but 20% are attackers or accepted a bribe.
  • The truth is A, but 80% of participants are attackers or accepted a bribe to pretend that the truth is B.

From the point of view of the protocol, the two are completely indistinguishable; between truth and lies, the protocol is precisely symmetrical. Hence, epistemic takeovers (the attacker convincing everyone else that they have convinced everyone else to go along with an attack, potentially flipping an equilibrium at zero cost), P + epsilon attacks, profitable 51% attacks from extremely wealthy actors, etc, all begin to enter the picture. Although one might think at first glance that objective systems, with no reliance on any actor using anything but information supplied through the protocol, are easy to analyze, this panoply of issues reveals that to a large extent the exact opposite is the case: objective protocols are vulnerable to takeovers, and potentially zero-cost takeovers, and standard economics and game theory quite simply have very bad tools for analyzing equilibrium flips. The closest thing that we currently have to a science that actually does try to analyze the hardness of equilibrium flips is chaos theory, and it will be an interesting day when crypto-protocols start to become advertised as “chaos-theoretically guaranteed to protect your grandma’s funds”.

Hence, subjectivity. The power behind subjectivity lies in the fact that concepts like manipulation, takeovers and deceit, not detectable or in some cases even definable in pure cryptography, can be understood by the human community surrounding the protocol just fine. To see how subjectivity may work in action, let us jump straight to an example. The example supplied here will define a new, third, hypothetical form of blockchain or DAO governance, which can be used to complement futarchy and democracy: subjectivocracy. Pure subjectivocracy is defined quite simply:

  1. If everyone agrees, go with the unanimous decision.
  2. If there is a disagreement, say between decision A and decision B, split the blockchain/DAO into two forks, where one fork implements decision A and the other implements decision B.

All forks are allowed to exist; it’s left up to the surrounding community to decide which forks they care about. Subjectivocracy is in some sense the ultimate non-coercive form of governance; no one is ever forced to accept a situation where they don’t get their own way, the only catch being that if you have policy preferences that are unpopular then you will end up on a fork where few others are left to interact with you. Perhaps, in some futuristic society where nearly all resources are digital and everything that is material and useful is too-cheap-to-meter, subjectivocracy may become the preferred form of government; but until then the cryptoeconomy seems like a perfect initial use case.
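To make the two rules concrete, here is a minimal Python sketch of pure subjectivocracy; the Mechanism structure and its field names are purely illustrative, not part of any real protocol:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Mechanism:
    """Illustrative state of one blockchain/DAO instance."""
    name: str
    history: tuple  # decisions adopted so far on this fork

def decide(instance, votes):
    """Pure subjectivocracy: unanimity proceeds, disagreement forks.

    votes maps participant -> decision (e.g. "A" or "B"). Returns the
    list of resulting forks; choosing which fork(s) to care about is
    left to the surrounding community, not to the protocol.
    """
    decisions = sorted(set(votes.values()))
    if len(decisions) == 1:
        # Rule 1: everyone agrees, so there is still only one instance.
        return [replace(instance, history=instance.history + (decisions[0],))]
    # Rule 2: one fork per distinct decision, each acting as if it won.
    return [replace(instance,
                    name=f"{instance.name}/{d}",
                    history=instance.history + (d,))
            for d in decisions]

forks = decide(Mechanism("dao", ()), {"alice": "A", "bob": "B"})
print([f.name for f in forks])  # ['dao/A', 'dao/B']
```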

For another example, we can also see how to apply subjectivocracy to SchellingCoin. First, let us define our “objective” version of SchellingCoin for comparison’s sake:

  1. The SchellingCoin mechanism has an associated sub-currency.
  2. Anyone has the ability to “join” the mechanism by purchasing units of the currency and placing them as a security deposit. Weight of participation is proportional to the size of the deposit, as usual.
  3. Anyone has the ability to ask the mechanism a question by paying a fixed fee in that mechanism’s currency.
  4. For a given question, all voters in the mechanism vote either A or B.
  5. Everyone who voted with the majority gets a share of the question fee; everyone who voted against the majority gets nothing.
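As a point of reference, the settlement rule in steps 4-5 fits in a few lines of Python; the names are made up, and the pro-rata fee split is an assumption the description above does not pin down:

```python
def settle(votes, deposits, fee):
    """Objective SchellingCoin settlement (steps 4-5 above).

    votes: voter -> "A" or "B"; deposits: voter -> security deposit.
    The majority is weighted by deposit size; majority voters split the
    question fee pro rata to their deposits, minority voters get 0.
    """
    weight = {"A": 0.0, "B": 0.0}
    for voter, answer in votes.items():
        weight[answer] += deposits[voter]
    majority = "A" if weight["A"] >= weight["B"] else "B"
    pool = sum(deposits[v] for v in votes if votes[v] == majority)
    return {v: fee * deposits[v] / pool if votes[v] == majority else 0.0
            for v in votes}

print(settle({"x": "A", "y": "A", "z": "B"},
             {"x": 60.0, "y": 40.0, "z": 90.0}, fee=10.0))
# {'x': 6.0, 'y': 4.0, 'z': 0.0}: the deposit-weighted majority (A) splits the fee
```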

Note that, as mentioned in the post on P + epsilon attacks, there is a refinement by Paul Sztorc under which minority voters lose some of their coins, and the more “contentious” a question becomes the more coins minority voters lose, right up to the point where at a 51/49 split the minority voters lose all their coins to the majority. This substantially raises the bar for a P + epsilon attack. However, raising the bar for us is not quite good enough; here, we are interested in having no exploitability (once again, we formally define “exploitability” as “the protocol provides intrinsic opportunities for profitable attacks”) at all. So, let us see how subjectivity can help. We will elide unchanged details:

  1. For a given question, all voters in the mechanism vote either A or B.
  2. If everyone agrees, go with the unanimous decision and reward everyone.
  3. If there is a disagreement, split the mechanism into two on-chain forks, where one fork acts as if it chose A, rewarding everyone who voted A, and the other fork acts as if it chose B, rewarding everyone who voted B.

Each copy of the mechanism has its own sub-currency, and can be interacted with separately. It is up to the user to decide which one is more worth asking questions to. The theory is that if a split does occur, the fork specifying the correct answer will have increased stake belonging to truth-tellers, the fork specifying the wrong answer will have increased stake belonging to liars, and so users will prefer to ask questions to the fork where truth-tellers have greater influence.
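A sketch of that forking step, with purely illustrative numbers, shows why the two stakes diverge:

```python
def fork_on_disagreement(deposits, votes, reward):
    """Subjective SchellingCoin: on disagreement, create one fork per
    answer, and on each fork add `reward` to the deposits of the voters
    who gave that fork's answer (the reward size is illustrative)."""
    return {answer: {v: deposits[v] + (reward if votes[v] == answer else 0)
                     for v in deposits}
            for answer in sorted(set(votes.values()))}

forks = fork_on_disagreement({"t1": 50, "t2": 50, "liar": 50},
                             {"t1": "A", "t2": "A", "liar": "B"},
                             reward=10)
print(forks["A"])  # {'t1': 60, 't2': 60, 'liar': 50}: truth-tellers gain weight
print(forks["B"])  # {'t1': 50, 't2': 50, 'liar': 60}: the liar gains weight
```

If the truth is A, users who know that will see that fork A is the one where honest voters hold the larger share, and ask their questions there.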

If you look at this closely, you can see that this is really just a clever formalism for a reputation system. All that the system does is essentially record the votes of all participants, allowing each individual user wishing to ask a question to look at the history of each respondent and then from there choose which group of participants to ask. A very mundane, old-fashioned, and seemingly really not even all that cryptoeconomic approach to solving the problem. Now, where do we go from here?

Moving To Practicality

Pure subjectivocracy, as described above, has two large problems. First, in most practical cases, there are simply far too many decisions to make in order for it to be practical for users to decide which fork they want to be on for every single one. In order to prevent massive cognitive load and storage bloat, it is crucial for the set of subjectively-decided decisions to be as small as possible.

Second, if a particular user does not have a strong belief that a particular decision should be answered in one way or another (or, alternatively, does not know what the correct decision is), then that user will have a hard time figuring out which fork to follow. This issue is particularly strong in the context of a category that can be termed “very stupid users” (VSUs) – think not Homer Simpson, but Homer Simpson’s fridge. Examples include internet-of-things/smart property applications (eg. SUVs), other cryptoeconomic mechanisms (eg. Ethereum contracts, separate blockchains, etc), hardware devices controlled by DAOs, independently operating autonomous agents, etc. In short, machines that have (i) no ability to get updated social information, and (ii) no intelligence beyond the ability to follow a pre-specified protocol. VSUs exist, and it would be nice to have some way of dealing with them.

The first problem, surprisingly enough, is essentially isomorphic to another problem that we all know very well: the blockchain scalability problem. The challenge is exactly the same: we want to have the strength equivalent to all users performing a certain kind of validation on a system, but not require that level of effort to actually be performed every time. And in blockchain scalability we have a known solution: try to use weaker approaches, like randomly selected consensus groups, to solve problems by default, only using full validation as a fallback to be used if an alarm has been raised. Here, we will do a similar thing: try to use traditional governance to resolve relatively non-contentious issues, only using subjectivocracy as a sort of fallback and incentivizer-of-last-resort.

So, let us define yet another version of SchellingCoin:

  1. For a given question, all voters in the mechanism vote either A or B.
  2. Everyone who voted with the majority gets a share of the question fee (which we will call P); everyone who voted against the majority gets nothing. However, deposits are frozen for one hour after voting ends.
  3. A user has the ability to put down a very large deposit (say, 50*P) to “raise the alarm” on a particular question that was already voted on – essentially, a bet saying “this was done wrong”. If this happens, then the mechanism splits into two on-chain forks, with one answer chosen on one fork and the other answer chosen on the other fork.
  4. On the fork where the chosen answer is equal to the original voted answer, the alarm raiser loses the deposit. On the other fork, the alarm raiser gets back a reward of 2x the deposit, paid out from incorrect voters’ deposits. Additionally, the rewards for all other answerers are made more extreme: “correct” answerers get 5*P and “incorrect” answerers lose 10*P.

If we make a maximally generous assumption and assume that, in the event of a split, the incorrect fork quickly falls away and becomes ignored, the (partial) payoff matrix starts to look like this (assuming truth is A):

|                                                 | You vote A | You vote B | You vote against consensus, raise the alarm |
|-------------------------------------------------|------------|------------|----------------------------------------------|
| Others mainly vote A                            | P          | 0          | -50P - 10P = -60P                            |
| Others mainly vote A, N >= 1 others raise alarm | 5P         | -10P       | -10P - (50 / (N + 1)) * P                    |
| Others mainly vote B                            | 0          | P          | 50P + 5P = 55P                               |
| Others mainly vote B, N >= 1 others raise alarm | 5P         | -10P       | 5P + (50 / (N + 1)) * P                      |

The strategy of voting with the consensus and raising the alarm is clearly self-contradictory and silly, so we will omit it for brevity. We can analyze the payoff matrix using a fairly standard repeated-elimination approach:

  1. If others mainly vote B, then the greatest incentive is for you to raise the alarm.
  2. If others mainly vote A, then the greatest incentive is for you to vote A.
  3. Hence, each individual will never vote B. Hence, we know that everyone will vote A, and so everyone’s incentive is to vote A.
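As a sanity check, we can encode the payoff matrix (in units of P, taking N = 1 and the same generous assumption that the incorrect fork dies off) and let a few lines of Python find the best responses; the row and strategy labels are just shorthand for the table above:

```python
N = 1  # number of other alarm-raisers in the "alarmed" rows

# Your payoff in units of P, with truth = A.
payoff = {
    "others vote A":          {"vote A": 1, "vote B": 0,   "dissent + alarm": -60},
    "others vote A, alarmed": {"vote A": 5, "vote B": -10, "dissent + alarm": -10 - 50 / (N + 1)},
    "others vote B":          {"vote A": 0, "vote B": 1,   "dissent + alarm": 55},
    "others vote B, alarmed": {"vote A": 5, "vote B": -10, "dissent + alarm": 5 + 50 / (N + 1)},
}

best = {context: max(row, key=row.get) for context, row in payoff.items()}
for context, strategy in best.items():
    print(f"{context:>24}: best response is {strategy}")

# "vote B" is never a best response, so B-voting is eliminated; in the
# remaining "others vote A" rows the best response is to vote A.
assert "vote B" not in best.values()
```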

Note that, unlike the SchellingCoin game, there is actually a unique equilibrium here, at least if we assume that subjective resolution works correctly. Hence, by relying on what is essentially game theory on the part of the users instead of the voters, we have managed to avoid the rather nasty set of complications involving multi-equilibrium games and instead have a clearer analysis.

Additionally note that the “raise the alarm by making a bet” protocol differs from other approaches to fallback protocols that have been mentioned in previous articles here in the context of scalability; this new mechanism is superior to and cleaner than those other approaches, and can be applied in scalability theory too.

The Public Function of Markets

Now, let us bring our cars, blockchains and autonomous agents back into the fold. The reason why Bitcoin’s objectivity is so valued is to some extent precisely because the objectivity makes it highly amenable to such applications. Thus, if we want to have a protocol that competes in this regard, we need to have a solution for these “very stupid users” among us as well.

Enter markets. The key insight behind Hayek’s particular brand of libertarianism in the 1940s, and Robin Hanson’s invention of futarchy half a century later, is the idea that markets exist not just to match buyers and sellers, but also to provide a public service of information. A prediction market on a datum (eg. GDP, unemployment, etc) reveals what the market thinks the value of that datum will be at some point in the future, and a market on a good or service or token reveals to interested individuals, policymakers and mechanism designers how much the public values that particular good or service or token. Thus, markets can be thought of as a complement to SchellingCoin in that they, like SchellingCoin, are also a window between the digital world and the “real” world – in this case, a window that reveals just how much the real world cares about something.

So, how does this secondary “public function” of markets apply here? In short, the answer is quite simple. Suppose that there exists a SchellingCoin mechanism, of the last type, and after one particular question two forks appear. One fork says that the temperature in San Francisco is 20’C; the other fork says that the temperature is 4000000000’C. As a VSU, what do you see? Well, let’s see what the market sees. On the one hand, you have a fork where the larger share of the internal currency is controlled by truth-tellers. On the other hand, you have a fork where the larger share is controlled by liars. Well, guess which of the two currencies has a higher price on the market…

In cryptoeconomic terms, what happened here? Simply put, the market translated the human intelligence of the intelligent users in what is an ultimately subjective protocol into a pseudo-objective signal that allows the VSUs to join onto the correct fork as well. Note that the protocol itself is not objective; even if the attacker manages to successfully manipulate the market for a brief period of time and massively raise the price of token B, the users are still going to have a higher valuation for token A, and when the manipulator gives up token A will go right back to being the dominant one.
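The fork-choice rule this leaves for the VSU is almost embarrassingly simple; the one real assumption, flagged loudly here, is that the device ships with some price feed it trusts:

```python
def vsu_choose_fork(fork_ids, market_price):
    """What a very stupid user does after a split: follow the fork whose
    sub-currency the market values most. market_price is whatever price
    feed the device was built with -- sourcing that feed is exactly the
    public service the market is providing."""
    return max(fork_ids, key=market_price)

prices = {"fork saying 20'C": 1.00, "fork saying 4000000000'C": 0.03}
print(vsu_choose_fork(prices, prices.get))  # fork saying 20'C
```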

Now, what are the robustness properties of this market against attack? As was brought up in the Hanson/Moldbug debate on futarchy, in the ideal case a market will provide the correct price for a token for as long as the economic weight of the set of honestly participating users exceeds the economic weight of any particular colluding set of attackers. If some attackers bid the price up, an incentive arises for other participants to sell their tokens and for outsiders to come in and short it, in both cases earning an expected profit and at the same time helping to push the price right back down to the correct value. In practice, manipulation pressure does have some effect, but a complete takeover is only possible if the manipulator can outbid everyone else combined. And even if the attacker does succeed, they pay dearly for it, buying up tokens that end up being nearly valueless once the attack ends and the fork with the correct answer reasserts itself as the most valuable fork on the market.

Of course, the above is only a sketch of how quasi-subjective SchellingCoin may work; in reality a number of refinements will be needed to disincentivize asking ambiguous or unethical questions, to handle linear and not just binary bets, and to optimize the non-exploitability property. However, if P + epsilon attacks, profit-seeking 51% attacks, or any other kind of attack ever actually do become a problem with objective SchellingCoin mechanisms, the basic model stands ready as a substitute.

Listening to Markets and Proof of Work

Earlier in this post, and in my original post on SchellingCoin, I posited a sort of isomorphism between SchellingCoin and proof of work – in the original post reasoning that because proof of work works so will SchellingCoin, and above that because SchellingCoin is problematic so is proof of work. Here, let us expand on this isomorphism further in a third direction: if SchellingCoin can be saved through subjectivity, then perhaps so can proof of work.

The key argument is this: proof of work, at the core, can be seen in two different ways. One way of seeing proof of work is as a SchellingCoin contest, an objective protocol where the participants that vote with the majority get rewarded 25 BTC and everyone else gets nothing. The other approach, however, is to see proof of work as a sort of constant ongoing “market” between a token and a resource that can be measured purely objectively: computational power. Proof of work is an infinite opportunity to trade computational power for currency, and the more interest there is in acquiring units in a currency the more work will be done on its blockchain. “Listening” to this market consists simply of verifying and computing the total quantity of work.
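That “listening” requires no social information at all; a sketch, with a dictionary standing in for whatever block structure a real client uses, is just:

```python
def total_work(blocks):
    """Sum the expected work behind a chain of blocks; a block's
    expected hash count is proportional to its difficulty. The dict
    block is an illustrative stand-in for a real client's data type."""
    return sum(block["difficulty"] for block in blocks)

def objective_choice(forks):
    """A purely objective client simply follows the heaviest fork."""
    return max(forks, key=lambda fork: total_work(fork["blocks"]))

a = {"id": "A", "blocks": [{"difficulty": 100}] * 10}
b = {"id": "B", "blocks": [{"difficulty": 100}] * 12}
print(objective_choice([a, b])["id"])  # 'B': more total work wins
```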

Seeing the description in the previous section of how our updated version of SchellingCoin might work, you may have been inclined to propose a similar approach for cryptocurrency, where if a cryptocurrency gets forked one can see the price of both forks on an exchange, and if the exchange prices one fork much more highly, that implies that that fork is the legitimate one. However, such an approach has a problem: determining the validity of a crypto-fiat exchange is subjective, and so the problem is beyond the reach of a VSU. But with proof of work as our “exchange”, we can actually get much further.

Here is the equivalence: exponential subjective scoring. In ESS, the “score” that a client attaches to a fork depends not just on the total work done on the fork, but also on the time at which the fork appeared; forks that come later are explicitly penalized. Hence, the set of always-online users can see that a given fork came later, and therefore that it is a hostile attack, and so they will refuse to mine on it even if its proof of work chain grows to have much more total work done on it. Their incentive to do this is simple: they expect that eventually the attacker will give up, and so they will continue mining and eventually overtake the attacker, making their fork the universally accepted longest one again; hence, mining on the original fork has an expected value of 25 BTC and mining on the attacking fork has an expected value of zero.
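The exact penalty schedule is a free design parameter; here is one illustrative way a client might compute an ESS-style score, assuming it records when it first saw each fork (the half-life and discount curve below are stand-ins, not values the scheme pins down):

```python
import math
import time

def ess_score(fork, first_seen, now=None, half_life=86400.0):
    """Exponential subjective scoring, as a rough sketch.

    The fork's objective score (total difficulty) is discounted by how
    recently this client first heard of the fork, so a high-work chain
    that appears out of nowhere scores below an incumbent the client
    has tracked all along."""
    now = time.time() if now is None else now
    age = now - first_seen[fork["id"]]
    # Discount approaches 1 as the client's knowledge of the fork ages;
    # a freshly revealed attack fork starts near 0.
    discount = 1.0 - math.exp(-age * math.log(2) / half_life)
    return sum(block["difficulty"] for block in fork["blocks"]) * discount

# An attack fork revealed an hour ago needs far more raw work than an
# incumbent known for a week before an ESS client will switch to it.
```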

VSUs that are not online at the time of a fork will simply look at the total proof of work done; this strategy is equivalent to the “listen to the child with the higher price” approach in our version of SchellingCoin. During an attack, such VSUs may of course temporarily be tricked, but eventually the original fork will win and so the attacker will have massively paid for the treachery. Hence, the subjectivity once again makes the mechanism less exploitable.

Conclusion

Altogether, what we see is that subjectivity, far from being an enemy of rigorous analysis, in fact makes many kinds of game-theoretic analysis of cryptoeconomic protocols substantially easier. However, if this kind of subjective algorithm design becomes accepted as the most secure approach, it has far-reaching consequences. First of all, Bitcoin maximalism, or any kind of single-cryptocurrency maximalism generally, cannot survive. Subjective algorithm design inherently requires a kind of loose coupling, where the higher-level mechanism does not actually control anything of value belonging to a lower-level protocol; this condition is necessary in order to allow higher-level mechanism instances to copy themselves.

In fact, in order for the VSU protocol to work, every mechanism would need to contain its own currency which would rise and fall with its perceived utility, and so thousands or even millions of “coins” would need to exist. On the other hand, it may well be possible to enumerate a very specific number of mechanisms that actually need to be subjective – perhaps, basic consensus on block data availability validation and timestamping and consensus on facts, and everything else can be built objectively on top. As is often the case, we have not even begun to see substantial actual attacks take place, and so it may well be over a decade until anything close to a final judgement needs to be made.

Comments

William Mougayar (February 14, 2015, 4:17 pm):

Could there be three decision paths, i.e. A/B/C? And how about learning from the outcome of historical behavior: does that enter into the consensus tree?

AlexPetrenko (February 16, 2015, 2:15 am):

I liked this mechanism where voters are intended to raise an alarm after an incorrect vote, and I see how this can work for 20 vs 4000000 degrees temperature. But what about 20 degrees vs 21 degrees? When the values get close, one cannot be 100% sure that only HIS vote was correct and thus won’t put down his deposit (because he could lose it).

I wonder what a really working SchellingCoin would actually look like 🙂

    Adrian Smith (March 6, 2015, 12:33 am):

    My biggest problem with this is that raising the alarm costs money, so poor actors will be unable to raise the alarm if a SchellingCoin mechanism produces what they believe is a false result.

      lifeofmatt (November 22, 2015, 7:19 am):

      A simple way to solve this could just be that an alarm is raised if the TOTAL security deposits for a certain decision rise above the threshold; thus you could crowdfund an alarm.

Joshua Davis (March 3, 2015, 12:58 am):

I have read your post 3 or 4 times in the past 2 weeks and I still haven’t been able to understand a great deal of it. I keep thinking perhaps I need to come back to it when I’m better able to understand it but to no avail.

First I am not sure why Schelling coin games must be set up to decide things as either A or B and then award all the participants in the correct answer set. Although this is a traditional way to understand how Schelling coin works, it doesn’t seem to me to be the only way it can be implemented. People participate because they are incentivized to do so, and in some ways people are more incentivized by a smaller chance of winning bigger money than a larger chance of winning smaller money. So you could say that if group A wins and you are in group A then 10% of A’s participants will be selected at random to receive a reward which is 10x larger than if all of A’s participants were to receive a reward. Now the type of bribe which must be paid to move such a market should also be substantially larger, since you just increased the cost of the total bribe by 10x? Maybe my thinking on this is flawed somehow, in which case please correct me.

In your example you talk about forking a blockchain to let markets decide which fork has the most value and that value is tied to people telling the truth. You point out that the market becomes a type of reputation system to identify truth tellers. I’m trying to wrap my mind around how this would actually be implemented and I can’t do it. If I need to receive a payment from someone and I know that the currency was just forked into two different currencies to try and test out which one of them the market determines came up with the right answer then what I will do is not accept any payment in that currency till the market resolves this question.

I thought forks in a currency created opportunities to double spend which is why we want to avoid forks in the first place. You write “Each copy of the mechanism has its own sub-currency, and can be interacted with separately.” – To me I can’t wrap my mind around how you could move currency onto and off of an exchange with these sub-currency protocols creating new branches or offshoots of the original currency.

If both sub-currencies are equally valid I should be able to move coins from one sub-currency to exchange A and coins from another sub-currency to exchange B. Also, how can the wallet in my Mist browser know whether to spend sub-currency A or sub-currency B? Understanding the mechanism of this splitting action within a traditional cryptocurrency context gives me a headache.

Your solution to “there are simply far too many decisions to make in order for it to be practical for users to decide which fork they want to be on” is that an alarm system be used to more quickly resolve forks. So how do users validate that the person who sounded the alarm made a wise decision? Sounding the alarm is prohibitively expensive, I understand that, but it doesn’t seem to eliminate the fact that the “market” needs to decide if the person sounding the alarm was justified in doing so. If I need to use my coins and an alarm was just sounded, do I need to stop what I am doing to evaluate a specific instance of when the currency was forked and inform my wallet to use either fork A or fork B? The market is made up of people; these are potentially the same people who participated in the Schelling game initially, but if you need to make a transaction while a fork is in play this might force you to have to decide. The alarm concept seems to simply be a way of requiring an attacker to essentially double down on his attempt to influence the outcome. If he bribed people the first time, why can’t he bribe them for a second round? Seems like the answer is “because the stakes are higher” – OK, why not cut to the chase and make the stakes higher for the first round and avoid a two-round system? Seems like the way you do that is what I described above, but again, after reading your post a number of times I still am missing something.

I want to understand this post but this time I am pretty sure this subject is over my head. Maybe once I have answers to these initial questions I can tackle understanding the rest of your post.

~J

    lifeofmatt (November 22, 2015, 7:32 am):

    “People participate because they are incentivized to do so, and in some ways people are more incentivized by a smaller chance of winning bigger money than a larger chance of winning smaller money. So you could say that if group A wins and you are in group A then 10% of A’s participants will be selected at random to receive a reward which is 10x larger than if all of A’s participants were to receive a reward. Now the type of bribe which must be paid to move such a market should also be substantially larger, since you just increased the cost of the total bribe by 10x?”

    No, because you’ll get the bribe 10x as often as you get the bigger reward. In classic expected utility theory, this is exactly equal. And in modern behavioral economics, this is actually more desirable, because people will choose more regularity/less perceived risk if it’s the same expected payout.

      Joshua Davis (November 30, 2015, 8:47 pm):

      I don’t think you understood what I wrote. Since it’s unclear if someone would end up getting rewarded for participation in a Schelling game, the reward is 10x bigger but allocated to only 10% of participants at random from among the winning players. What you are proposing is that the bribes simply are much smaller than what an honest participant might be able to claim but the payout likelihood is higher but the guarantee is still not 100% that they would be paid a bribe to change their vote which would be bad for them. If you mean that they can be paid a bribe without proving that they voted a certain way, they should take your bribe regardless of how they vote, which would be bad for you.

      Either way you are comparing apples to oranges, which are unlike things. A less than 100% chance but better than a 10% chance of getting an X reward is not equivalent to a guaranteed 10% chance of getting a 10X reward. So no, this is not the same payout, and no, your outcome is not predicted by utility theory.

        lifeofmatt (December 17, 2015, 6:51 pm):

        “What you are proposing is that the bribes simply are much smaller than what an honest participant might be able to claim but the payout likelihood is higher but the guarantee is still not 100% that they would be paid a bribe to change their vote which would be bad for them.”

        You’re right, I don’t understand what you’re saying here. Perhaps we can move to math instead of English, because I have no idea how to parse the above sentence.

        Let’s define the payoff function U as U(x) = (0.1)(10x); from my understanding that’s your proposed payoff function. This is exactly equivalent to U(x) = 1x, which is Vitalik’s original proposed payoff function.

        Are you actually proposing a different payoff function?

          Joshua Davis (December 17, 2015, 7:25 pm):

          Again you are comparing apples and oranges. That, and we don’t speak the same language. To me it doesn’t make sense to comment on what someone else wrote if you don’t understand it. I do it all the time, however, and people quickly correct me and I have to realize that there is something I don’t understand. The basic point you are missing is that not everyone gets a reward in a modified Schelling point protocol.

          You have a certain chance of getting a large reward, and this modification is what makes bribes difficult, because the attacker’s bribe cannot exceed the potential reward; instead an attacker can only offer a reward that is substantially less than the potential reward but with a higher chance of payout.

          I think you are assuming that users would conclude that they could both take the bribe and also find themselves in the majority opinion, which would allow them to receive a bribe as well as a Schelling point reward. Bad assumption.

          How about users take your bribe and then vote their conscience? Or, if the bribe only becomes available after proof of voting is concluded, how can users be assured that they will be paid a bribe? You can generate all kinds of fancy escrows and deposits and smart contracts, but I think from a human perspective the average user won’t care about what you are doing. Most users don’t want to spend hours trying to figure out your complicated Rube Goldberg machine just to know if they are going to benefit from participating, i.e. receive a bribe.

          Bottom line is that your bribe is too small and the modified Schelling point payout is way larger, and participating to receive your smaller bribe requires effort on the part of the user to make a moral and financial evaluation, asking themselves “is this worth it”, and the answer is “I’m hungry” or “I’m running late” or “do I really have time for this?”

          Blockchain theorists tend to deny the practicality of human beings using these technologies in the real world. Once this technology becomes reality you will soon realize that there is utility in keeping things simple and getting rewards that are straightforward and predictable. In the end it’s user psychology that defeats your attack, and not better mathematical proofs that payouts should match some “utility theory”. Users see large 10x rewards and your measly 1x bribe. The user doesn’t think too deeply and ignores your bribe. If you promise a 10% chance of a 10x bribe, a user is going to say “how in the world is this any better than honest behavior?” So then you guarantee a 1x bribe, but the user says “I’d rather have a chance at winning a 10x honest reward”. The imbalance between these large 10x awards and what the attacker pays out plays to human psychology in a way that your math simply doesn’t.

          I mean, if you can’t even convince me of what you are selling, don’t even bother attempting to sell it to the average user. You want me to speak your math language; that right there is like you just waved the white flag. I understand blockchain tech and I don’t get what you are saying. Even attempting to approach a user on a rational level, all of a sudden it’s like “umm… I don’t get it, I think I’ll just vote the normal way.”

          Bottom line: the attacker cannot pay 10x bribes. You are comparing apples to oranges. Your bribe is too complicated for users. Users have already moved on, deciding that it was much simpler just to stick to the standard reward, which was appealing enough to encourage their participation.

          lifeofmatt (December 18, 2015, 8:30 am):

          “Then you guarantee a 1x bribe but the user says ‘I’d rather have a chance at winning a 10x honest reward’.”

          This is something you can test tomorrow. Go out with 6 dollars and a die, and ask people if they’d rather get a guaranteed dollar or roll to get 6 dollars. I suspect more than 60% of people will actually take the first option, although a substantial number would take the second.

          Joshua Davis (December 19, 2015, 12:22 am):

          Completely missed the point.

          Joshua Davis (January 7, 2016, 6:53 am):

          Actually people will choose the option that requires the least effort. Simply tell people that if they roll a die they get a 1 in 6 chance of a 6 dollar reward. If they want 1 dollar instead they simply have to:
          1. write out their name
          2. give the reason why they feel 1 guaranteed dollar is better than a 1 in 6 chance at 6 dollars
          3. give their moral opinion on whether or not rolling dice is gambling
          4. place their response into an envelope

          You then put the die in one hand and a pen and a form pad in the other hand.

          If you post a video on YouTube where you try this with 10 people I will be impressed. Your results will depend on where you do it. If in Vegas, 100% dice rollers. If in the heart of the Bible Belt, not so much. Again, a bribe has the moral connotation of “evil.” It’s more moral effort to go that way and it’s more mental effort to calculate what your financial benefit would be relative to honest “non-evil” participation.

          Again, I will be impressed if you actually run the experiment and video record it, but my first question will be “so what was your demographic?” In other words, your result would be impressive based on effort, but I’d be stubborn anyways (apologies in advance).

          lifeofmatt (January 11, 2016, 3:19 am):

          If someone wanted to bribe, they would make it as easy as possible (and smart contracts allow it to be easy). It would be more like my experiment than yours.

          They’ve already done these types of experiments. If you want to confirm it for yourself you can do that, but I side with the existing research.

Michael (May 4, 2016, 9:18 pm):

In a recent article, I describe a different mechanism by which a cryptocurrency can identify subjective Schelling point resolutions to normative questions: http://arizonalawreview.org/pdf/58-2/58arizlrev359.pdf. I also advance similar ideas about decentralized law in a book (http://www.amazon.com/Predictocracy-Market-Mechanisms-Private-Decision/dp/0300115997), but the book does not discuss cryptocurrencies.
