
The Problem of Censorship

Posted by Vitalik Buterin on June 6, 2015


One of the interesting problems in designing effective blockchain technologies is: how can we ensure that the systems remain censorship-proof? Although lots of work has been done in cryptoeconomics to ensure that blockchains continue pumping out new blocks, and particularly to prevent blocks from being reverted, substantially less attention has been paid to the problem of ensuring that transactions that people want to put into the blockchain will actually get in, even if "the powers that be", at least on that particular blockchain, would prefer otherwise.

Censorship-resistance in decentralized cryptoeconomic systems is not just a matter of making sure Wikileaks donations or Silk Road 5.0 cannot be shut down; it is in fact a necessary property in order to secure the effective operation of a number of different financial protocols. To take a completely uncontroversial, but high-value, example, consider contracts for difference. Suppose that parties A and B both place 100 ETH into a contract betting on the gold/USD price, with the condition that if the price after 30 days is $1100, both get 100 ETH back, but for every $1 that the price rises above $1100 A gets 1 ETH more and B gets 1 ETH less (and for every $1 it falls below $1100, A gets 1 ETH less and B gets 1 ETH more). At the extremes, at $1000 B gets the entire 200 ETH, and at $1200 A gets the entire 200 ETH. In order for this contract to be a useful hedging tool, one more feature is required: if the price hits $1190 or $1010 at any point during those 30 days, the contract should process immediately, allowing both parties to take out their money and enter another contract to maintain the same exposure (the $10 difference is a safety margin, to give the parties the ability to withdraw and enter a new contract without taking a loss).

Now, suppose that the price hits $1195, and B has the ability to censor the network. Then, B can prevent A from triggering the force-liquidation clause. Such a drastic price change likely signals more volatility to come, so perhaps we can expect that when the contract ends there is a 50% chance the price will go back to $1145 and a 50% chance that it will hit $1245. If the price goes back to $1145, then once the contract ends B loses 45 ETH. However, if the price hits $1245, then B loses only 100 ETH even though the price has moved $145, because the contract caps out at $1200; hence, B's expected loss is only 72.5 ETH and not the 95 ETH that it would be if A had been able to trigger the force-liquidation clause. Hence, by preventing A from publishing a transaction to the blockchain at that critical time, B has essentially managed to, in common economic and political parlance, privatize the profits and socialize the losses.
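
To make the arithmetic concrete, here is a toy sketch of B's payout and loss under the contract terms of the example above; the function names are purely illustrative:

```python
# Toy sketch of the contract-for-difference example above: 100 ETH from each
# side, centered at $1100, capped at $1000 and $1200, with force-liquidation
# triggers at $1010 and $1190.

def payout_to_B(gold_price: float) -> float:
    # B receives 100 ETH at $1100, 1 ETH less per $1 above that (and 1 ETH more
    # per $1 below), capped at 0 and 200 ETH at the extremes.
    return min(200.0, max(0.0, 100.0 - (gold_price - 1100.0)))

def loss_to_B(gold_price: float) -> float:
    return 100.0 - payout_to_B(gold_price)

# Price hits $1195 but B censors the force-liquidation transaction; afterwards
# there is a 50% chance of ending at $1145 and a 50% chance of ending at $1245.
expected_loss_with_censorship = 0.5 * loss_to_B(1145) + 0.5 * loss_to_B(1245)
loss_if_liquidated_at_1195 = loss_to_B(1195)

print(expected_loss_with_censorship)   # 72.5 ETH
print(loss_if_liquidated_at_1195)      # 95.0 ETH
```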

Other examples include auditable computation, where the ability to publish evidence of malfeasance within a particular time-frame is crucial to the mechanism's economic security, decentralized exchanges, where censorship allows users to force others to keep their exchange orders open longer than they intended, and Schellingcoin-like protocols, where censors may force a particular answer by censoring all votes that give any other answer. Finally, in systems like Tendermint, consensus participants can use censorship to prevent other validators from joining the consensus pool, thereby cementing the power of their collusion. Hence, all things taken together, anti-censorship is not only about civil liberties; it is about making it harder for consensus participants to engage in large-scale market manipulation conspiracies - a cause which seems high on the regulatory agenda.

What Is The Threat Model?

The first question to ask is: what is the economic model under which we are operating? Who are the censors, how much can they do, and how much does it cost them? We will split this up into two cases. In the first case, the censors are not powerful enough to independently block transactions; in the Tendermint case, this means the censors hold less than 33% of all validator positions, in which case they can certainly keep transactions out of their own blocks, but those transactions would simply make it into the next block produced by a validator that does not censor them, and that block would still get its requisite 67% of signatures from the other nodes. In the second case, the censors are powerful enough; in the Bitcoin case, we can think of the top five mining firms and data centers colluding, and in the Tendermint case a group of very large stakeholders.

This may seem like a silly scenario to worry about - after all, many have argued that cryptoeconomic systems rely on a security assumption that such a large group of consensus participants cannot collude, and if they can then we have already lost. However, in those cases, we actually have a secondary defense: such a collusion would destroy the underlying ecosystem and currency, and thus be highly unprofitable to the parties involved. This argument is not perfect; we know that with bribe attacks it's possible for an attacker to set up a collusion where non-participation is a public good, and so all parties will participate even if it is collectively irrational for them, but it nevertheless does set up a powerful defense against one of the more important collusion vectors.

With history reversion (ie. 51% attacks), it's clear why carrying out such an attack would destroy the ecosystem: it undermines literally the only guarantee that makes blockchains a single bit more useful than BitTorrent. With censorship, however, it is not nearly as clear that the same situation applies. One can conceivably imagine a scenario where a large group of stakeholders collude to first block specific highly undesirable types of transactions (eg. child porn, to use a popular boogeyman of censors and of civil liberties activists complaining about censors alike), and then expand the apparatus over time until eventually it gets into the hands of some enterprising young hotshots who promptly decide they can make a few billion dollars through the cryptoeconomic equivalent of LIBOR manipulation. In the later stages, the censorship may even be done in such a careful and selective way that it can be plausibly denied or even go undetected.

Given the results of Byzantine fault tolerance theory, we know that there is no way to absolutely prevent a collusion controlling more than 33% of the consensus process from doing any of these things. However, what we can try to do is one of two things:

  1. Make censorship costly.
  2. Make it impossible to censor specific things without censoring absolutely everything, or at least without shutting down a very large portion of the features of the protocol entirely.

Now, let us look at some specific ways in which we can do each one.

Cost

The first way to discourage censorship is a simple one: make it unprofitable, or at least expensive. Notably, proof of work actually fails this property: censorship is profitable, since if you censor a block you can (i) take all of its transactions for yourself, and (ii) in the long run take its block reward, as the difficulty adjustment process will reduce difficulty to ensure the block time remains at 10 minutes (or 15 seconds, or whatever) despite the loss of the miner that has been censored away. Proof of stake protocols are also vulnerable to (i) by default, but because we can keep track of the total number of validators that are supposed to be participating there are specific strategies that we can take in order to make censorship less profitable.

The simplest is to simply penalize everyone for anyone's non-participation. If 100 out of 100 validators sign a block, everyone gets 100% of the reward. But if only 99 validators sign, then everyone gets 99% of the reward. Additionally, if a block is skipped, everyone can be slightly penalized for that as well. This has two sets of consequences. First, censoring blocks produced by other parties will cost the censors. Second, the protocol can be designed in such a way that if censorship happens, altruists (ie. default software clients) can refuse to sign the censoring blocks, and thus inflict on the censors an additional expense. Of course, some degree of altruism is required for this kind of cost strategy to have any effect - if no one was altruistic, then everyone would simply anticipate being censored and not include any undesirable transactions in the first place, but given that assumption it does add substantial costs.
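
As a rough illustration of this collective-penalty idea (a sketch with assumed numbers, not actual protocol constants), the per-validator reward might be computed as follows:

```python
# Rough sketch of the collective reward/penalty scheme described above.
# BASE_REWARD and SKIP_PENALTY are assumed values, not protocol constants.

BASE_REWARD = 1.0     # per-validator reward for a block signed by all validators
SKIP_PENALTY = 0.05   # additional penalty applied to everyone when a block is skipped

def per_validator_reward(signatures: int, total_validators: int, block_skipped: bool) -> float:
    # Everyone's reward scales with the fraction of validators that signed, so
    # censoring other validators' participation is costly for the censors too.
    reward = BASE_REWARD * signatures / total_validators
    if block_skipped:
        reward -= SKIP_PENALTY
    return reward

print(per_validator_reward(100, 100, False))  # 1.00 -- everyone signed
print(per_validator_reward(99, 100, False))   # 0.99 -- one validator excluded
print(per_validator_reward(97, 100, True))    # 0.92 -- censorship plus a skipped block
```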

Timelock consensus

As for the second approach, there are two primary strategies that can be undertaken. The first is to use timelock puzzles, a kind of encryption where a piece of data takes a particular amount of time to decrypt and which cannot be sped up via parallelization. The typical approach to timelock puzzles is to use modular exponentiation; the basic underlying idea is to take a transaction d and generate an encrypted value c with the property:

c^(2^n) mod pq = d

If you know p and q, then computing c from d and d from c are both easy; use the Chinese remainder theorem to decompose the problem into:

c^(2^n) mod p = d mod p
c^(2^n) mod q = d mod q

And then use Fermat's little theorem to further decompose into:

c^(2^n mod (p-1)) mod p = d mod p
c^(2^n mod (q-1)) mod q = d mod q

Which can be done in a paltry log(n) steps using two rounds of the square-and-multiply algorithm, one for the inner modular exponent and one for the outer modular exponent. One can use the extended Euclidean algorithm to compute modular inverses in order to run this calculation backwards. Lacking p and q, however, someone would need to repeatedly square c, n times in sequence, in order to get the result - and, very importantly, the process cannot be parallelized, so it would take just as long for someone with one computer as it would for someone with a thousand. Hence, a transaction-sending protocol can be constructed as follows:

  1. Sender creates transaction t
  2. Sender encrypts t using p and q to get c, and sends c and pq to a validator alongside a zero-knowledge proof that the values were produced correctly.
  3. The validator includes c and pq into the blockchain
  4. There is a protocol rule that the validator must submit the correct original transaction t into the blockchain within 24 hours, or else risk losing a large security deposit.

Honest validators would be willing to participate because they know that they will be able to decrypt the value in time, but they have no idea what they are including into the blockchain until it is too late. Under normal circumstances, the sender will also submit t into the blockchain themselves as soon as c is included simply to speed up transaction processing, but if the validators are malicious they will be required to submit it themselves within 24 hours in any case. One can even make the process more extreme: a block is not valid if there remain c values from more than 24 hours ago that have not yet been included.
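
Here is a minimal, toy-parameter sketch of such a timelock puzzle. It follows the classic Rivest-Shamir-Wagner construction that the modular-exponentiation approach above is based on: rather than taking roots of the transaction directly, the sender masks it with a value that they can compute quickly using p and q, but that anyone else can only recover through n sequential squarings. The zero-knowledge proof of correctness and the 24-hour deposit rule are omitted, and all parameters and names are illustrative:

```python
# Toy sketch of the timelock puzzle: the sender, knowing p and q, prepares the
# puzzle cheaply by reducing the exponent mod phi(pq); a validator without the
# factorization must perform n sequential, non-parallelizable squarings.
import hashlib
import math
import secrets

p, q = 8191, 131071            # toy-sized secret primes; real ones would be enormous
N = p * q                      # the published modulus pq
phi = (p - 1) * (q - 1)
n = 200_000                    # number of squarings required; tunes the time delay

def make_puzzle(t: bytes):
    """Sender side: hide transaction t so that opening it takes ~n squarings."""
    assert len(t) <= 32                        # toy padding only handles short payloads
    a = secrets.randbelow(N - 2) + 2
    while math.gcd(a, N) != 1:                 # ensure Euler's theorem applies
        a = secrets.randbelow(N - 2) + 2
    b = pow(a, pow(2, n, phi), N)              # fast: exponent reduced mod phi first
    key = hashlib.sha256(str(b).encode()).digest()
    c = bytes(x ^ y for x, y in zip(t.ljust(32, b"\0"), key))
    return a, c                                # published alongside N (= pq)

def solve_puzzle(a: int, c: bytes) -> bytes:
    """Validator side: without p and q, recover b by n sequential squarings."""
    b = a
    for _ in range(n):
        b = pow(b, 2, N)                       # cannot be parallelized or shortcut
    key = hashlib.sha256(str(b).encode()).digest()
    return bytes(x ^ y for x, y in zip(c, key)).rstrip(b"\0")

a, c = make_puzzle(b"send 10 ETH to 0xabc...")
assert solve_puzzle(a, c) == b"send 10 ETH to 0xabc..."
```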

This approach has the advantage that a gradual introduction of censorship is outright impossible; it's either all or nothing. However, even the "all" is still not that much. The simplest way to get around the mechanism is for validators to simply collude and start requiring senders to send t, p and q alongside c, together with a zero-knowledge proof that all the values are correct. It would be a highly obvious and blatant move, but all in all not a very expensive one. An additional problem with the scheme is that it is highly unnatural, requiring a substantial expenditure of computing power (not nearly as much as proof of work, but nevertheless an hour's worth of computing time on a single core) and slightly non-standard cryptography. Hence, one question is, is there some way in which we can do better?

For a simple transaction processing system, the answer is likely no, barring improved versions of timelock that rely on network latency rather than computing power, perhaps in the spirit of Andrew Miller's nonoutsourceable puzzles. For a Turing-complete object model, however, we do have some rather interesting alternatives.

A key tool in our arsenal is the halting problem: given a computer program, the only absolutely reliable way to determine what it will do after a number of steps of execution is to actually run it for that long (note: the original formulation asks only whether the program will halt, but the inherent impossibility can be generalized to very many types of output and intermediate behavior).

In the context of Ethereum, this opens up a particular denial-of-service attack vector: if a censor wishes to block transactions that have an undesirable effect (eg. sending messages to or from a particular address), then that effect could appear only after running for millions of computational steps, and so the censor would need to process every transaction and discard the ones that they want censored. Normally, this is not a problem for Ethereum: as long as a transaction's signature is correct, the transaction is well-formatted and there is enough ether to pay for it, the transaction is guaranteed to be valid and includable into the blockchain, and the including miner is guaranteed to get a reward proportional to the amount of computation that the transaction is allowed to take up. Here, however, the censor is introducing an additional artificial validity condition, and one that cannot be verified nearly so "safely".

However, we cannot immediately assume that this denial-of-service vulnerability will be fatal: it only takes perhaps a tenth of a second to verify a maximally sized transaction, and one certainly can overcome attacks of that size. Hence, we need to go a step further, and introduce an upcoming Ethereum 1.1 feature: events. Events are a feature that allows a contract to create a kind of delayed message that is only played at some prespecified block in the future. Once an event is made, any block at the height at which the event is supposed to mature must play the event in order to be valid. Hence, transaction senders can be clever, and create a hundred transactions that create a hundred events, none of which does anything on its own, but all of which together create an event that accomplishes some particular action that the censors do not want.

Even then, censors producing their blocks can still try to simulate a series of empty blocks following the block they are producing, to see if the sequence of events that they are generating will lead to any undesirable consequence. However, transaction senders can make life much harder for censors still: they can create sets of transactions that create events that don't by themselves do anything, but do lead to the sender's desired consequence in combination with some other transaction that happens regularly (eg. Bloomberg publishing some data feed into their blockchain contract). Relying on block timestamps or other unpredictable block data is another possibility. Note that this also makes it much harder to enact another defense against these anti-censorship strategies: requiring transaction senders themselves to produce a zero-knowledge proof that their transactions bear no undesirable intent.

To expand the functionality of this scheme, we can also add another protocol feature: create a specialized address where messages sent to that address are played as transactions. The messages would contain the transaction data in some form (eg. each message specifies one byte); after a few hundred blocks, events would trigger to combine the data together, and the combined data would then have to be immediately played as a regular transaction; once the initial transactions are in, there is no way around it. This would basically ensure that everything that can be done by sending transactions (the primary input of the system) can be done through this kind of covert latent message scheme.
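
Purely as an illustration of this covert latent-message idea (the message format and combining step here are hypothetical, not an existing Ethereum feature), the splitting and eventual reassembly might look like this:

```python
# Hypothetical illustration of the covert latent-message scheme: the transaction
# is split into one-byte messages sent over many blocks, and only once every
# piece is in does the combining step reconstruct and play it. The names here
# are illustrative and do not correspond to any real Ethereum API.

def split_into_messages(tx_bytes: bytes):
    # One innocuous-looking message per byte; each records its position so the
    # messages can be sent in any order, from any number of accounts.
    return [{"index": i, "byte": b} for i, b in enumerate(tx_bytes)]

def combine_messages(messages):
    # The step the protocol would perform a few hundred blocks later: reassemble
    # the original transaction from the accumulated one-byte messages.
    ordered = sorted(messages, key=lambda m: m["index"])
    return bytes(m["byte"] for m in ordered)

tx = b"release_funds(0xdef...)"
pieces = split_into_messages(tx)       # each piece reveals nothing by itself
assert combine_messages(pieces) == tx  # once all pieces are in, the play-back is forced
```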

Hence, we can see that blocking such circumventions completely and absolutely will very likely be impossible; rather, it will likely be a constant two-sided war of heuristics versus heuristics in which neither side has a permanent upper hand. We may see the development of centralized firms whose sole purpose is to accept any transaction and find some way to "sneak it in" to the blockchain in exchange for a fee, and these firms would consistently update their algorithms in response to the updated algorithms of the parties trying to block their previous algorithms. Perhaps, this is the best that we can do.

Anti-censorship and Finality

It is important to note that the above by itself does not prove that censorship is extremely expensive all on its own. Rather, it shows that, if developers take care to add certain features into the blockchain protocol, censorship can be made as hard as reversion. This still leaves the question of how difficult reversion is in the first place. A lot of earlier consensus protocols, including proof of work and naive versions of proof of stake, do not make small-depth reversion very difficult; hence, if it takes a hundred blocks to realize that an undesirable transaction has successfully entered the system, it would be a major inconvenience, but the validators would be able to discard the old blockchain and create a new one, with all of the transactions from the old chain included in order, so as to avoid inconveniencing anyone else (although anyone that was using the blockchain as a source of randomness would unfortunately be out of luck). Newer protocols like Tendermint, however, use security deposits to make reverting even one block almost impossible, and so do not run into this problem; if you can get the delayed events into the blockchain at all, you've already won.

This, incidentally, is an important case study of the importance of "bribe attacks" as a theoretical concern in cryptoeconomics: even though literal bribes may in many cases be unrealistic, external incentive adjustments can come from any source. If one can prove that blockchains are extremely expensive to revert, then one can be assured that they will be extremely expensive to revert for any purpose, including attacker bribes and external desires to revert transactions for some particular purpose.
