Scilla Design Story Piece by Piece: Part 1 (Why do we need a new language?)


On May 23, 2018, we demonstrated a Crowdfunding smart contract written in Scilla — a smart contract language that we are developing for Zilliqa. Scilla has been designed as a principled language with smart contract safety in mind.

Scilla imposes a structure on smart contracts that will make applications running on Zilliqa less vulnerable to attacks by eliminating certain known vulnerabilities directly at the language-level. Furthermore, the principled structure of Scilla will make applications inherently more secure.

In order to present the design rationale behind Scilla and its “safety” features, we hereby start a series of articles in which we break down Scilla’s design piece by piece.

But, first things first, why do we even need to build a new smart contract language? And what are the problems that we aim to solve?

This first part of the series will answer these questions by presenting one of the main pain points of smart contracts: their “safety”.

Safety is the key

In the field of traditional software development, the usual purpose of designing a new programming language is to make certain tasks easier for developers. For instance, with the advent of object-oriented programming languages, it became easier to reuse parts of code. With scripting languages (such as Python), it became easier to automate the execution of system-level tasks. Languages such as Java made it easier to manage memory, Go made it easier to handle concurrency, and so on.

Smart contract languages, despite being domain-specific, still share some of the design principles of traditional programming languages. However, there are two particular aspects in which they differ from their traditional counterparts.

First, due to the immutable nature of blockchains, smart contracts cannot be updated. Compare this with traditional software, where, if a bug is found, it is possible to fix it and release a new version. Bugs in smart contracts are hard to fix (possible only through a hard fork). The impossibility of updating contracts is a serious limitation considering that smart contract platforms drive an extremely large blockchain-based economy. Ethereum alone has a market capitalization of around USD 50 billion (as of June 17, 2018).

Second, smart contracts differ from traditional programs in that they have a gas mechanism to pay for computational costs. Hence, while writing a contract, a developer must make sure that every function therein will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems. We will come back to this point later in this series.

As a result, it is extremely important to ensure that a smart contract deployed on a blockchain is bug-free and safe. Safety of smart contracts is particularly critical because they run in a Byzantine environment, where every party involved with a contract can potentially be malicious. For instance, a malicious user interacting with a contract may want to steal money, or a miner may want to order transactions in a block to produce some unexpected outcome. The worst case is when a user calls a contract that in turn calls another contract (such as a library contract) which is under the control of an attacker and hence behaves maliciously.

Smart contract safety (issues) through examples

Let us take a tour of smart contracts to understand some of the safety issues and vulnerabilities that have been found in the past. The goal is to pinpoint the problems that we may want to solve by defining a new language.

The examples presented here are intentionally simple so as to ensure that we don’t get bogged down by any unnecessary complexity and jargon, and yet they capture core issues with some of the real-world incidents like the DAO and Parity hacks.

Example 1: Contracts that leak funds.

There are several ways in which a contract may leak funds. For instance, it could be the case that a contract transfers money to unintended recipients. Or, it may transfer more than the required amount to a legitimate recipient.

The contract below captures the attack on the DAO contract which allowed the attacker to steal around USD 60 million. The contract has a state variable shares. Consider state variables as global variables that can be accessed by any function. shares maintains a map between user addresses and the corresponding shares. Shareholders can invoke withdraw() to take back their share.

contract UnsafeContract1 {
   // Mapping of address and share
   mapping(address => uint) shares;

   // Withdraw a share
   function withdraw() public {
       if (msg.sender.call.value(shares[msg.sender])())
           shares[msg.sender] = 0;
   }
}

UnsafeContract1 behaves in a benign manner if a user (external to the blockchain) invokes withdraw(). In this case, the contract sends out a message to transfer the share to the user (via msg.sender.call.value()) and then sets the share to 0 by updating shares in the next line.

The attack manifests when the recipient of the message is a contract (not a user). When the caller contract invokes withdraw(), the callee contract executes msg.sender.call.value() and passes execution control to the caller, which, being a contract in this case, can then call back into withdraw().

Notice that in withdraw(), the caller’s entry in shares is updated to 0 only after if (msg.sender.call.value()) has terminated. By calling back into withdraw(), the malicious contract keeps execution at the if() instruction, so the update to shares never happens before the next transfer. This allows the malicious contract to withdraw its share multiple times until the provided gas is consumed.

If the recipient of the message had been a user (not a contract), then it would not have been able to call back into the contract and hence the execution would have ended as expected.

This attack is known as a re-entrancy attack.
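The control flow of the attack can be re-traced outside Solidity. The following Python sketch is a simplified model (all names are hypothetical, and real EVM semantics differ, e.g., in how gas bounds the recursion): the external transfer hands control to the recipient before the share is zeroed, so a malicious callback can re-enter withdraw() and drain more than its share.

```python
# Simplified model of UnsafeContract1: the transfer (on_receive) runs
# *before* the sender's share is zeroed, as in msg.sender.call.value().
class UnsafeContract1:
    def __init__(self, shares):
        self.shares = dict(shares)   # address -> share
        self.paid_out = 0            # total funds the contract has sent

    def withdraw(self, sender, on_receive):
        amount = self.shares[sender]
        if amount > 0:
            self.paid_out += amount
            on_receive()             # control passes to the caller here
        self.shares[sender] = 0      # too late: re-entrant calls already ran

def attack(contract, attacker, rounds):
    # `rounds` stands in for the gas limit that eventually stops the loop.
    calls = {"left": rounds}
    def malicious_callback():
        if calls["left"] > 0:
            calls["left"] -= 1
            contract.withdraw(attacker, malicious_callback)
    contract.withdraw(attacker, malicious_callback)
```

In this model, an attacker holding a share of 10 who re-enters three times is paid out four times over, even though every stack frame eventually zeroes the entry on its way out.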

Example 2: Unexpected change to critical state variables.

Contracts have state variables. Some of them can be instantiated at contract creation time by the creator of the contract and cannot be changed later; those that are not instantiated at creation time can be modified afterwards. Since such variables can change, proper care needs to be taken whenever a variable is critical for the safety of the contract.

The contract below mimics an attack on the Parity multi-signature wallet, in which an attacker was able to steal USD 31 million. Note that the actual attack was more involved, but this example captures its essence.

The contract has an owner and it gets initialized not at the creation time but later via the function initowner(). Once initialized, the owner can invoke transferTo() and transfer a specified _amount from the contract to a given _recipient.

contract UnsafeContract2 {
   /* The contract owner */
   address owner;

   /* This function sets the owner of the contract */
   function initowner(address _owner) { owner = _owner; }

   /* Function to transfer the funds in the contract */
   function transferTo(uint _amount, address _recipient) {
       if (msg.sender == owner)
           _recipient.transfer(_amount);
   }
}

Clearly, owner is a critical state variable and should be instantiated in a proper way. Unfortunately, initowner() allows any (malicious) user to invoke the function and set the owner to any address of her choice. Once owner is set, it becomes possible for the owner to steal funds and transfer them to any recipient of her choice.
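The takeover takes only two calls. The following Python sketch (again a simplified model with hypothetical names, not actual Solidity semantics) shows that because initowner() performs no access check, any caller can claim ownership and then drain the balance:

```python
# Simplified model of UnsafeContract2: initowner() has no access control.
class UnsafeContract2:
    def __init__(self, balance):
        self.owner = None
        self.balance = balance

    def initowner(self, sender, new_owner):
        # No check on `sender`: *anyone* may set the owner.
        self.owner = new_owner

    def transfer_to(self, sender, amount, recipient):
        # Only the owner may transfer funds -- but the owner is attacker-chosen.
        if sender == self.owner:
            self.balance -= amount
            return (recipient, amount)
        return None

wallet = UnsafeContract2(balance=100)
wallet.initowner(sender="attacker", new_owner="attacker")   # takeover
transfer = wallet.transfer_to("attacker", 100, "attacker_account")
```

After these two calls the model wallet is empty, with the entire balance sent to an attacker-controlled address.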

Example 3: Killing a contract.

Extending the previous example, once the critical variable owner is set, it may become possible for an attacker to invoke functions other than transferTo(). For instance, if the contract provides an interface for owner to kill the contract, then the entire code may be removed along with all of its state variables.

contract UnsafeContract3 {
   /* The contract owner */
   address owner;

   /* This function sets the owner of the contract */
   function initowner(address _owner) { owner = _owner; }

   /* Function to destroy the contract */
   function kill() { if (msg.sender == owner) suicide(owner); }
}

A similar attack was recently mounted on Parity, and as a result, the attacker (or possibly just a curious newbie) was able to freeze around USD 150 million.

Time for introspection

Now that we have seen some of the infamous bugs in smart contracts, the question that we should ask ourselves is the following: Are these bugs any different from traditional bugs, and how can we prevent them from appearing again in the future?

Most of the bugs that we have seen are not necessarily specific to smart contracts and have been common in traditional software development. Hence, one may very well apply some of the previously acquired knowledge to handle such bugs. Let us see how well some of those ideas can fix these bugs.

For UnsafeContract1, the issue is that shares gets updated after msg.sender.call.value(). One possible solution to prevent the attack is to follow what is called the check-effect-communicate design pattern. This design pattern requires the contract to first read the amount that needs to be transferred (and perform any other local checks), then update the state variable, and finally communicate with the outside world. The contract below is a fix for UnsafeContract1.

contract FixedContract1 {
   // Mapping of address and amount
   mapping(address => uint) shares;

   // Withdraw a share
   function withdraw() public {
      uint share = shares[msg.sender];
      shares[msg.sender] = 0;
      msg.sender.transfer(share);
   }
}

Now, even if a malicious contract attempts to call back, it cannot withdraw its share twice: its entry in shares is set to 0 before the transfer takes place.
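Replaying the earlier re-entrancy attempt against this ordering shows why the fix works. The Python sketch below (a simplified model, same hypothetical names as before, not real EVM semantics) performs the check, then the effect, then the communication:

```python
# Simplified model of FixedContract1: the state update happens *before*
# the transfer, so a re-entrant call sees a zeroed share.
class FixedContract1:
    def __init__(self, shares):
        self.shares = dict(shares)   # address -> share
        self.paid_out = 0            # total funds the contract has sent

    def withdraw(self, sender, on_receive):
        share = self.shares[sender]  # 1. check: read the share
        self.shares[sender] = 0      # 2. effect: update state first
        if share > 0:
            self.paid_out += share
            on_receive()             # 3. communicate: transfer last
```

Running the same malicious callback against this model pays out the attacker's share exactly once: the re-entrant call reads a share of 0 and returns without transferring anything.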

But what happens when developers do not follow this security guideline? Can we design a new language that imposes the guideline at the language level, so that developers simply cannot make the same mistake again? Such a language should make a clean separation between computations (mathematical operations and state changes) and communications with the outside world. In other words, the language structure should disentangle contract-specific effects (e.g., functions) from blockchain-wide interactions (i.e., sending/receiving funds and messages), thus providing a clean mechanism for reasoning about potential contract compositions and invariants.

Without a proper separation, a complex interleaving of computation and communication may leave the contract in a “dirty” state that can be exploited by malicious parties. Such a separation is best defined at the language level.

For UnsafeContract2 and UnsafeContract3, the language should again make a separation, this time between mutable and immutable state variables. A new language can provide a clear distinction allowing users to separate their mutable and immutable variables. Immutable variables can only be instantiated at contract creation time and cannot be modified at any later point. Critical variables should be declared immutable.
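The idea of immutable contract parameters can be illustrated with one more model. In this Python sketch (hypothetical, merely illustrating the language-level distinction rather than any particular smart contract language), owner is fixed once at creation time and any later write is rejected:

```python
# Model of a contract whose owner is an *immutable* parameter:
# instantiated once at creation time, never writable afterwards.
class SafeWalletModel:
    def __init__(self, owner, balance):
        self._owner = owner          # set exactly once, at creation
        self.balance = balance       # ordinary mutable state

    @property
    def owner(self):
        # Read-only view: there is no setter, so `wallet.owner = ...`
        # raises AttributeError.
        return self._owner

    def transfer_to(self, sender, amount, recipient):
        if sender == self._owner:
            self.balance -= amount
            return (recipient, amount)
        return None

wallet = SafeWalletModel(owner="alice", balance=100)
try:
    wallet.owner = "attacker"        # takeover attempt...
except AttributeError:
    pass                             # ...rejected: owner is immutable
```

With ownership fixed at creation, the initowner()-style takeover simply has no entry point: the attacker can neither reassign owner nor move funds.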

Conclusion

Smart contracts as we see them today are not very complex, and yet a simple bug, say in the ordering of two instructions as in UnsafeContract1, may lead to great financial losses. Moreover, because smart contracts run in a Byzantine environment, it is difficult for a developer to reason about the correctness and safety of a contract, as attacks are hard to predict at the deployment stage.

With this in mind, we certainly need a better and safer smart contract language: one that eases developers’ task of reasoning about a contract, and that is principled and structured in a way that helps eliminate certain known bugs directly at the language level and makes contracts inherently safer.

But what do we mean by “safety” of smart contracts? What are the safety properties that we want to guarantee, and how should the language be designed in order to ensure those properties? How should the separation between the communication and computation aspects be defined in the language? This will be the topic of the next article. Stay tuned!

Here’s how you can follow our progress — we would love to have you join our community of technology, financial services, and crypto enthusiasts!

➤ Follow us on Twitter,

➤ Subscribe to our Blog,

➤ Ask us questions on Slack, Gitter or Reddit.