
How to Audit a Smart Contract

A comprehensive guide on measuring Smart Contract security.

While the rise of blockchain presents a unique opportunity for distributed consensus, Smart Contract applications come with unique security concerns that have historically led to millions of USD in losses, such as the infamous DAO Attack. To mitigate these risks, it is necessary to conduct security audits on Smart Contracts. In this guide, we thoroughly detail various Smart Contract attacks and the auditing process one must undertake to assure security, consistent with the latest developments and drawing on a variety of credible sources.

A Smart Contract audit is fundamentally the same as a regular code audit: meticulously investigating code to find security flaws and vulnerabilities before the code is publicly deployed. It’s like testing a bridge before it’s opened to the public; in both cases, the builders are responsible for the security and safety of their product. Because the blockchain is essentially a replicated, append-only chain of blocks, each committing to its transactions via a Merkle tree, and is therefore effectively immutable, and because Smart Contracts are self-executing, it is vital to find any vulnerabilities in the code before launch.


What Kinds of Smart Contract Attacks Are There?

This section discusses known attacks you should be aware of, followed by specific steps you can take to find such attacks in an audit.

Race Conditions are a general class of system behavior in which events do not occur in the intended order. In Smart Contracts, Race Conditions may arise from calls to external contracts that take over control flow.

Reentrancy describes one form of Race Condition in which a function is called again before its first invocation has completed. For example, one of the DAO attack bugs resulted from different invocations of functions interacting in unintended ways. The key mitigation is to prevent concurrent calls into certain functions, and to scrutinize external calls in particular.

Example of a contract with the reentrancy bug:

contract ReentrancyVulnerability {

    function withdraw() {
        uint transferAmount = 10 ether;
        if (!msg.sender.call.value(transferAmount)()) throw; // external call: hands control to msg.sender
    }

    function deposit() payable {} // make contract payable so it can receive ether
}

Above, the call.value() line (marked with a comment) is an external call, which should generally be avoided. The function withdraw() transfers 10 ether to msg.sender. So far so good. However, the receiver may call the function multiple times with a recursive send exploit, as seen below.

Say the above contract is in a ReentrancyVulnerability.sol file and the attacker creates a Hacker.sol file with a Hacker contract, exploiting the external call. Such a contract may be used to “pen test” potential reentrancy vulnerabilities:

contract Hacker {

    ReentrancyVulnerability r;
    uint public count;

    event LogFallback(uint c, uint balance);

    // constructor: point the attacker at the vulnerable contract
    function Hacker(address vulnerable) {
        r = ReentrancyVulnerability(vulnerable);
    }

    function attack() {
        r.withdraw();
    }

    // fallback function: executes every time this contract receives ether
    function () payable {
        count++;
        LogFallback(count, this.balance);
        if (count < 10) {
            r.withdraw(); // re-enter before the original withdraw() has finished
        }
    }
}

In Hacker.sol, two primary functions are defined alongside the constructor. The first, attack(), calls the ReentrancyVulnerability contract’s withdraw() method, which sends 10 ether to Hacker and thereby triggers the second: function () payable {}, a fallback function that runs whenever the contract receives ether.

The event LogFallback(uint c, uint balance) is emitted each time the fallback function runs. The fallback increments a counter and, through an if statement acting as a loop guard, calls withdraw() on the vulnerable contract again while the counter is below 10, capping the loop so the transaction does not run out of gas and revert. The re-entry works because the original withdraw() call has not yet finished when the fallback executes, so each nested call sends another 10 ether until the counter condition is met.

Reentrancy is the symptom of a problem, not its root, so analyze where external calls are made rather than trying to block re-entry at a single function. In practice, this means completing all internal work, especially state changes, before calling the external function.

Takeaway: Minimize external calls, or otherwise complete all internal work before making them.
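For illustration, here is a minimal sketch, written in the same pre-0.5 Solidity style as the examples above, of withdraw() reworked along those lines. The balances mapping, the deposit() bookkeeping, and the version pragma are assumptions added for the sketch, not part of the original vulnerable contract.

pragma solidity 0.4.15;

contract ReentrancySafe {

    mapping(address => uint) public balances;

    function deposit() payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() {
        uint transferAmount = balances[msg.sender];
        balances[msg.sender] = 0;                          // effect: zero the balance first
        require(msg.sender.call.value(transferAmount)());  // interaction: external call last
    }
}

Because the balance is zeroed before control is handed to msg.sender, a re-entrant call to withdraw() now transfers nothing.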

Cross-function Race Conditions describe a similar attack in which two functions share the same state, and the same solutions apply. One example is an attacker externally calling transfer() before the user’s balance has been set to 0, even though the withdrawal has already been paid out:

mapping (address => uint) private userBalances;

function transfer(address to, uint amount) {
    if (userBalances[msg.sender] >= amount) {
        userBalances[to] += amount;          // internal balance bookkeeping
        userBalances[msg.sender] -= amount;
    }
}

function withdrawBalance() public {
    uint amountToWithdraw = userBalances[msg.sender];
    require(msg.sender.call.value(amountToWithdraw)()); // external call: the caller's code runs here
    userBalances[msg.sender] = 0;
}

In the code above, the attacker can call transfer() at the point where withdrawBalance() makes its external call. As you can see, userBalances[msg.sender] is only set to 0 afterward, so the attacker can still transfer the tokens even though the withdrawal has already been made.
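One mitigation, mentioned earlier, is to block concurrent calls outright. The sketch below (an illustration in the same Solidity style, not code from the article or from ConsenSys) combines a simple mutex modifier with the update-state-before-call ordering; the contract name and pragma are assumptions.

pragma solidity 0.4.15;

contract MutexGuarded {

    bool private locked;
    mapping(address => uint) private userBalances;

    // mutex: refuse to run while another guarded call is still in progress
    modifier noReentrancy() {
        require(!locked);
        locked = true;
        _;
        locked = false;
    }

    function deposit() payable {
        userBalances[msg.sender] += msg.value;
    }

    function transfer(address to, uint amount) noReentrancy {
        if (userBalances[msg.sender] >= amount) {
            userBalances[to] += amount;
            userBalances[msg.sender] -= amount;
        }
    }

    function withdrawBalance() public noReentrancy {
        uint amountToWithdraw = userBalances[msg.sender];
        userBalances[msg.sender] = 0;  // still update state before the external call
        require(msg.sender.call.value(amountToWithdraw)());
    }
}

With the lock held for the duration of each guarded call, a re-entrant transfer() or withdrawBalance() reverts immediately.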

Transaction-Ordering Dependence (TOD) / Front Running is yet another race condition, this time arising from manipulation of the order of transactions within a block. Because a pending transaction sits in the mempool for a short time, its contents are visible before it is mined, and the order in which transactions are included can be manipulated. Front Running allows one user to benefit from a manipulated transaction order at the expense of another user.

  • One condition that may allow Front Running is Timestamp Dependence, so you must scrutinize every use of the timestamp, especially where transaction time is financially important, such as in a betting contract. The Ethereum timestamp is not tied to a synchronized global clock, and miners can take advantage of this discrepancy.
  • Another condition is Integer Overflow and Underflow, whereby a uint value that reaches its maximum wraps around to zero, and a uint value decremented below zero wraps around to its maximum. Integer overflows and underflows can be detected by tools such as Mythril.

The code example for overflows and underflows is very straightforward:

uint public a = 1;   // illustrative values so the snippet compiles
uint public b = 2;
uint public c = a + b;

An attacker may then subtract for underflow or add for overflow:

function underflow() public {
    c -= 2**256 - 1;   // result wraps around modulo 2**256 instead of failing
}

function overflow() public {
    c += 2**256 - 1;   // result wraps around modulo 2**256 instead of failing
}
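The standard defence is to check every arithmetic operation before accepting its result. Below is a minimal sketch of checked add and subtract helpers, the same idea implemented by the widely used SafeMath library; the contract and function names, and the pragma, are illustrative assumptions.

pragma solidity 0.4.15;

contract CheckedMath {

    uint public c;

    function checkedAdd(uint a, uint b) internal returns (uint result) {
        result = a + b;
        require(result >= a);   // if the sum wrapped past 2**256 - 1, revert
    }

    function checkedSub(uint a, uint b) internal returns (uint result) {
        require(b <= a);        // otherwise the subtraction would wrap around
        result = a - b;
    }

    function increase(uint amount) public {
        c = checkedAdd(c, amount);
    }

    function decrease(uint amount) public {
        c = checkedSub(c, amount);
    }
}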

Another form of attack is enabled by the ability to forcibly send Ether to a contract. An audit should therefore confirm that no contract logic tries to restrict how the contract is funded, or depends on its exact balance, because forcibly sent Ether defeats that logic.

Any code relying on logic like the following is vulnerable to this attack:

require(this.balance > 0); // note that 0 could be any number
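To see why, here is a minimal sketch (an illustration written for this guide, not code from the cited sources) of how an attacker can push ether into any contract regardless of that contract’s logic:

pragma solidity 0.4.15;

contract ForceSend {

    // selfdestruct() transfers this contract's entire balance to the target
    // without invoking the target's fallback function, so no receive-side
    // logic can refuse the ether.
    function attack(address target) payable {
        selfdestruct(target);
    }
}

Since the target’s code never runs during a selfdestruct transfer, any invariant built on the contract’s exact balance can be silently broken.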

To prevent such attacks from being launched successfully against the code you are analyzing, the auditing process should take an engineering approach: rigorous verification grounded in theory and practice, supported by the application of tools.

The Steps for a Full Smart Contract Audit

Vulnerabilities enabling the known attacks above, as well as other bugs and security concerns, can be found with the following auditing process, which takes inspiration from the ConsenSys Best Practices, the HashEx audit framework, and public audits to create a more encompassing structure:

  0. Ensure the Audit Will Be Completed on a Deployed Smart Contract (see “Audit the Deployed Smart Contract, Not GitHub!” in Further Reading): Your audit should be performed on a release candidate (RC), the final Smart Contract stage before public release, as this is what is closest to the end-user product.

  1. Provide A Legal Disclaimer: Note that the purpose of the audit is to foster discussion grounded in security principles, rather than to provide any guarantees.

For example: “The information appearing in this audit is for general discussion purposes only and is not intended to provide legal security guarantees to any individual or entity.”

  2. Explain Who You Are: Explain your authority in the space, or why you can be trusted to conduct a rigorous analysis, and then back it up with a strong audit.
  3. Explain Your Audit Process: Outline the Smart Contract(s) you are auditing and the process you will use, from a security perspective.
  4. Conduct Attack Vulnerability Tests: Analyze whether any of the relevant attacks documented above could be successfully launched against the contract.
  5. Detail Vulnerabilities Found and Concerns: In this step, discuss critical, medium, and low severity vulnerabilities, along with suggestions for fixes. There may be areas that are not immediately vulnerable but are potential points of concern; make note of these as well.
  6. Analyze Contract Complexity: Complexity increases the likelihood of errors, so make note of complex contract logic, non-modularized code, proprietary tools and code, and performance favored over clarity. None of these is necessarily a red flag, but all should be avoided wherever possible.
  7. Analyze Failure Preparation: How would the contract respond in the event of a failure, such as a bug or vulnerability being discovered? Check that the contract can pause and that money at risk would be managed.
  8. Analyze Code Currency: Are all libraries and tools updated to their latest versions? Newer versions often include vulnerability patches, so using older versions is an unnecessary and easily preventable risk.
  9. Analyze Re-used Versus Duplicated Code: Duplicated code from previously deployed, security-proven contracts does not require rigorous analysis. However, re-used code that has not been previously audited must be heavily scrutinized, and should not be used if a well-tested, previously deployed version is available.
  10. Analyze External Calls:
      1. Are State Changes After External Calls Avoided? External calls may take over control flow, so be sure to complete all internal work first.
      2. Are Untrusted Contracts Marked? External contracts should be clearly marked to convey that interactions with their code are potentially unsafe. This includes naming conventions, such as UntrustedSender as opposed to Sender.
      3. Are Errors in External Calls Handled Correctly? Direct contract calls automatically propagate a throw if an exception is encountered, but low-level calls such as send() and call() only return false, so their return values must be checked or the failure will go unnoticed.
      4. Do External Calls Favor Pull Over Push? Make sure external calls are isolated into their own transactions (let recipients withdraw rather than pushing payments to them), to minimize the consequences of an external call failing.
  11. Initial Balance Analysis: Does the code assume that the contract will begin with a zero balance? A contract address can receive wei before the contract is even created, so there should be no initial-balance assumption.
  12. Analyze Security of On-chain Data: Ensure that the time at which certain on-chain data appears is not crucial to the contract’s functionality, as this data is public and the wrong ordering could favor one party over another (such as in a rock-paper-scissors game).
  13. Analyze N-party Contracts: Is it OK if participants drop out and do not return? This possibility must be taken into account.
  14. Solidity Specific (an illustrative sketch of several of these checks follows this list):
      1. Are Invariants Enforced? An assert guard triggers when an assertion fails. assert() should be used for conditions that must always hold, such as assert(this.balance >= totalSupply);
      2. Is Integer Division Conducted? All integer division in Solidity rounds down to the nearest integer. If this is a problem, carry a multiplier through the calculation instead.
      3. What Happens if Ether Is Forcibly Sent? Since ETH can be forcibly sent to an address, take note of any invariant that checks the balance of a contract, and of how forcibly sent ETH would affect that code.
      4. Is tx.origin Used? tx.origin should never be used for authorization: it holds the externally owned account that originated the transaction, so a malicious intermediary contract could call your contract and still pass the check. If tx.origin is used, recommend switching to msg.sender.
      5. Timestamp Dependence? As discussed in the Known Attacks section, the Ethereum timestamp is not tied to a synchronized global clock, and miners can take advantage of this discrepancy, so timestamp dependence should be minimized.
  15. Offer Next Steps: Suggest fixes to the vulnerabilities found and steps moving forward. If these were fixed, would the contract be safe for mainnet use?
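As noted in the Solidity Specific item above, here is a short combined sketch of the checks on invariants, integer division, and tx.origin. The contract, its state variables, and its function names are assumptions made purely for illustration, written in the same pre-0.5 Solidity style as the earlier examples.

pragma solidity 0.4.15;

contract SolidityChecklistExamples {

    mapping(address => uint) public balances;
    uint public totalSupply;
    address public owner = msg.sender;

    // Invariant enforcement: assert() guards a condition that should never be
    // false. Forcibly sent ether can only increase this.balance, so the >=
    // form of this invariant remains safe even under a forced send.
    function checkInvariant() constant returns (bool) {
        assert(this.balance >= totalSupply);
        return true;
    }

    // Integer division rounds down (5 / 2 == 2), so multiply before dividing
    // rather than throwing precision away early (overflow checks omitted).
    function fivePercentOf(uint amount) constant returns (uint) {
        return (amount * 5) / 100;   // not (amount / 100) * 5
    }

    // Authorization: compare msg.sender, never tx.origin, which a malicious
    // intermediary contract could satisfy on a user's behalf.
    function ownerOnlyAction() public {
        require(msg.sender == owner);
        // privileged logic would go here
    }
}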

More Audit and Bug Examples

Here, we’ll draw insights from historical examples of audits and code snippets that you can apply to your own Smart Contract audits. There is a rising number of entities in the Smart Contract auditing space, with frameworks ranging from commentary-focused to testing-focused, each with its own strengths and weaknesses.

For an example of the “unchecked-send” bug, the Hacking, Distributed blog offers this code snippet:

if (gameHasEnded && !(prizePaidOut)) {
    winner.send(1000); // send a prize to the winner
    prizePaidOut = true;
}

As discussed in the auditing steps section, use of send() should always be carefully investigated. In this case, the send() method can fail, allowing the game-winner to be unpaid. Similar vulnerabilities could exist for a use-case such as an auction, where potentially large funds are at risk.

As per the Ethereum documentation, this failure may occur “if the call stack depth is at 1024 (this can always be forced by the caller) and it also fails if the recipient runs out of gas.” The documentation offers the solution to “always check the return value of send, or even better: Use a pattern where the recipient withdraws the money.”

Based on the documentation’s suggestion, Hacking, Distributed offers this solution…

if (gameHasEnded && !(prizePaidOut)) {
    accounts[winner] += 1000;
    accounts[loser] += 10;
    prizePaidOut = true;
}

...

function withdraw(uint amount) {
    if (accounts[msg.sender] >= amount) {
        msg.sender.send(amount);
        accounts[msg.sender] -= amount;
    }
}

… whereby the code is refactored such that a failed send would affect only one party at a time.

The ConsenSys Best Practices framework offers many “good and bad code” examples, which cover known attacks.

pragma solidity ^0.4.4; // bad

pragma solidity 0.4.4; // good

For example, it is noted above that pragmas should be locked to a specific compiler version, to avoid contracts getting deployed using a different version, which may have a greater risk of undiscovered bugs.

uint256 constant private salt = block.timestamp; // warning

Similarly, any code that reads the block timestamp should be a flag for you to scrutinize how the timestamp is subsequently used.
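As an illustration of why this matters (this toy example is ours, not from the sources above), consider a coin-flip contract that derives its payout decision from block.timestamp, a value a miner can nudge by several seconds:

pragma solidity 0.4.15;

contract TimestampCoinFlip {

    function play() payable {
        require(msg.value == 1 ether);
        // BAD: the "coin flip" depends on a miner-influenced value.
        if (block.timestamp % 2 == 0) {
            require(msg.sender.send(this.balance));
        }
    }
}

A miner who plays this game can bias the outcome by adjusting the timestamp of the block that includes their own transaction.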

We encourage you to analyze the many other examples recommended by ConsenSys and BountyOne.

Auditing Tools As Supplements

A thorough audit may include tests alongside documentation and use-cases expressed in terms of user behavior. In that case, Behavior Driven Development (BDD) practices are appropriate: they are comparable to the testing done during Smart Contract development, but with a focus on security rather than functionality.

To use Truffle for auditing Ethereum Smart Contracts, run the standard npm install -g truffle to install the framework, and then truffle init to create the project structure (assuming you have already installed Node.js).

This process focuses on writing tests and executing them on a test network, after importing the contract and any libraries needed to set up the test conditions. You can use the usual assert() or a test framework such as Chai. Finally, build tests around the steps outlined above, such as checking for overflows and underflows, testing the limits of functions, and making sure return values are properly formatted.
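As a minimal sketch of what such a test can look like, the following Truffle Solidity test assumes the vulnerable contract from earlier is saved at contracts/ReentrancyVulnerability.sol and that this test file lives in the project’s test/ directory (the file name, pragma, and the property being checked are illustrative). Truffle picks up test contracts whose names start with Test when you run truffle test.

pragma solidity 0.4.15;

import "truffle/Assert.sol";
import "../contracts/ReentrancyVulnerability.sol";

contract TestReentrancyVulnerability {

    function testFreshContractStartsEmpty() public {
        ReentrancyVulnerability target = new ReentrancyVulnerability();
        // A contract deployed this way holds no ether yet; see the
        // initial-balance audit step above for why this cannot be assumed
        // for pre-computed addresses in general.
        Assert.equal(address(target).balance, 0, "new contract should start with a zero balance");
    }
}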

Many Decentralized Applications, which are centered around Smart Contracts, have adopted a variety of software tools to aid the auditing process. These tools, such as automated code checkers for known vulnerabilities, may be used as a supplement, but should not replace the formal auditing process. One option, as mentioned previously, is Mythril, which can detect uint overflows and underflows. Another tool is Etherscrape, which scrapes live Ethereum contracts for reentrancy bugs such as unchecked uses of send(). There are also decentralized auditing platforms like BountyOne that bring together companies and freelance auditors when tools aren’t enough.

Providing Next Steps

Depending on the severity of the vulnerabilities found, recommend a focus on certain aspects of the contract to improve. You might also recommend a bug bounty as an effective means to find other bugs or concerns before launching, offering the Ethereum Bounty as a model.

Provide a note that new additions to contracts will place them into unaudited status, as code refactoring may introduce new vulnerabilities. Lastly, create a point of contact for questions – in either public or private audits.

Conclusion

The audit outline this guide provides applies to Smart Contracts in general, but is tailored towards Ethereum contracts, which are by far the most popular and therefore transact the most funds, putting them at the highest risk of attack and in the greatest need of auditing.

Now that you have the tools, resources, and know-how to conduct a Smart Contract audit, go forth and improve the security and credibility of the blockchain space. If you are interested in getting paid to audit smart contracts, or if you need your smart contract audited, check out Bountyone.io.

Further Reading

As blockchain technology is still an emerging and rapidly growing field, there are no go-to resources for all-encompassing solutions, so we recommend a variety of sources to enhance your understanding of this guide.

01. “Smart Contract Security Best Practices” by ConsenSys

02. “Audit the Deployed Smart Contract, Not GitHub!” by ConsenSys

03. “The ultimate guide to audit a Smart Contract + Most dangerous attacks in Solidity”  by Merunas Grincalaitis

04. “Developing smart contracts: smart contract audit best practices” by SMARTYM

05. “The Importance Of Audits And Secure Coding For Smart Contracts” on ETHNews

06. “Onward with Ethereum Smart Contract Security” by Zeppelin Solutions

07. “EtherCamp’s Hacker Gold (HKG) public code audit” by Zeppelin Solutions

08. Solidified: A Full-audit Service for Smart Contracts

09. SmartDec: Tool-driven Smart Contract Security Platform

10. Harvard Innovation Lab Audits at Experfy

11. Security Audit performed on Ethereum Classic Multisig Wallet by Dexaran

12. “Scanning Live Ethereum Contracts for the ‘Unchecked-Send’ Bug” by Hacking, Distributed
