Note: ➤ We’re now on Slack! Join our community, ask us questions, and get updated on the latest (and hopefully, greatest!)
The Zilliqa team is excited to announce new, “oven-fresh” experimental results from our internal testnet. We call them “oven-fresh” because they came out only last night. With this, we are proud to “upgrade” our internal testnet from version 0.1 to version 0.5.
Recall that our previous internal testnet (dubbed version 0.1) reported a throughput of 1,389 transactions per second with four shards, i.e., 2,400 nodes. For the new results, we expanded the network to 3,600 nodes (six shards) and observed a peak throughput of 2,488 transactions per second. A comparison between the two results is shown in the figure below:
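As a quick sanity check on these figures, we can compare how throughput grew against how the shard count grew (the numbers below are from the post; the linear-scaling comparison itself is our own back-of-envelope illustration, not part of the reported experiments):

```python
# Reported figures from the two internal testnet versions.
v01 = {"shards": 4, "nodes": 2400, "tps": 1389}
v05 = {"shards": 6, "nodes": 3600, "tps": 2488}

# If throughput scaled exactly linearly with shard count,
# tps_growth would equal shard_growth (1.5x here).
shard_growth = v05["shards"] / v01["shards"]    # 1.5x more shards
tps_growth = v05["tps"] / v01["tps"]            # ~1.79x more throughput

# Per-shard throughput in each version.
tps_per_shard_v01 = v01["tps"] / v01["shards"]  # ~347 tps per shard
tps_per_shard_v05 = v05["tps"] / v05["shards"]  # ~415 tps per shard

print(f"shard growth: {shard_growth:.2f}x, tps growth: {tps_growth:.2f}x")
```

On these numbers, throughput grew faster than the shard count did, i.e., per-shard throughput also improved between the two versions.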
Objectives achieved with testnet v0.5
For the new internal testnet, our goal was to increase the network size and stress-test Zilliqa’s performance. Ensuring that throughput keeps increasing with network size is non-trivial as the network grows from small to moderately large: bottlenecks such as message broadcasting, which are negligible in small networks, start to show up in larger ones and can seriously degrade overall throughput.
We are planning to write another article to share more technical details, but for now we would like to highlight the tasks that we finished to move from testnet v0.1 to testnet v0.5:
- We have implemented full support for sharing transaction bodies asynchronously between nodes, decoupled from the transmission of the blocks themselves. This by itself posed significant coordination challenges for the protocol running on the nodes and required careful tuning and protocol optimizations.
- We have reorganized the intra-shard and inter-shard network topology to make block and transaction propagation much more efficient.
- We have incorporated several optimizations for inter-node communication, data transmission, and data processing.
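To give a flavor of the first item, here is a minimal toy sketch of the general idea behind decoupling transaction bodies from blocks: a block carries only transaction hashes, and bodies arrive on a separate, asynchronous channel, so a node may receive a block before it holds all of the referenced bodies. All class and method names here are hypothetical illustrations, not Zilliqa’s actual implementation:

```python
import hashlib

def tx_hash(body: bytes) -> str:
    """Identify a transaction body by its SHA-256 digest."""
    return hashlib.sha256(body).hexdigest()

class Node:
    """Toy node: blocks reference transactions by hash only; bodies
    are delivered separately and may arrive before or after the block."""

    def __init__(self):
        self.bodies = {}          # tx hash -> transaction body
        self.pending_blocks = []  # blocks waiting for missing bodies

    def receive_body(self, body: bytes):
        """Store a transaction body and retry any blocks it unblocks."""
        self.bodies[tx_hash(body)] = body
        self.pending_blocks = [
            blk for blk in self.pending_blocks if not self._try_apply(blk)
        ]

    def receive_block(self, block: list[str]) -> bool:
        """Accept a block (a list of tx hashes). Returns True if it
        could be applied immediately, False if bodies are still missing."""
        if self._try_apply(block):
            return True
        self.pending_blocks.append(block)
        return False

    def _try_apply(self, block: list[str]) -> bool:
        # The block can only be executed once every body is available.
        return all(h in self.bodies for h in block)
```

In this sketch, a block that arrives ahead of its bodies is simply parked and re-checked as bodies trickle in; the hard part in a real network (which the sketch omits) is exactly the coordination and tuning mentioned above.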
This was a quick update on our new results and how we have improved from the previous ones. In case you have questions, feel free to ask them on our Slack or leave your comments below.
➤ Follow us on Twitter,
➤ Subscribe to our Newsletter,
➤ Subscribe to our Blog,
➤ Ask us questions on Slack