Taraxa AMA #1
For the first AMA of the year, we sat down with Taraxa’s Co-Founder and CEO Steven Pu to shed more light on the current state of our Testnet and on the applications we are putting at the core of our work in the coming months.
Q1: What are the major milestones that you’re setting for Taraxa to hit in 2020, considering the stage the project is at now?
SP: Our near-term vision is to build a platform that integrates with all forms of data sources, ranging from hardware modules & sensors to custom-made enterprise software. Ultimately, we’re on a mission to help enterprises move fast and iterate by enabling them to collect more operations data with exceptional confidence in its accuracy.
This year will be the year of driving adoption for Taraxa, and we have aligned our goals around landing a number of solid enterprise use cases. Number one will be scaling our hardware-enhanced asset leasing and mobility service offerings with our customers and channel partners in Japan, and expanding signed annual recurring revenue. We will also be expanding our coverage into the US market with our process change order offering.
Q2: Now let’s dig deeper into Taraxa’s tech stack. Starting with the core consensus algorithm — how is rapid finalization achieved, and why should we care about it in the first place?
SP: What is truly unique about Taraxa is that it combines the benefits of a fast-advancing DAG topology with the instant, fair finality of a VRF-driven PBFT process, giving smart contracts with state-dependent logic a definitive guarantee of immutability. The VRF-driven process also lets Taraxa’s PoS system produce fair, efficient, and non-coordinated block proposals, which is essential for the network’s security. Proposing valid consensus blocks on the PBFT chain is the foundation of efficient on-chain consensus, and we keep working to ensure that synchronization queues and database commits for PBFT are atomic.
Another thing that is worth mentioning about Taraxa’s tech is our use of speculative execution, or the concurrent virtual machine. What we propose is parallel processing, i.e. running smart contracts in parallel in order to increase throughput. Here is how this works. A full node schedules multiple smart contract calls for parallel execution, and then keeps track of their access to persistent storage via the Taraxa runtime APIs. Should there be conflicting access (i.e., read/write or write/write), the access is rejected and the conflict is reported to the scheduler, which terminates the thread, rolls back its speculative changes to persistent storage, and re-schedules the conflicting contract calls for sequential processing.
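To make the flow concrete, here is a minimal Python sketch of this speculative-scheduling idea: execute calls against a snapshot while tracking read/write sets, commit the non-conflicting ones, and re-run conflicting calls sequentially. All names, and the single-threaded simulation itself, are illustrative assumptions, not Taraxa’s actual runtime APIs.

```python
def execute(call, storage):
    """Run one contract call, recording the keys it reads and writes.
    Writes are buffered speculatively rather than applied directly."""
    reads, writes = set(), {}
    def read(key):
        reads.add(key)
        return writes.get(key, storage.get(key, 0))
    def write(key, value):
        writes[key] = value
    call(read, write)
    return reads, writes

def run_parallel(calls, storage):
    # Speculatively execute every call against the same snapshot.
    results = [execute(c, storage) for c in calls]

    committed, conflicting = [], []
    seen_reads, seen_writes = set(), set()
    for call, (reads, writes) in zip(calls, results):
        wkeys = set(writes)
        # Conflict if this call writes a key another call read or wrote,
        # or reads a key another call wrote (read/write, write/write).
        if wkeys & (seen_reads | seen_writes) or reads & seen_writes:
            conflicting.append(call)  # roll back: discard its writes
        else:
            committed.append(writes)
            seen_reads |= reads
            seen_writes |= wkeys
    # Commit the non-conflicting speculative writes.
    for writes in committed:
        storage.update(writes)
    # Re-run conflicting calls sequentially against live storage.
    for call in conflicting:
        _, writes = execute(call, storage)
        storage.update(writes)

# Hypothetical example: two transfers touch "alice" and conflict, so
# the second falls back to sequential processing; the third call is
# disjoint and commits speculatively.
storage = {"alice": 10, "bob": 0, "carol": 5, "dave": 1, "erin": 0}
def pay(src, dst, amt):
    def call(read, write):
        write(src, read(src) - amt)
        write(dst, read(dst) + amt)
    return call

run_parallel([pay("alice", "bob", 3),
              pay("alice", "carol", 2),
              pay("dave", "erin", 1)], storage)
```

Note the design choice: the first call to touch a key wins the speculative round, mirroring a first-committer-wins policy; losers are simply replayed in order.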
Q3: Could you briefly explain how DAG solves the ‘double-spend’ issue?
SP: The way to resolve double-spend for any blockchain system is to have strictly defined ordering. Once you have well-defined ordering between transactions, then resolving double-spend is trivial — you simply discard any transaction that conflicts with a previous transaction. So the question you’re asking is really how DAGs in general resolve ordering. I covered this in detail in one of my earlier blogs on secure and fair ordering. In addition, we’ve observed a lot of confusion and misconceptions when it comes to DAGs, so we covered that in-depth here.
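The “trivial once ordered” part can be shown in a few lines of Python. This sketch uses a UTXO-style model purely for illustration (it is not Taraxa’s transaction model): walk transactions in their finalized order and discard any that spends an already-spent output.

```python
def resolve(ordered_txs, utxos):
    """ordered_txs: list of (tx_id, spends, creates) in final order.
    utxos: set of currently unspent output ids."""
    accepted, discarded = [], []
    for tx_id, spends, creates in ordered_txs:
        if all(o in utxos for o in spends):
            utxos -= set(spends)   # mark inputs as spent
            utxos |= set(creates)  # add newly created outputs
            accepted.append(tx_id)
        else:
            discarded.append(tx_id)  # conflicts with an earlier tx
    return accepted, discarded

# tx_b and tx_c both try to spend output "o1": whichever the
# consensus ordering places first wins; the other is discarded.
accepted, discarded = resolve(
    [("tx_b", ["o1"], ["o2"]),
     ("tx_c", ["o1"], ["o3"])],
    {"o1"})
# accepted == ["tx_b"], discarded == ["tx_c"]
```

All the difficulty lives in agreeing on the order; once that is fixed, conflict resolution is a single deterministic pass.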
Q4: The adaptive protocol in Taraxa is not well understood. For example, are parameters such as block generation rate and block size automatically calculated?
SP: Since the adaptive protocol is still under development, we don’t have concrete answers as to which parameters are dynamic vs. static. But let’s use an example to help flesh out the concept. One thing we need to determine is, on average, how many blocks are produced in any given interval of time. Higher traffic would mean either more blocks or larger blocks, but since larger blocks run into network propagation problems, let’s say that we stick with more blocks. More blocks mean more block proposers, which means the difficulty of the VDF would need to be relaxed. The best way to determine the difficulty level is by analyzing past network traffic. For example, we’re currently implementing an algorithm that looks back at the past 2–4 periods of the network to see whether traffic is increasing or decreasing. You can observe this in many ways — how full the blocks in the block DAG are, the size of the mempool, etc. These are calculated and proposed during the finalization rounds, and the subsequent period makes use of the new difficulty level.
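A hedged sketch of that lookback idea in Python: examine block fullness over the last few periods and relax the VDF difficulty when traffic is rising (so more proposers qualify), or tighten it when traffic falls. The thresholds, step size, and bounds here are made-up illustration values, not protocol parameters.

```python
def adjust_difficulty(current, fullness_history, lookback=4,
                      high=0.8, low=0.3, step=1,
                      min_diff=1, max_diff=64):
    """fullness_history: fraction of block capacity used per recent
    period, newest last. Returns the difficulty for the next period."""
    window = fullness_history[-lookback:]
    avg = sum(window) / len(window)
    if avg > high:            # traffic rising: relax difficulty so
        new = current - step  # more proposers can produce blocks
    elif avg < low:           # traffic falling: tighten difficulty
        new = current + step
    else:                     # traffic steady: leave it alone
        new = current
    return max(min_diff, min(max_diff, new))

# Sustained near-full blocks over the lookback window relax difficulty.
print(adjust_difficulty(16, [0.9, 0.95, 0.85, 0.9]))  # → 15
```

Per the answer above, such a value would be computed during finalization and take effect in the subsequent period.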
Q5: Taraxa mentions light nodes that do not need to completely trust the full nodes. Can you introduce the basic principles of light nodes?
SP: Light nodes are nodes that do less and consume fewer resources than full nodes. They are meant for lightweight devices with limited system resources, such as a battery-powered IoT device at the network’s edge. Because light nodes typically cannot store the full network state or maintain a constantly updated connection, they rely on full nodes to relay their messages and to receive verifications of those messages’ outcomes. In this way, light nodes are quite vulnerable to a full node’s machinations. We propose a randomized polling system whereby a light node can poll a random subset of the network to check whether what it’s been told by the full node it’s working with is true. This is an optional feature that a light node can activate at random to keep the full node it regularly works with in check and make sure it has not been deceived.
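A minimal Python sketch of that randomized spot-check: the light node polls a small random sample of full nodes with the same query and accepts the regular node’s claim only if enough of the sample agrees. The node interface, sample size, and quorum here are hypothetical, not Taraxa’s actual protocol.

```python
import random

def spot_check(claimed, peers, query, sample_size=3, quorum=2):
    """Poll `sample_size` random peers with `query`; return True iff
    at least `quorum` of them agree with the claimed answer."""
    sampled = random.sample(peers, min(sample_size, len(peers)))
    agreeing = sum(1 for peer in sampled if peer(query) == claimed)
    return agreeing >= quorum

# Hypothetical example: nine honest peers report a balance of 42,
# one dishonest peer reports 7. With at most one liar in any sample
# of three, an honest claim always passes and a false one never does.
peers = [lambda q: 42] * 9 + [lambda q: 7]
honest_claim_ok = spot_check(42, peers, query="balance:alice")
false_claim_ok = spot_check(7, peers, query="balance:alice")
```

Because the check is cheap and only occasional, a full node that lies even rarely risks being caught, which is what keeps it honest.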
Q6: Taraxa’s applications span horizontally across multiple industries — from automotive (for one of the world’s largest OEMs in Japan) to construction (change orders) and heavy asset management. What’s your broader vision for the convergence of blockchain and other emerging tech like IoT in terms of optimizing organizational workflows?
SP: The technologies behind IoT are still far from perfect, so we still have a very long way to go. There are many reasons why people may be excited about IoT; the primary one is that these systems can act as an interface between the digital and analog worlds, so that we may remotely monitor and even control the physical world. One big application is automation: everything from self-driving cars to modern automated factories, or even your home. If you take this very far into the future, you can imagine a world where everything just works automatically without you having to say or do anything at all. Real-world physical systems connected via IoT will adapt to and learn your needs (with your permission, of course) and just make your life easier and better, automatically and without you noticing.