
RESEARCH REPORT ABOUT KYBER NETWORK

Author: Gamals Ahmed, CoinEx Business Ambassador

https://preview.redd.it/9k31yy1bdcg51.jpg?width=936&format=pjpg&auto=webp&s=99bcb7c3f50b272b7d97247b369848b5d8cc6053

ABSTRACT

In this research report, we present a study on Kyber Network. Kyber Network is a decentralized, on-chain liquidity protocol designed to make trading tokens simple, efficient, robust and secure.
Kyber design allows any party to contribute to an aggregated pool of liquidity within each blockchain while providing a single endpoint for takers to execute trades using the best rates available. We envision a connected liquidity network that facilitates seamless, decentralized cross-chain token swaps across Kyber based networks on different chains.
Kyber is a fully on-chain liquidity protocol that enables decentralized exchange of cryptocurrencies in any application. Liquidity providers (Reserves) are integrated into one single endpoint for takers and users. When a user requests a trade, the protocol will scan the entire network to find the reserve with the best price and take liquidity from that particular reserve.

1.INTRODUCTION

DeFi applications all need access to good liquidity sources, a critical component for providing good services. Currently, decentralized liquidity is comprised of various sources including DEXes (Uniswap, OasisDEX, Bancor), decentralized funds and other financial apps. The more scattered the sources, the harder it becomes for anyone to either find the best rate for their trade or to even find enough liquidity for their needs.
Kyber is a blockchain-based liquidity protocol that aggregates liquidity from a wide range of reserves, powering instant and secure token exchange in any decentralized application.
The protocol allows for a wide range of implementation possibilities for liquidity providers, allowing a wide range of entities to contribute liquidity, including end users, decentralized exchanges and other decentralized protocols. On the taker side, end users, cryptocurrency wallets, and smart contracts are able to perform instant and trustless token trades at the best rates available amongst the sources.
The Kyber Network is a project based on the Ethereum protocol that seeks to completely decentralize the exchange of cryptocurrencies and make exchange trustless by keeping everything on the blockchain.
Through the Kyber Network, users should be able to instantly convert or exchange any cryptocurrency.

1.1 OVERVIEW ABOUT KYBER NETWORK PROTOCOL

The Kyber Network is a decentralized way to exchange ETH and different ERC20 tokens instantly — no waiting and no registration needed.
Using this protocol, developers can build innovative payment flows and applications, including instant token swap services, ERC20 payments, and financial DApps — helping to build a world where any token is usable anywhere.
Kyber’s fully on-chain design allows for full transparency and verifiability in the matching engine, as well as seamless composability with DApps, not all of which are possible with off-chain or hybrid approaches. The integration of a large variety of liquidity providers also makes Kyber uniquely capable of supporting sophisticated schemes and catering to the needs of DeFi DApps and financial institutions. Hence, many developers leverage Kyber’s liquidity pool to build innovative financial applications, and not surprisingly, Kyber is the most used DeFi protocol in the world.
The Kyber Network is quite an established project that is trying to change the way we think of decentralised crypto currency exchange.
The Kyber Network has seen very rapid development. After being announced in May 2017 the testnet for the Kyber Network went live in August 2017. An ICO followed in September 2017, with the company raising 200,000 ETH valued at $60 million in just one day.
The live main net was released in February 2018 to whitelisted participants, and on March 19, 2018, the Kyber Network opened the main net as a public beta. Since then the network has seen increasing growth, with network volumes growing more than 500% in the first half of 2019.
There was, however, a modest decrease in August 2019 that can be attributed to the price of ETH dropping by 50%, impacting the overall total volumes being traded and processed globally.
They are developing a decentralised exchange protocol that will allow developers to build payment flows and financial apps. This is indeed quite a competitive market as a number of other such protocols have been launched.
In Brief:
- Kyber Network is a tool that allows anyone to swap tokens instantly without having to use exchanges.
- It allows vendors to accept different types of cryptocurrency while still being paid in their preferred crypto of choice.
- It’s built primarily for Ethereum, but any smart-contract based blockchain can incorporate it.
At its core, Kyber is a decentralized way to exchange ETH and different ERC20 tokens instantly–no waiting and no registration needed. To do this Kyber uses a diverse set of liquidity pools, or pools of different crypto assets called “reserves” that any project can tap into or integrate with.
A typical use case would be if a vendor allowed customers to pay in whatever currency they wish, but receive the payment in their preferred token. Another example would be for Dapp users. At present, if you are not a token holder of a certain Dapp you can’t use it. With Kyber, you could use your existing tokens, instantly swap them for the Dapp specific token and away you go.
All this swapping happens directly on the Ethereum blockchain, meaning every transaction is completely transparent.

1.1.1 WHY BUILD THE KYBER NETWORK?

While crypto currencies were built to be decentralized, many of the exchanges for trading crypto currencies have become centralized affairs. This has led to security vulnerabilities, with many exchanges becoming the victims of hacking and theft.
It has also led to increased fees and costs, and the centralized exchanges often come with slow transfer times as well. In some cases, wallets have been locked and users are unable to withdraw their coins.
Decentralized exchanges have popped up recently to address the flaws in the centralized exchanges, but they have their own flaws, most notably a lack of liquidity, and oftentimes high costs to modify trades in their on-chain order books.

Some of the Integrations with Kyber Protocol
The Kyber Network was formed to provide users with a decentralized exchange that keeps everything right on the blockchain, and uses a reserve system rather than an order book to provide high liquidity at all times. This will allow for the exchange and transfer of any cryptocurrency, even cross exchanges, and costs will be kept at a minimum as well.
The Kyber Network has three guiding design philosophies since the start:
  1. To be most useful the network needs to be platform-agnostic, which allows any protocol or application the ability to take advantage of the liquidity provided by the Kyber Network without any impact on innovation.
  2. The network was designed to make real-world commerce and decentralized financial products not only possible but also feasible. It does this by allowing for instant token exchange across a wide range of tokens, and without any settlement risk.
  3. The Kyber Network was created with ease of integration as a priority, which is why everything runs fully on-chain and fully transparent. Kyber is not only developer-friendly, but is also compatible with a wide variety of systems.

1.1.2 WHO INVENTED KYBER?

Kyber’s founders are Loi Luu, Victor Tran, Yaron Velner — CEO, CTO, and advisor to the Kyber Network.

1.1.3 WHAT DISTINGUISHES KYBER?

Kyber’s mission has always been to integrate with other protocols so they’ve focused on being developer-friendly by providing architecture to allow anyone to incorporate the technology onto any smart-contract powered blockchain. As a result, a variety of different dapps, vendors, and wallets use Kyber’s infrastructure including Set Protocol, bZx, InstaDApp, and Coinbase wallet.
Besides dapps, vendors, and wallets, Kyber also integrates with other exchanges such as Uniswap — sharing liquidity pools between the two protocols.
Limit orders on Kyber allow users to set a specific price in which they would like to exchange a token instead of accepting whatever price currently exists at the time of trading. However, unlike with other exchanges, users never lose custody of their crypto assets during limit orders on Kyber.
The Kyber protocol works by using pools of crypto funds called “reserves”, which currently support over 70 different ERC20 tokens. Reserves are essentially smart contracts with a pool of funds. Different parties with different prices and levels of funding control the various reserves. Instead of using order books to match buyers and sellers to return the best price, the Kyber protocol looks at all the reserves and returns the best price among them. Reserves make money on the “spread”, or difference between the buying and selling prices. Kyber wants any token holder to be able to easily convert one token to another with a minimum of fuss.

1.2 KYBER PROTOCOL

The protocol smart contracts offer a single interface for the best available token exchange rates to be taken from an aggregated liquidity pool across diverse sources.
● Aggregated liquidity pool. The protocol aggregates various liquidity sources into one liquidity pool, making it easy for takers to find the best rates offered with one function call.
● Diverse sources of liquidity. The protocol allows different types of liquidity sources to be plugged in. Liquidity providers may employ different strategies and different implementations to contribute liquidity to the protocol.
● Permissionless. The protocol is designed to be permissionless, where any developer can set up various types of reserves and any end user can contribute liquidity. Implementations need to take into consideration various security vectors, such as reserve spamming, but these can be mitigated through a staking mechanism. We can expect implementations to be permissioned initially until the maintainers are confident about these considerations.
The core feature that the Kyber protocol facilitates is the token swap between takers and liquidity sources. The protocol aims to provide the following properties for token trades:
● Instant Settlement. Takers do not have to wait for their orders to be fulfilled, since trade matching and settlement occur in a single blockchain transaction. This enables trades to be part of a series of actions happening in a single smart contract function.
● Atomicity. When takers make a trade request, their trade either gets fully executed or is reverted. This “all or nothing” aspect means that takers are not exposed to the risk of partial trade execution.
● Public rate verification. Anyone can verify the rates that are being offered by reserves and have their trades instantly settled just by querying the smart contracts.
● Ease of integration. Trustless and atomic token trades can be directly and easily integrated into other smart contracts, thereby enabling multiple trades to be performed in a smart contract function.
How each actor works is specified in the Network Actors section.
1. Takers refer to anyone who can directly call the smart contract functions to trade tokens, such as end-users, DApps, and wallets.
2. Reserves refer to anyone who wishes to provide liquidity. They have to implement the smart contract functions defined in the reserve interface in order to be registered and have their token pairs listed.
3. Registered reserves refer to those that will be cycled through for matching taker requests.
4. Maintainers refer to anyone who has permission to access the functions for adding/removing reserves and token pairs, such as a DAO or the team behind the protocol implementation.
5. Together, these actors comprise the network, which refers to all the actors involved in any given implementation of the protocol.
The protocol implementation needs to have the following:
1. Functions for takers to check rates and execute trades
2. Functions for the maintainers to register/remove reserves and token pairs
3. A reserve interface that defines the functions reserves need to implement
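A minimal Python sketch of how these three pieces might fit together. The names (ReserveInterface, KyberCoreSketch, get_expected_rate, and so on) are illustrative and do not correspond to the actual Solidity contracts; the sketch only mirrors the split between taker functions, maintainer functions, and the reserve interface described above.

```python
from abc import ABC, abstractmethod
from typing import Dict, Set, Tuple


class ReserveInterface(ABC):
    """Illustrative reserve interface: what a reserve must expose so the
    network can query rates and settle trades against it."""

    @abstractmethod
    def get_rate(self, src: str, dest: str, amount: float) -> float:
        """Return the dest-per-src rate offered for this trade size (0 if unsupported)."""

    @abstractmethod
    def trade(self, src: str, dest: str, amount: float) -> float:
        """Execute the swap and return the amount of dest token sent to the taker."""


class KyberCoreSketch:
    """Illustrative core contract: maintainers register reserves and pairs,
    takers query rates and execute trades."""

    def __init__(self) -> None:
        self.reserves: Dict[str, ReserveInterface] = {}
        self.listed_pairs: Set[Tuple[str, str]] = set()

    # Maintainer functions
    def add_reserve(self, reserve_id: str, reserve: ReserveInterface) -> None:
        self.reserves[reserve_id] = reserve

    def remove_reserve(self, reserve_id: str) -> None:
        self.reserves.pop(reserve_id, None)

    def list_pair(self, src: str, dest: str) -> None:
        self.listed_pairs.add((src, dest))

    # Taker functions
    def get_expected_rate(self, src: str, dest: str, amount: float) -> Tuple[str, float]:
        """Scan every registered reserve and return the best (reserve_id, rate)."""
        if (src, dest) not in self.listed_pairs:
            return "", 0.0
        best_id, best_rate = "", 0.0
        for reserve_id, reserve in self.reserves.items():
            rate = reserve.get_rate(src, dest, amount)
            if rate > best_rate:
                best_id, best_rate = reserve_id, rate
        return best_id, best_rate

    def trade(self, src: str, dest: str, amount: float) -> float:
        """Find the best reserve and settle against it in a single call."""
        best_id, best_rate = self.get_expected_rate(src, dest, amount)
        if best_rate == 0.0:
            raise ValueError("no reserve supports this pair")
        return self.reserves[best_id].trade(src, dest, amount)
```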
https://preview.redd.it/d2tcxc7wdcg51.png?width=700&format=png&auto=webp&s=b2afde388a77054e6731772b9115ee53f09b6a4a

1.3 KYBER CORE SMART CONTRACTS

The Kyber Core smart contracts are an implementation of the protocol that has the major protocol functions to allow actors to join and interact with the network. For example, the Kyber Core smart contracts provide functions for the listing and delisting of reserves and trading pairs, by having clear interfaces that reserves must comply with in order to register to the network and add support for new trading pairs. In addition, the Kyber Core smart contracts also provide a function for takers to query the best rate among all the registered reserves, and to perform trades with the corresponding rate and reserve. A trading pair consists of a quote token and any other token that the reserve wishes to support. The quote token is the token that is either traded from or to for all trades. For example, the Ethereum implementation of the Kyber protocol uses Ether as the quote token.
In order to search for the best rate, all reserves supporting the requested token pair will be iterated through. Hence, the Kyber Core smart contracts need to have this search algorithm implemented.
The key functions implemented in the Kyber Core Smart Contracts are listed in Figure 2 below. We will visit and explain the implementation details and security considerations of each function in the Specification Section.

1.4 HOW KYBER’S ON-CHAIN PROTOCOL WORKS?

Kyber is the liquidity infrastructure for decentralized finance. Kyber aggregates liquidity from diverse sources into a pool, which provides the best rates for takers such as DApps, Wallets, DEXs, and End users.

1.4.1 PROVIDING LIQUIDITY AS A RESERVE

Anyone can operate a Kyber Reserve to market make for profit and make their tokens available for DApps in the ecosystem. Through an open reserve architecture, individuals, token teams and professional market makers can contribute token assets to Kyber’s liquidity pool and earn from the spread in every trade. These tokens become available at the best rates across DApps that tap into the network, making them instantly more liquid and useful.
MAIN RESERVE TYPES
Kyber currently has over 45 reserves in its network providing liquidity. There are 3 main types of reserves that allow different liquidity contribution options to suit the unique needs of different providers.
1. Automated Price Reserves (APR) — Allows token teams and users with large token holdings to have an automated yet customized pricing system with low maintenance costs. Synthetix and Melon are examples of teams that run APRs.
2. Fed Price Reserves (FPR) — Operated by professional market makers that require custom and advanced pricing strategies tailored to their specific needs. Kyber, alongside reserves such as OneBit, runs FPRs.
3. Bridge Reserves (BR) — These are specialized reserves meant to bring liquidity from other on-chain liquidity providers like Uniswap, Oasis, DutchX, and Bancor into the network.

1.5 KYBER NETWORK ROLES

The Kyber Network functions through coordination between several different roles and functions as explained below:
- Users — This entity uses the Kyber Network to send and receive tokens. A user can be an individual, a merchant, and even a smart contract account.
- Reserve Entities — This role is used to add liquidity to the platform through the dynamic reserve pool. Some reserve entities are internal to the Kyber Network, but others may be registered third parties. Reserve entities may be public if the public contributes to the reserves they hold; otherwise they are considered private. By allowing third parties as reserve entities the network adds diversity, which prevents monopolization and keeps exchange rates competitive. Allowing third-party reserve entities also allows for the listing of less popular coins with lower volumes.
- Reserve Contributors — Where reserve entities are classified as public, the reserve contributor is the entity providing reserve funds. Their incentive for doing so is a profit share from the reserve.
- The Reserve Manager — Maintains the reserve, calculates exchange rates and enters them into the network. The reserve manager profits from exchange spreads set by them on their reserves. They can also benefit from increasing volume by accessing the entire Kyber Network.
- The Kyber Network Operator — Currently the Kyber Network team is filling the role of the network operator, whose function is to add/remove Reserve Entities as well as control the listing of tokens. Eventually, this role will revert to proper decentralized governance.

1.6 BASIC TOKEN TRADE

A basic token trade is one that has the quote token as either the source or destination token of the trade request. The execution flow of a basic token trade is depicted in the diagram below, where, as an example, a taker would like to exchange ETH for BAT tokens. The trade happens in a single blockchain transaction.
1. Taker sends 1 ETH to the protocol contract, and would like to receive BAT in return.
2. Protocol contract queries the first reserve for its ETH to BAT exchange rate.
3. Reserve 1 offers an exchange rate of 1 ETH for 800 BAT.
4. Protocol contract queries the second reserve for its ETH to BAT exchange rate.
5. Reserve 2 offers an exchange rate of 1 ETH for 820 BAT.
6. This process goes on for the other reserves. After the iteration, reserve 2 is discovered to have offered the best ETH to BAT exchange rate.
7. Protocol contract sends 1 ETH to reserve 2.
8. Reserve 2 sends 820 BAT to the taker.
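The flow above can be replayed with a toy in-memory model (Python, purely illustrative; the rates 800 and 820 come from the example above, and the class and function names are made up for the sketch):

```python
class SimpleReserve:
    """Toy reserve quoting a fixed rate for a single pair (illustrative only)."""

    def __init__(self, name, pair, rate):
        self.name, self.pair, self.rate = name, pair, rate

    def get_rate(self, src, dest):
        return self.rate if (src, dest) == self.pair else 0.0


def best_rate(reserves, src, dest):
    """Iterate over all reserves and pick the one offering the best rate."""
    return max(reserves, key=lambda r: r.get_rate(src, dest))


reserves = [
    SimpleReserve("reserve1", ("ETH", "BAT"), 800.0),  # step 3
    SimpleReserve("reserve2", ("ETH", "BAT"), 820.0),  # step 5
]

winner = best_rate(reserves, "ETH", "BAT")
amount_in = 1.0  # 1 ETH sent by the taker (step 1)
amount_out = amount_in * winner.get_rate("ETH", "BAT")
print(winner.name, amount_out)  # reserve2 820.0 BAT (steps 6-8)
```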

1.7 TOKEN-TO-TOKEN TRADE

A token-to-token trade is one where the quote token is neither the source nor the destination token of the trade request. The exchange flow of a token-to-token trade is depicted in the diagram below, where, as an example, a taker would like to exchange BAT tokens for DAI. The trade happens in a single blockchain transaction.
1. Taker sends 50 BAT to the protocol contract, and would like to receive DAI in return.
2. Protocol contract sends 50 BAT to the reserve offering the best BAT to ETH rate.
3. Protocol contract receives 1 ETH in return.
4. Protocol contract sends 1 ETH to the reserve offering the best ETH to DAI rate.
5. Protocol contract receives 30 DAI in return.
6. Protocol contract sends 30 DAI to the user.
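The same example expressed as a two-hop computation through the quote token (illustrative only; the rates are those implied by the steps above, 50 BAT -> 1 ETH -> 30 DAI):

```python
def token_to_token_trade(src_amount, best_src_to_eth_rate, best_eth_to_dest_rate):
    """Two-hop trade through the quote token (ETH): src -> ETH -> dest."""
    eth_amount = src_amount * best_src_to_eth_rate    # steps 2-3
    dest_amount = eth_amount * best_eth_to_dest_rate  # steps 4-5
    return dest_amount


# Rates implied by the example above: 50 BAT -> 1 ETH and 1 ETH -> 30 DAI.
print(token_to_token_trade(50, 1 / 50, 30))  # 30.0 DAI (step 6)
```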

2.KYBER NETWORK CRYSTAL (KNC) TOKEN

Kyber Network Crystal (KNC) is an ERC-20 utility token and an integral part of Kyber Network.
KNC is the first deflationary staking token where staking rewards and token burns are generated from actual network usage and growth in DeFi.
The Kyber Network Crystal (KNC) is the backbone of the Kyber Network. It works to connect liquidity providers and those who need liquidity and serves three distinct purposes. The first of these is to collect transaction fees, and a portion of every fee collected is burned, which keeps KNC deflationary. Kyber Network Crystals (KNC), are named after the crystals in Star Wars used to power light sabers.
KNC also ensures the smooth operation of the reserve system in the Kyber liquidity network, since entities must use third-party tokens to buy the KNC that pays for their operations in the network.
KNC allows token holders to play a critical role in determining the incentive system, building a wide base of stakeholders, and facilitating economic flow in the network. A small fee is charged each time a token exchange happens on the network, and KNC holders get to vote on this fee model and distribution, as well as other important decisions. Over time, as more trades are executed, additional fees will be generated for staking rewards and reserve rebates, while more KNC will be burned.
- Participation rewards — KNC holders can stake KNC in the KyberDAO and vote on key parameters. Voters will earn staking rewards (in ETH).
- Burning — Some of the network fees will be burned to reduce KNC supply permanently, providing long-term value accrual from decreasing supply.
- Reserve incentives — KNC holders determine the portion of network fees that are used as rebates for selected liquidity providers (reserves) based on their volume performance.
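As a rough illustration of how such a fee split might be computed (the percentages below are placeholders, since the actual ratios are set by KyberDAO votes, and the function name is made up for the sketch):

```python
def allocate_network_fee(fee_knc, reward_pct, burn_pct, rebate_pct):
    """Split a collected network fee (in KNC) into the three buckets KNC
    holders vote on: staking rewards, burn, and reserve rebates."""
    assert abs(reward_pct + burn_pct + rebate_pct - 1.0) < 1e-9
    return {
        "voting_rewards": fee_knc * reward_pct,
        "burn": fee_knc * burn_pct,
        "reserve_rebates": fee_knc * rebate_pct,
    }


# Hypothetical split (the actual ratios are decided by KyberDAO votes).
print(allocate_network_fee(1000.0, 0.65, 0.05, 0.30))
```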

Finally, the KNC token is the connection between the Kyber Network and the exchanges, wallets, and dApps that leverage the liquidity network. This is a virtuous system since entities are rewarded with referral fees for directing more users to the Kyber Network, which helps increase adoption for Kyber and for the entities using the Network.
And of course there will soon be fourth and fifth uses for KNC: as a staking token used to generate passive income, and as a governance token used to vote on key parameters of the network.
The Kyber Network Crystal (KNC) was released in a September 2017 ICO at a price around $1. There were 226,000,000 KNC minted for the ICO, with 61% sold to the public. The remaining 39% are controlled 50/50 by the company and the founders/advisors, with a 1 year lockup period and 2 year vesting period.
Currently, just over 180 million coins are in circulation, and the total supply has been reduced to 210.94 million after the company burned its one-millionth KNC token in May 2019 and its two-millionth KNC token just three months later.
That means that while it took 15 months to burn the first million KNC, it took just 10 weeks to burn the second million KNC. That shows how rapidly adoption has been growing recently for Kyber, with July 2019 USD trading volumes on the Kyber Network nearly reaching $60 million. This volume has continued growing, and on March 13, 2020 the network experienced its highest daily trading activity of $33.7 million in a 24-hour period.
Currently KNC is required by Reserve Managers to operate on the network, which ensures a minimum amount of demand for the token. Combined with future plans for burning coins, price is expected to maintain an upward bias, although it has suffered along with the broader market in 2018 and more recently during the summer of 2019.
It was unfortunate in 2020 that a beginning rally was cut short by the coronavirus pandemic, although the token has stabilized as of April 2020, and there are hopes the rally could resume in the summer of 2020.

2.1 HOW ARE KNC TOKENS PRODUCED?

The native token of Kyber is called Kyber Network Crystals (KNC). All reserves are required to pay fees in KNC for the right to manage reserves. The KNC collected as fees are either burned and taken out of the total supply or awarded to integrated dapps as an incentive to help them grow.

2.2 HOW DO YOU GET HOLD OF KNC TOKENS?

Kyber Swap can be used to buy ETH directly using a credit card, which can then be used to swap for KNC. Besides Kyber itself, exchanges such as Binance, Huobi, and OKex trade KNC.

2.3 WHAT CAN YOU DO WITH KYBER?

The most direct and basic function of Kyber is instantly swapping tokens without registering an account, which anyone can do using an Ethereum wallet such as MetaMask. Users can also create their own reserves and contribute funds to a reserve, but that process is still fairly technical; Kyber is working on making it easier for users in the future.

2.4 THE GOAL OF KYBER IN THE FUTURE

The goal of Kyber in the coming years is to solidify its position as a one-stop solution for powering liquidity and token swapping on Ethereum. Kyber plans on a major protocol upgrade called Katalyst, which will create new incentives and growth opportunities for all stakeholders in their ecosystem, especially KNC holders. The upgrade will mean more use cases for KNC including to use KNC to vote on governance decisions through a decentralized organization (DAO) called the KyberDAO.
With the upcoming Katalyst protocol upgrade and new KNC model, Kyber will provide even more benefits for stakeholders. For instance, reserves will no longer need to hold a KNC balance for fees, removing a major friction point, and there will be rebates for top performing reserves. KNC holders can also stake their KNC to participate in governance and receive rewards.

2.5 BUYING & STORING KNC

Those interested in buying KNC tokens can do so at a number of exchanges. Perhaps your best bets among them are Coinbase Pro and Binance. The former is based in the USA whereas the latter is an offshore exchange.
The trading volume is well spread out at these exchanges, which means that the liquidity is not concentrated and dependent on any one exchange. You also have decent liquidity on each of the exchange books. For example, the Binance BTC / KNC books are wide and there is decent turnover. This means easier order execution.
KNC is an ERC20 token and can be stored in any wallet with ERC20 support, such as MyEtherWallet or MetaMask. One interesting alternative is the KyberSwap Android mobile app that was released in August 2019.
It allows for instant swapping of tokens and has support for over 70 different altcoins. It also allows users to set price alerts and limit orders and works as a full-featured Ethereum wallet.

2.6 KYBER KATALYST UPGRADE

Kyber has announced their intention to become the de facto liquidity layer for the Decentralized Finance space, aiming to have Kyber as the single on-chain endpoint used by the majority of liquidity providers and dApp developers. In order to achieve this goal the Kyber Network team is looking to create an open ecosystem that garners trust from the decentralized finance space. They believe this is the path that will lead the majority of projects, developers, and users to choose Kyber for liquidity needs. With that in mind they have recently announced the launch of a protocol upgrade to Kyber which is being called Katalyst.
The Katalyst upgrade will create a stronger ecosystem by creating strong alignments towards a common goal, while also strengthening the incentives for stakeholders to participate in the ecosystem.
The primary beneficiaries of the Katalyst upgrade will be the three major Kyber stakeholders: 1. Reserve managers who provide network liquidity; 2. dApps that connect takers to Kyber; 3. KNC holders.
These stakeholders can expect to see benefits as highlighted below: Reserve Managers will see two new benefits to providing liquidity for the network. The first of these benefits will be incentives for providing reserves. Once Katalyst is implemented part of the fees collected will go to the reserve managers as an incentive for providing liquidity.
This mechanism is similar to rebates in traditional finance, and is expected to drive the creation of additional reserves and market making, which in turn will lead to greater liquidity and platform reach.
Katalyst will also do away with the need for reserve managers to maintain a KNC balance for use as network fees. Instead fees will be automatically collected and used as incentives or burned as appropriate. This should remove a great deal of friction for reserves to connect with Kyber without affecting the competitive exchange rates that takers in the system enjoy.
dApp Integrators will now be able to set their own spread, which will give them full control over their own business model. This means the current fee sharing program that shares 30% of the 0.25% fee with dApp developers will go away and developers will determine their own spread. It’s believed this will increase dApp development within Kyber as developers will now be in control of fees.
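For reference, here is the arithmetic of the current fee-sharing program mentioned above, using the 0.25% network fee and 30% dApp share stated in the text (a toy Python sketch; parameter names are illustrative):

```python
def legacy_fee_share(trade_value, network_fee_rate=0.0025, dapp_share=0.30):
    """Pre-Katalyst model: a 0.25% network fee, 30% of which goes to the dApp."""
    fee = trade_value * network_fee_rate
    return {
        "total_fee": fee,
        "dapp_share": fee * dapp_share,
        "remainder": fee * (1 - dapp_share),
    }


# e.g. a trade worth 10,000 DAI pays a 25 DAI fee, 7.5 DAI of which goes to the dApp.
print(legacy_fee_share(10_000))
```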
KNC Holders, often thought of as the core of the Kyber Network, will be able to take advantage of a new staking mechanism that will allow them to receive a portion of network fees by staking their KNC and participating in the KyberDAO.

2.7 COMING KYBERDAO

With the implementation of the Katalyst protocol the KNC holders will be put right at the heart of Kyber. Holders of KNC tokens will now have a critical role to play in determining the future economic flow of the network, including its incentive systems.
The primary way this will be achieved is through KyberDAO, a way in which on-chain and off-chain governance will align to streamline cooperation between the Kyber team, KNC holders, and market participants.
The Kyber Network team has identified 3 key areas of consideration for the KyberDAO:
1. Broad representation, transparent governance and network stability
2. Strong incentives for KNC holders to maintain their stake and be highly involved in governance
3. Maximizing participation with a wide range of options for voting delegation
Interaction between KNC Holders & Kyber
This means KNC holders have been empowered to determine the network fee and how to allocate the fees to ensure maximum network growth. KNC holders will now have three fee allocation options to vote on:
- Voting Rewards: Immediate value creation. Holders who stake and participate in the KyberDAO get their share of the fees designated for rewards.
- Burning: Long-term value accrual. The decreasing supply of KNC will improve the token appreciation over time and benefit those who did not participate.
- Reserve Incentives: Value creation via network growth. By rewarding Kyber reserve managers based on their performance, it helps to drive greater volume, value, and network fees.

2.8 TRANSPARENCY AND STABILITY

The design of the KyberDAO is meant to allow for the greatest network stability, as well as maximum transparency and the ability to quickly recover in emergency situations. Initially the Kyber team will remain as maintainers of the KyberDAO. The system is being developed to be as verifiable as possible, while still maintaining maximum transparency regarding the role of the maintainer in the DAO.
Part of this transparency means that all data and processes are stored on-chain if feasible. Voting regarding network fees and allocations will be done on-chain and will be immutable. In situations where on-chain storage or execution is not feasible there will be a set of off-chain governance processes developed to ensure all decisions are followed through on.

2.9 KNC STAKING AND DELEGATION

Staking will be a new addition, and both staking and voting will be done in fixed periods of time called “epochs”. These epochs will be measured in Ethereum block times, and each KyberDAO epoch will last roughly 2 weeks.
This is a relatively rapid epoch and it is beneficial in that it gives more rapid DAO conclusion and decision-making, while also conferring faster reward distribution. On the downside it means there needs to be a new voting campaign every two weeks, which requires more frequent participation from KNC stakeholders, as well as more work from the Kyber team.
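A back-of-the-envelope sketch of how an epoch index could be derived from a block number (illustrative only; the ~15-second block time and the resulting ~80,640 blocks per two-week epoch are rough assumptions, not the configured KyberDAO parameters):

```python
def epoch_index(block_number, first_epoch_block, epoch_length_blocks):
    """Which KyberDAO epoch a given Ethereum block falls into."""
    if block_number < first_epoch_block:
        return -1  # before the DAO started
    return (block_number - first_epoch_block) // epoch_length_blocks


# Back-of-the-envelope: ~15 s per block => two weeks ~= 80,640 blocks.
BLOCKS_PER_EPOCH = 14 * 24 * 60 * 60 // 15
print(epoch_index(block_number=10_100_000, first_epoch_block=10_000_000,
                  epoch_length_blocks=BLOCKS_PER_EPOCH))
```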
Delegation will be part of the protocol, allowing stakers to delegate their voting rights to third-party pools or other entities. The pools receiving the delegation rights will be free to determine their own fee structure and voting decisions. Because the pools will share in rewards, and because their voting decisions will be clearly visible on-chain, it is expected that they will continue to work to the benefit of the network.

3. TRADING

After the September 2017 ICO, KNC settled into a trading price that hovered around $1.00 (decreasing in BTC value) until December. The token has followed the trend of most other altcoins — rising in price through December and sharply declining toward the beginning of January 2018.
The KNC price fell throughout all of 2018 with one exception during April. From April 6th to April 28th, the price rose over 200 percent. This run-up coincided with a blog post outlining plans to bring Bitcoin to the Ethereum blockchain. Since then, however, the price has steadily fallen, currently resting on what looks like a $0.15 (~0.000045 BTC) floor.
With the number of partners using the Kyber Network, the price may rise as they begin to fully use the network. The development team has consistently hit the milestones they’ve set out to achieve, so make note of any release announcements on the horizon.

4. COMPETITION

The 0x project is the biggest competitor to Kyber Network. Both teams are attempting to enter the decentralized exchange market. The primary difference between the two is that Kyber performs the entire exchange process on-chain while 0x keeps the order book and matching off-chain.
As a crypto swap exchange, the platform also competes with ShapeShift and Changelly.

5.KYBER MILESTONES

• June 2020: Digifox, an all-in-one finance application by popular crypto trader and Youtuber Nicholas Merten a.k.a DataDash (340K subs), integrated Kyber to enable users to easily swap between cryptocurrencies without having to leave the application.
• June 2020: Stake Capital partnered with Kyber to provide convenient KNC staking and delegation services, and also took a KNC position to participate in governance.
• June 2020: Outlined the benefits of the Fed Price Reserve (FPR) for professional market makers and advanced developers.
• May 2020: Kyber crossed US$1 Billion in total trading volume and 1 Million transactions, performed entirely on-chain on Ethereum.
• May 2020: StakeWith.Us partnered Kyber Network as a KyberDAO Pool Master.
• May 2020: 2Key, a popular blockchain referral solution using smart links, integrated Kyber’s on-chain liquidity protocol for seamless token swaps.
• May 2020: Blockchain game League of Kingdoms integrated Kyber to accept Token Payments for Land NFTs.
• May 2020: Joined the Zcash Developer Alliance, an invite-only working group to advance Zcash development and interoperability.
• May 2020: Joined the Chicago DeFi Alliance to help accelerate on-chain market making for professionals and developers.
• March 2020: Set a new record of USD $33.7M in 24H fully on-chain trading volume, and $190M in 30 day on-chain trading volume.
• March 2020: Integrated by Rarible, Bullionix, and Unstoppable Domains, with the KyberWidget deployed on IPFS, which allows anyone to swap tokens through Kyber without being blocked.
• February 2020: Popular Ethereum blockchain game Axie Infinity integrated Kyber to accept ERC20 payments for NFT game items.
• February 2020: Kyber’s protocol was integrated by Gelato Finance, Idle Finance, rTrees, Sablier, and 0x API for their liquidity needs.
• January 2020: Kyber Network was found to be the most used protocol in the whole decentralized finance (DeFi) space in 2019, according to a DeFi research report by Binance.
• December 2019: Switcheo integrated Kyber’s protocol for enhanced liquidity on their own DEX.
• December 2019: DeFi Wallet Eidoo integrated Kyber for seamless in-wallet token swaps.
• December 2019: Announced the development of the Katalyst Protocol Upgrade and new KNC token model.
• July 2019: Developed the Waterloo Bridge, a Decentralized Practical Cross-chain Bridge between EOS and Ethereum, successfully demonstrating a token swap between Ethereum and EOS.
• July 2019: Trust Wallet, the official Binance wallet, integrated Kyber as part of its decentralized token exchange service, allowing even more seamless in-wallet token swaps for thousands of users around the world.
• May 2019: HTC, the large consumer electronics company with more than 20 years of innovation, integrated Kyber into its Zion Vault Wallet on EXODUS 1, the first native web 3.0 blockchain phone, allowing users to easily swap between cryptocurrencies in a decentralized manner without leaving the wallet.
• January 2019: Introduced the Automated Price Reserve (APR), a capital efficient way for token teams and individuals to market make with low slippage.
• January 2019: The popular Enjin Wallet, a default blockchain DApp on the Samsung S10 and S20 mobile phones, integrated Kyber to enable in-wallet token swaps.
• October 2018: Kyber was a founding member of the WBTC (Wrapped Bitcoin) Initiative and DAO.
• October 2018: Developed the KyberWidget for ERC20 token swaps on any website, with CoinGecko being the first major project to use it on their popular site.


Dive Into Tendermint Consensus Protocol (I)

This article is written by the CoinEx Chain lab. CoinEx Chain is the world’s first public chain exclusively designed for DEX, and will also include a Smart Chain supporting smart contracts and a Privacy Chain protecting users’ privacy.
longcpp @ 20200618
This is Part 1 of the serialized articles aimed to explain the Tendermint consensus protocol in detail.
Part 1. Preliminary of the consensus protocol: security model and PBFT protocol
Part 2. Tendermint consensus protocol illustrated: two-phase voting protocol and the locking and unlocking mechanism
Part 3. Weighted round-robin proposer selection algorithm used in Tendermint project
Any consensus that is ultimately reached represents a general agreement, that is, the majority opinion. The consensus protocol on which the blockchain system operates is no exception. As a distributed system, the blockchain system aims to maintain its validity. Intuitively, the validity of the blockchain system has two meanings: firstly, there is no ambiguity, and secondly, it can process requests to update its status. The former corresponds to the safety requirements of distributed systems, while the latter to the requirements of liveness. The validity of distributed systems is mainly maintained by consensus protocols, considering that the multiple nodes and network communication involved in such systems may be unstable, which has brought huge challenges to the design of consensus protocols.

The semi-synchronous network model and Byzantine fault tolerance

Researchers of distributed systems characterize these problems that may occur in nodes and network communications using node failure models and network models. The fail-stop failure in node failure models refers to the situation where the node itself stops running due to configuration errors or other reasons, and is thus unable to go on with the consensus protocol. This type of failure will not cause side effects on other parts of the distributed system except that the node itself stops running. However, for such distributed systems as the public blockchain, when designing a consensus protocol, we still need to consider malicious behavior by nodes besides their failures. These incidents are all included in the Byzantine failure model, which covers all unexpected situations that may occur on the node, for example, passive downtime failures and any deviation intended by the nodes from the consensus protocol. For a better explanation, downtime failures refer to nodes’ passive running halt, and the Byzantine failure to any arbitrary deviation of nodes from the consensus protocol.
Compared with the node failure model which can be roughly divided into the passive and active models, the modeling of network communication is more difficult. The network itself suffers problems of instability and communication delay. Moreover, since all network communication is ultimately completed by the node which may have a downtime failure or a Byzantine failure in itself, it is usually difficult to define whether such failure arises from the node or the network itself when a node does not receive another node's network message. Although the network communication may be affected by many factors, the researchers found that the network model can be classified by the communication delay. For example, the node may fail to send data packages due to the fail-stop failure, and as a result, the corresponding communication delay is unknown and can be any value. According to the concept of communication delay, the network communication model can be divided into the following three categories:
  • The synchronous network model: There is a fixed, known upper bound of delay $\Delta$ in network communication. Under this model, the maximum delay of network communication between two nodes in the network is $\Delta$. Even if there is a malicious node, the communication delay arising therefrom does not exceed $\Delta$.
  • The asynchronous network model: There is an unknown delay in network communication, with no known upper bound on the delay, but the message will still be successfully delivered in the end. Under this model, the network communication delay between two nodes in the network can be any possible value, that is, a malicious node, if any, can arbitrarily extend the communication delay.
  • The semi-synchronous network model: Assume that there is a Global Stabilization Time (GST), before which it is an asynchronous network model and after which, a synchronous network model. In other words, there is a fixed, known upper bound of delay in network communication $\Delta$. A malicious node can delay the GST arbitrarily, and nodes receive no notification when GST occurs. Under this model, a message sent at time $T$ is guaranteed to be delivered by time $\Delta + \max(T, GST)$.
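As a quick worked instance of this bound (numbers chosen purely for illustration): with $\Delta = 2s$ and $GST = 100s$, a message sent at $T = 30s$ (before GST) is guaranteed to be delivered by $\Delta + \max(30, 100) = 102s$, while a message sent at $T = 150s$ (after GST) is guaranteed to be delivered by $\Delta + \max(150, 100) = 152s$.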
The synchronous network model is the most ideal network environment: every message sent through the network is received within a predictable time. But this model cannot reflect real network communication, since in a real network, failures are inevitable from time to time, breaking the assumption of the synchronous network model. The asynchronous network model goes to the other extreme and cannot reflect the real network situation either. Moreover, according to the FLP (Fischer-Lynch-Paterson) impossibility result, under this model no deterministic consensus protocol can guarantee consensus in a limited time if even a single node may fail. In contrast, the semi-synchronous network model better describes real-world network communication: network communication is usually synchronous, or may return to normal after a short time. Such an experience should be familiar to everyone: a web page that usually loads quickly opens slowly every now and then, and you have to retry before you know the network is back to normal, since there is usually no notification. The peer-to-peer (P2P) network communication widely used in blockchain projects also makes it possible for a node to send and receive information through multiple network channels, and it is unrealistic to keep blocking the network communication of a node for a long time. Therefore, all the discussion below assumes the semi-synchronous network model.
The design and selection of consensus protocols for public chain networks that allow nodes to dynamically join and leave need to consider possible Byzantine failures. Therefore, the consensus protocol of a public chain network is designed to guarantee the security and liveness of the network under the semi-synchronous network model on the premise of possible Byzantine failure. Researchers of distributed systems point out that to ensure the security and liveness of the system, the consensus protocol itself needs to meet three requirements:
  • Validity: The value reached by honest nodes must be the value proposed by one of them
  • Agreement: All honest nodes must reach consensus on the same value
  • Termination: The honest nodes must eventually reach consensus on a certain value
Validity and agreement can guarantee the security of the distributed system, that is, the honest nodes will never reach a consensus on a random value, and once the consensus is reached, all honest nodes agree on this value. Termination guarantees the liveness of distributed systems. A distributed system unable to reach consensus is useless.

The CAP theorem and Byzantine Generals Problem

In a semi-synchronous network, is it possible to design a Byzantine fault-tolerant consensus protocol that satisfies validity, agreement, and termination? How many Byzantine nodes can a system tolerate? The CAP theorem and the Byzantine Generals Problem provide an answer to these two questions and have thus become the basic guidelines for the design of Byzantine fault-tolerant consensus protocols.
Lamport, Shostak, and Pease abstracted the design of the consensus mechanism in the distributed system in 1982 as the Byzantine Generals Problem, which refers to such a situation as described below: several generals each lead the army to fight in the war, and their troops are stationed in different places. The generals must formulate a unified action plan for the victory. However, since the camps are far away from each other, they can only communicate with each other through the communication soldiers, or, in other words, they cannot appear on the same occasion at the same time to reach a consensus. Unfortunately, among the generals, there is a traitor or two who intend to undermine the unified actions of the loyal generals by sending the wrong information, and the communication soldiers cannot send the message to the destination by themselves. It is assumed that each communication soldier can prove the information he has brought comes from a certain general, just as in the case of a real BFT consensus protocol, each node has its public and private keys to establish an encrypted communication channel for each other to ensure that its messages will not be tampered with in the network communication, and the message receiver can also verify the sender of the message based thereon. As already mentioned, any consensus agreement ultimately reached represents the consensus of the majority. In the process of generals communicating with each other for an offensive or retreat, a general also makes decisions based on the majority opinion from the information collected by himself.
According to the research of Lamport et al, if there are 1/3 or more traitors in the node, the generals cannot reach a unified decision. For example, in the following figure, assume there are 3 generals and only 1 traitor. In the figure on the left, suppose that General C is the traitor, and A and B are loyal. If A wants to launch an attack and informs B and C of such intention, yet the traitor C sends a message to B, suggesting what he has received from A is a retreat. In this case, B can't decide as he doesn't know who the traitor is, and the information received is insufficient for him to decide. If A is a traitor, he can send different messages to B and C. Then C faithfully reports to B the information he received. At this moment as B receives conflicting information, he cannot make any decisions. In both cases, even if B had received consistent information, it would be impossible for him to spot the traitor between A and C. Therefore, it is obvious that in both situations shown in the figure below, the honest General B cannot make a choice.
According to this conclusion, with $n$ generals and at most $f$ traitors, the generals cannot reach a consensus if $n \leq 3f$; with $n > 3f$, a consensus can be reached. This also suggests that when the number of Byzantine failures $f$ reaches 1/3 or more of the total number of nodes $n$ in the system ($f \ge n/3$), no consensus protocol can reach consensus among all honest nodes. Only when $f < n/3$ is consensus possible. Without loss of generality, the subsequent discussion on consensus protocols assumes $n \ge 3f + 1$ by default.
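To make the bound concrete, here is a minimal Python sketch (purely illustrative, not part of any protocol implementation) that computes the largest number of Byzantine nodes a system of $n$ nodes can tolerate and checks the $n > 3f$ condition for the examples above:

```python
def max_byzantine(n):
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3


def can_tolerate(n, f):
    """Whether n nodes can reach Byzantine consensus despite f traitors (n > 3f)."""
    return n > 3 * f


print(max_byzantine(4))    # 1: four generals can tolerate a single traitor
print(can_tolerate(3, 1))  # False: the 3-general, 1-traitor example above
print(can_tolerate(7, 2))  # True: n = 7 >= 3*2 + 1
```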
The conclusion reached by Lamport et al. on the Byzantine Generals Problem draws a line between the possible and the impossible in the design of the Byzantine fault tolerance consensus protocol. Within the possible range, how will the consensus protocol be designed? Can both the security and liveness of distributed systems be fully guaranteed? Brewer provided the answer in his CAP theorem in 2000. It indicated that a distributed system requires the following three basic attributes, but any distributed system can only meet two of the three at the same time.
  1. Consistency: When any node responds to the request, it must either provide the latest status information or provide no status information
  2. Availability: Any node in the system must be able to continue reading and writing
  3. Partition Tolerance: The system can tolerate the loss of any number of messages between two nodes and still function normally

https://preview.redd.it/1ozfwk7u7m851.png?width=1400&format=png&auto=webp&s=fdee6318de2cf1c021e636654766a7a0fe7b38b4
A distributed system aims to provide consistent services. Therefore, the consistency attribute requires that the two nodes in the system cannot provide conflicting status information or expired information, which can ensure the security of the distributed system. The availability attribute is to ensure that the system can continuously update its status and guarantee the availability of distributed systems. The partition tolerance attribute is related to the network communication delay, and, under the semi-synchronous network model, it can be the status before GST when the network is in an asynchronous status with an unknown delay in the network communication. In this condition, communicating nodes may not receive information from each other, and the network is thus considered to be in a partitioned status. Partition tolerance requires the distributed system to function normally even in network partitions.
The proof of the CAP theorem can be demonstrated with the following diagram. The curve represents the network partition, and each network has four nodes, distinguished by the numbers 1, 2, 3, and 4. The distributed system stores color information, and all the status information stored by all nodes is blue at first.
  1. Partition tolerance and availability mean the loss of consistency: When node 1 receives a new request in the leftmost image, the status changes to red, the status transition information of node 1 is passed to node 3, and node 3 also updates the status information to red. However, since node 2 and node 4 did not receive the corresponding information due to the network partition, their status information is still blue. At this moment, if the status information is queried through node 2, the blue returned by node 2 is not the latest status of the system, thus losing consistency.
  2. Partition tolerance and consistency mean the loss of availability: In the middle figure, the initial status information of all nodes is blue. When node 1 and node 3 update the status information to red, node 2 and node 4 keep the outdated blue information due to the network partition. When status information is queried through node 2, node 2 must first ask other nodes to make sure it holds the latest status before returning it, because it needs to preserve consistency; but because of the network partition, node 2 cannot receive any information from node 1 or node 3. Node 2 therefore cannot determine whether it is in the latest status, so it chooses not to return any information, thus depriving the system of availability.
  3. Consistency and availability mean the loss of the partition tolerance: In the right-most figure, the system does not have a network partition at first, and both status updates and queries can go smoothly. However, once a network partition occurs, it degenerates into one of the previous two conditions. It is thus proved that any distributed system cannot have consistency, availability, and partition tolerance all at the same time.

https://preview.redd.it/456x2blv7m851.png?width=1400&format=png&auto=webp&s=550797373145b8fc1471bdde68ed5f8d45adb52b
The discovery of the CAP theorem seems to declare that the aforementioned goals of the consensus protocol are impossible. However, if you look carefully, you may find from the above that those are all extreme cases, such as network partitions that cause the failure of information transmission, which could be rare, especially in a P2P network. In the second case, a real system rarely behaves like node 2 and returns nothing; the general practice is to query other nodes and, after a while, return what it believes to be the latest status, regardless of whether it has received responses from the other nodes. Therefore, although the CAP theorem points out that no distributed system can satisfy the three attributes at the same time, it is not a binary choice, as the designer of the consensus protocol can weigh up all three attributes according to the needs of the distributed system. However, as communication delay is always involved in a distributed system, one always needs to choose between availability and consistency while ensuring a certain degree of partition tolerance. Specifically, in the second case, it is about the value that node 2 returns: a possibly outdated value or no value. Returning the possibly outdated value may violate consistency but guarantees availability; returning no value deprives the system of availability but guarantees its consistency. The Tendermint consensus protocol, to be introduced later, chooses consistency in this trade-off. In other words, it will lose availability in some cases.
The genius of Satoshi Nakamoto is that with constraints of the CAP theorem, he managed to reach a reliable Byzantine consensus in a distributed network by combining PoW mechanism, Satoshi Nakamoto consensus, and economic incentives with appropriate parameter configuration. Whether Bitcoin's mechanism design solves the Byzantine Generals Problem has remained a dispute among academicians. Garay, Kiayias, and Leonardos analyzed the link between Bitcoin mechanism design and the Byzantine consensus in detail in their paper The Bitcoin Backbone Protocol: Analysis and Applications. In simple terms, the Satoshi Consensus is a probabilistic Byzantine fault-tolerant consensus protocol that depends on such conditions as the network communication environment and the proportion of malicious nodes' hashrate. When the proportion of malicious nodes’ hashrate does not exceed 1/2 in a good network communication environment, the Satoshi Consensus can reliably solve the Byzantine consensus problem in a distributed environment. However, when the environment turns bad, even with the proportion within 1/2, the Satoshi Consensus may still fail to reach a reliable conclusion on the Byzantine consensus problem. It is worth noting that the quality of the network environment is relative to Bitcoin's block interval. The 10-minute block generation interval of the Bitcoin can ensure that the system is in a good network communication environment in most cases, given the fact that the broadcast time of a block in the distributed network is usually just several seconds. In addition, economic incentives can motivate most nodes to actively comply with the agreement. It is thus considered that with the current Bitcoin network parameter configuration and mechanism design, the Bitcoin mechanism design has reliably solved the Byzantine Consensus problem in the current network environment.

Practical Byzantine Fault Tolerance, PBFT

It is not an easy task to design a Byzantine fault-tolerant consensus protocol in a semi-synchronous network. The first practically usable Byzantine fault-tolerant consensus protocol is the Practical Byzantine Fault Tolerance (PBFT) designed by Castro and Liskov in 1999, the first of its kind with polynomial complexity. For a distributed system with $n$ nodes, the communication complexity is $O(n^2)$. Castro and Liskov showed in the paper that by transforming a centralized file system into a distributed one using the PBFT protocol, the overall performance was only slowed down by 3%. In this section we will briefly introduce the PBFT protocol, paving the way for further detailed explanations of the Tendermint protocol and the improvements of the Tendermint protocol.
The PBFT protocol that includes $n=3f+1$ nodes can tolerate up to $f$ Byzantine nodes. In the original paper of PBFT, full connection is required among all the $n$ nodes, that is, any two of the $n$ nodes must be connected. All the nodes of the network jointly maintain the system status through network communication. In the Bitcoin network, a node can participate in or exit the consensus process through hashrate mining at any time; in contrast, the set of nodes participating in the PBFT protocol is managed by an administrator and needs to be determined before the protocol starts. All nodes in the PBFT protocol are divided into two categories, master nodes and slave nodes. There is only one master node at any time, and all nodes take turns to be the master node. All nodes run in a rotation process called a view, in each of which the master node will be re-elected. The master node selection algorithm in PBFT is very simple: all nodes become the master node in turn by their index number. In each view, all nodes try to reach a consensus on the system status. It is worth mentioning that in the PBFT protocol, each node has its own digital signature key pair. All sent messages (including request messages from the client) need to be signed to ensure the integrity of the message in the network and the traceability of the message itself (you can determine who sent a message based on the digital signature).
The following figure shows the basic flow of the PBFT consensus protocol. Assume that the current view's master node is node 0. Client C initiates a request to master node 0. After the master node receives the request, it broadcasts the request to all slave nodes; the nodes process the request of client C and return the result to the client. After the client receives f+1 identical results from different nodes (based on the signature values), that result can be taken as the final result of the entire operation. Since the system can have at most f Byzantine nodes, at least one of the f+1 results received by the client comes from an honest node, and the security of the consensus protocol guarantees that all honest nodes will reach consensus on the same status. So, feedback from one honest node is enough to confirm that the corresponding request has been processed by the system.

https://preview.redd.it/sz8so5ly7m851.png?width=1400&format=png&auto=webp&s=d472810e76bbc202e91a25ef29a51e109a576554
For the status synchronization of all honest nodes, the PBFT protocol has two constraints on each node: on one hand, all nodes must start from the same status, and on the other, the status transition of all nodes must be definite, that is, given the same status and request, the results after the operation must be the same. Under these two constraints, as long as the entire system agrees on the processing order of all transactions, the status of all honest nodes will be consistent. This is also the main purpose of the PBFT protocol: to reach a consensus on the order of transactions between all nodes, thereby ensuring the security of the entire distributed system. In terms of availability, the PBFT consensus protocol relies on a timeout mechanism to find anomalies in the consensus process and start the View Change protocol in time to try to reach a consensus again.
The figure above shows a simplified workflow of the PBFT protocol. Where C is the client, 0, 1, 2, and 3 represent 4 nodes respectively. Specifically, 0 is the master node of the current view, 1, 2, 3 are slave nodes, and node 3 is faulty. Under normal circumstances, the PBFT consensus protocol reaches consensus on the order of transactions between nodes through a three-phase protocol. These three phases are respectively: Pre-Prepare, Prepare, and Commit:
  • In the pre-preparation phase, the master node is responsible for assigning a sequence number to the received client request and broadcasting the PRE-PREPARE message to the slave nodes. The message contains the hash value d of the client request, the current view number v, the sequence number n assigned by the master node to the request, and the signature sig of the master node. The scheme design of the PBFT protocol separates request transmission from request sequencing, and request transmission is not discussed here. A slave node that receives the message accepts it after confirming that the message is legitimate and then enters the preparation phase. The checks in this step cover the basic signature, the hash value, the current view and, most importantly, whether the master node has already assigned the same sequence number to another request from the client in the current view.
  • In the preparation phase, each slave node broadcasts a PREPARE message to all nodes (including itself), indicating that it assigns the sequence number n to the client request with hash value d under the current view v, with its signature sig as proof. A node receiving the message checks the correctness of the signature, the match of the view and sequence numbers, etc., and accepts the legitimate message. When the PRE-PREPARE message about a client request (from the master node) received by a node matches the PREPARE messages from 2f slave nodes, the system has agreed on the sequence number assigned to that client request in the current view. This means that 2f+1 nodes in the current view agree with the request sequence number. Since at most f of them can be malicious, at least f+1 honest nodes have agreed with the allocation of the request sequence number. With f malicious nodes, there are a total of 2f+1 honest nodes, so f+1 represents the majority of the honest nodes, which is the consensus of the majority mentioned before.
  • After a node (whether the master node or a slave node) has received the PRE-PREPARE message for a client request and 2f matching PREPARE messages, it broadcasts a COMMIT message across the network and enters the commit phase. This message indicates that the node has observed that the whole network has reached a consensus on the sequence number allocated to the client's request. When a node receives 2f+1 COMMIT messages, at least f+1 of them come from honest nodes, that is, the majority of the honest nodes have observed that the entire network has reached consensus on the arrangement of sequence numbers for the client's request. The node can then process the client request and return the execution result to the client.
Roughly speaking, in the pre-preparation phase, the master node assigns a sequence number to each new client request. During preparation, all nodes reach consensus on the client request's sequence number within the current view, while the commit phase guarantees the consistency of the request's sequence number across different views. In addition, the design of the PBFT protocol itself does not require requests to be committed strictly in the order of their assigned sequence numbers; they may be committed out of order, which can improve the efficiency of the implementation of the consensus protocol. Yet, the requests are still executed in the order of the sequence numbers assigned by the consensus protocol, preserving the consistency of the distributed system.
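As an illustration (not code from the PBFT paper), the per-request bookkeeping described above can be sketched in Scala as a small tracker that counts matching PREPARE and COMMIT messages for a given view and sequence number; message verification (signatures, hashes, view checks) is assumed to happen before messages reach it:

// Illustrative sketch of per-request PBFT phase tracking.
final class RequestSlot(val view: Long, val seq: Long, val digest: String, f: Int) {
  private var prePrepared = false
  private val prepares = scala.collection.mutable.Set.empty[Int] // ids of PREPARE senders
  private val commits  = scala.collection.mutable.Set.empty[Int] // ids of COMMIT senders

  def onPrePrepare(): Unit = prePrepared = true
  def onPrepare(nodeId: Int): Unit = prepares += nodeId
  def onCommit(nodeId: Int): Unit = commits += nodeId

  // "Prepared": the PRE-PREPARE from the master plus 2f matching PREPARE messages.
  def prepared: Boolean = prePrepared && prepares.size >= 2 * f

  // "Committed": 2f+1 matching COMMIT messages observed; the request can be executed.
  def committed: Boolean = prepared && commits.size >= 2 * f + 1
}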
In the three-phase protocol execution of the PBFT protocol, in addition to maintaining the status information of the distributed system, each node also needs to log all kinds of consensus messages it receives. The gradual accumulation of logs consumes considerable system resources. Therefore, the PBFT protocol additionally defines checkpoints to help nodes perform garbage collection. You can set a checkpoint every 100 or 1000 request sequence numbers. After the client request at a checkpoint is executed, the node broadcasts a CHECKPOINT message throughout the network, indicating that after executing the client request with sequence number n, the hash value of the system status is d, vouched for by its own signature sig. After 2f+1 matching CHECKPOINT messages (one of which can come from the node itself) are received, most of the honest nodes in the entire network have reached a consensus on the system status after the execution of the client request with sequence number n, and the node can then clear all relevant log records of client requests with sequence numbers less than n. The node needs to save these 2f+1 CHECKPOINT messages as proof of the legitimate status at this moment, and the corresponding checkpoint is called a stable checkpoint.
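A rough Scala sketch of this checkpoint bookkeeping (again illustrative, with made-up types rather than a reference implementation) might look as follows:

import scala.collection.mutable

// Illustrative sketch of PBFT checkpoint handling and log garbage collection.
final class CheckpointLog(f: Int, interval: Long = 100L) {
  // (sequence number, state digest) -> ids of nodes that sent a matching CHECKPOINT
  private val votes = mutable.Map.empty[(Long, String), mutable.Set[Int]]
  // request sequence number -> logged consensus information for that request
  private val log = mutable.Map.empty[Long, String]
  private var stableCheckpoint = 0L

  def isCheckpoint(seq: Long): Boolean = seq % interval == 0

  def record(seq: Long, info: String): Unit = log(seq) = info

  def onCheckpointMsg(seq: Long, stateDigest: String, nodeId: Int): Unit = {
    val voters = votes.getOrElseUpdate((seq, stateDigest), mutable.Set.empty[Int])
    voters += nodeId
    // 2f+1 matching CHECKPOINT messages make this checkpoint stable; log records of
    // requests with smaller sequence numbers can then be cleared.
    if (voters.size >= 2 * f + 1 && seq > stableCheckpoint) {
      stableCheckpoint = seq
      log.keys.filter(_ < seq).toList.foreach(log.remove)
    }
  }
}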
The three-phase protocol of the PBFT protocol ensures the consistency of the processing order of client requests, and the checkpoint mechanism helps nodes perform garbage collection and further ensures the status consistency of the distributed system; together they guarantee the security of the distributed system mentioned above. How, then, is the availability of the distributed system guaranteed? In the semi-synchronous network model, a timeout mechanism is usually introduced, and it is related to the delays in the network environment: the network delay is assumed to have a known upper bound after GST. Under such conditions, an initial timeout value is usually set according to the network conditions where the system is deployed. When a timeout event occurs, besides triggering the corresponding processing flow, additional mechanisms are activated to readjust the waiting time. For example, an algorithm like TCP's exponential backoff can be adopted to adjust the waiting time after a timeout event.
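For instance, the timeout adjustment could be sketched along the following lines (an illustration only; the initial value and the cap are assumptions):

// Exponential backoff for consensus timeouts, in the spirit of TCP's backoff.
final class BackoffTimer(initialMillis: Long = 1000L, maxMillis: Long = 60000L) {
  private var current = initialMillis

  // Called when a round completes normally: reset to the initial timeout.
  def onProgress(): Unit = current = initialMillis

  // Called on a timeout event: double the waiting time, up to a cap,
  // before retrying (for example, before starting a new view).
  def onTimeout(): Long = {
    current = math.min(current * 2, maxMillis)
    current
  }

  def timeoutMillis: Long = current
}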
To ensure the availability of the system, the PBFT protocol also introduces a timeout mechanism. In addition, because the master node itself may suffer a Byzantine failure, the PBFT protocol also needs to ensure the security and availability of the system in that case. When a Byzantine failure occurs in the master node, for example when a slave node does not receive the PRE-PREPARE message from the master node within the time window, or receives one that it determines to be illegitimate, the slave node can broadcast a VIEWCHANGE message to the entire network, indicating that it requests to switch to the new view with sequence number v+1. In the message, n indicates the request sequence number corresponding to the node's latest local stable checkpoint, and C is the set of 2f+1 legitimate CHECKPOINT messages proving that stable checkpoint, as described above. After the latest stable checkpoint and before initiating the VIEWCHANGE message, the system may already have reached a consensus on the sequence numbers of some request messages in the previous view. To preserve the consistency of these request sequence numbers across the view switch, the VIEWCHANGE message needs to carry this information into the new view, which is the meaning of the P field in the message. P contains, for every client request with a sequence number greater than n collected at the node, the proof that a consensus has been reached on its sequence number at that node: the legitimate PRE-PREPARE message of the request and the 2f matching PREPARE messages. When the master node of view v+1 collects 2f+1 VIEWCHANGE messages, it can broadcast a NEW-VIEW message and take the entire system into the new view. To preserve the security of the system in combination with the three-phase protocol, the construction rules for the NEW-VIEW message are quite complicated; you can refer to the original PBFT paper for more details.

https://preview.redd.it/x5efdc908m851.png?width=1400&format=png&auto=webp&s=97b4fd879d0ec668ee0990ea4cadf476167a2948
The VIEWCHANGE message contains a lot of information: for example, C contains 2f+1 signatures, and P contains several signature sets, each with 2f+1 signatures. At least 2f+1 nodes need to send a VIEWCHANGE message before the system enters the next view, which means that, in addition to the complex logic of constructing the VIEWCHANGE and NEW-VIEW messages, the communication complexity of the view change protocol is $O(n^2)$. Such complexity limits the PBFT protocol to supporting only a small number of nodes; with around 100 nodes it is usually too costly to deploy PBFT in practice. It is worth noting that some materials inappropriately attribute the communication complexity of the PBFT protocol to the full connection between the n nodes. By changing the fully connected network topology to the P2P topology based on distributed hash tables commonly used in blockchain projects, the high communication volume caused by full connection can easily be avoided, yet it remains difficult to improve the communication complexity of the view change process. In recent years, researchers have proposed reducing the amount of communication in this step by adopting aggregate signature schemes. With this technology, 2f+1 pieces of signature information can be compressed into one, thereby reducing the communication volume during a view change.
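To keep the fields described above straight, here is a schematic Scala sketch (not the wire format from the PBFT paper) of what a VIEWCHANGE message carries:

// Schematic sketch of the VIEWCHANGE message contents.
final case class CheckpointMsg(seq: Long, stateDigest: String, nodeId: Int, sig: Array[Byte])
final case class PrePrepareMsg(view: Long, seq: Long, requestDigest: String, sig: Array[Byte])
final case class PrepareMsg(view: Long, seq: Long, requestDigest: String, nodeId: Int, sig: Array[Byte])

// Proof that consensus was reached on one request's sequence number:
// its PRE-PREPARE message plus 2f matching PREPARE messages.
final case class PreparedProof(prePrepare: PrePrepareMsg, prepares: Seq[PrepareMsg])

final case class ViewChangeMsg(
  newView: Long,                        // v + 1, the view the node wants to switch to
  stableSeq: Long,                      // n, sequence number of the latest local stable checkpoint
  checkpointProof: Seq[CheckpointMsg],  // C: the 2f+1 CHECKPOINT messages proving it
  prepared: Seq[PreparedProof],         // P: proofs for requests with sequence number > n
  nodeId: Int,
  sig: Array[Byte]
)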

FlowCards: A Declarative Framework for Development of Ergo dApps

Introduction
ErgoScript is the smart contract language used by the Ergo blockchain. While it has concise syntax adopted from Scala/Kotlin, it still may seem confusing at first because conceptually ErgoScript is quite different compared to conventional languages which we all know and love. This is because Ergo is a UTXO based blockchain, whereas smart contracts are traditionally associated with account based systems like Ethereum. However, Ergo's transaction model has many advantages over the account based model and with the right approach it can even be significantly easier to develop Ergo contracts than to write and debug Solidity code.
Below we will cover the key aspects of the Ergo contract model which make it different:
Paradigm
The account model of Ethereum is imperative. This means that the typical task of sending coins from Alice to Bob requires changing the balances in storage as a series of operations. Ergo's UTXO based programming model on the other hand is declarative. ErgoScript contracts specify conditions for a transaction to be accepted by the blockchain (not changes to be made in the storage state as result of the contract execution).
Scalability
In the account model of Ethereum both storage changes and validity checks are performed on-chain during code execution. In contrast, Ergo transactions are created off-chain and only validation checks are performed on-chain thus reducing the amount of operations performed by every node on the network. In addition, due to immutability of the transaction graph, various optimization strategies are possible to improve throughput of transactions per second in the network. Light verifying nodes are also possible thus further facilitating scalability and accessibility of the network.
Shared state
The account-based model is reliant on shared mutable state which is known to lead to complex semantics (and subtle million dollar bugs) in the context of concurrent/ distributed computation. Ergo's model is based on an immutable graph of transactions. This approach, inherited from Bitcoin, plays well with the concurrent and distributed nature of blockchains and facilitates light trustless clients.
Expressive Power
Ethereum advocated execution of a Turing-complete language on the blockchain. It theoretically promised unlimited potential; however, in practice severe limitations came to light, from excessive blockchain bloat and subtle multi-million dollar bugs to gas costs which limit contract complexity, and other such problems. Ergo, on the flip side, extends UTXO to enable Turing-completeness while limiting the complexity of the ErgoScript language itself. The same expressive power is achieved in a different and more semantically sound way.
With all of the above points, it should be clear that there are a lot of benefits to the model Ergo is using. In the rest of this article I will introduce you to the concept of FlowCards - a dApp developer component which allows for designing complex Ergo contracts in a declarative and visual way.

From Imperative to Declarative

In the imperative programming model of Ethereum a transaction is a sequence of operations executed by the Ethereum VM. The following Solidity function implements a transfer of tokens from sender to receiver . The transaction starts when sender calls this function on an instance of a contract and ends when the function returns.
// Sends an amount of existing coins from any caller to an address
function send(address receiver, uint amount) public {
    require(amount <= balances[msg.sender], "Insufficient balance.");
    balances[msg.sender] -= amount;
    balances[receiver] += amount;
    emit Sent(msg.sender, receiver, amount);
}
The function first checks the pre-conditions, then updates the storage (i.e. balances) and finally publishes the post-condition as the Sent event. The gas which is consumed by the transaction is sent to the miner as a reward for executing this transaction.
Unlike Ethereum, a transaction in Ergo is a data structure holding a list of input coins which it spends and a list of output coins which it creates preserving the total balances of ERGs and tokens (in which Ergo is similar to Bitcoin).
Turning back to the example above: since Ergo natively supports tokens, for this specific example of sending tokens we don't need to write any code in ErgoScript. Instead we need to create the ‘send’ transaction shown in the following figure, which describes the same token transfer, but declaratively.
https://preview.redd.it/sxs3kesvrsv41.png?width=1348&format=png&auto=webp&s=582382bc26912ff79114d831d937d94b6988e69f
The picture visually describes the following steps, which the network user needs to perform:
  1. Select unspent sender's boxes, containing in total tB >= amount of tokens and B >= txFee + minErg ERGs.
  2. Create an output target box which is protected by the receiver public key with minErg ERGs and amount of T tokens.
  3. Create one fee output protected by the minerFee contract with txFee ERGs.
  4. Create one change output protected by the sender public key, containing B - minErg - txFee ERGs and tB - amount of T tokens.
  5. Create a new transaction, sign it using the sender's secret key and send to the Ergo network.
What is important to understand here is that all of these steps are performed off-chain (for example using the Appkit Transaction API) by the user's application. Ergo network nodes don't need to repeat this transaction creation process; they only need to validate the already formed transaction. ErgoScript contracts are stored in the inputs of the transaction and check spending conditions. The node executes the contracts on-chain when the transaction is validated. The transaction is valid if all of the conditions are satisfied.
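The five steps above can be sketched in Scala roughly as follows. The types (Box, TxOutput, UnsignedTx) and the buildSendTx helper are hypothetical placeholders for illustration, not the actual Appkit API:

// Sketch of building the 'send' transaction off-chain with illustrative types.
final case class Box(ergs: Long, tokens: Map[String, Long], script: String)
final case class TxOutput(ergs: Long, tokens: Map[String, Long], script: String)
final case class UnsignedTx(inputs: Seq[Box], outputs: Seq[TxOutput])

def buildSendTx(senderBoxes: Seq[Box], sender: String, receiver: String,
                tokenId: String, amount: Long, txFee: Long, minErg: Long): UnsignedTx = {
  // Step 1: totals over the selected sender boxes.
  val B  = senderBoxes.map(_.ergs).sum                           // must be >= txFee + minErg
  val tB = senderBoxes.map(_.tokens.getOrElse(tokenId, 0L)).sum  // must be >= amount
  require(B >= txFee + minErg && tB >= amount, "insufficient funds")

  // Step 2: target box protected by the receiver's public key.
  val target = TxOutput(minErg, Map(tokenId -> amount), script = s"pk($receiver)")
  // Step 3: miner fee box.
  val fee = TxOutput(txFee, Map.empty, script = "minerFee")
  // Step 4: change box back to the sender.
  val change = TxOutput(B - minErg - txFee, Map(tokenId -> (tB - amount)), script = s"pk($sender)")

  // Step 5 (partly): the unsigned transaction; signing and submission happen separately.
  UnsignedTx(senderBoxes, Seq(target, fee, change))
}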
Thus, in Ethereum when we “send amount from sender to recipient” we are literally editing balances and updating the storage with a concrete set of commands. This happens on-chain and thus a new transaction is also created on-chain as the result of this process.
In Ergo (as in Bitcoin) transactions are created off-chain and the network nodes only verify them. The effect of the transaction on the blockchain state is that input coins (or Boxes in Ergo's parlance) are removed and output boxes are added to the UTXO set.
In the example above we don't use an ErgoScript contract but instead assume a signature check is used as the spending pre-condition. However in more complex application scenarios we of course need to use ErgoScript which is what we are going to discuss next.

From Changing State to Checking Context

In the send function example we first checked the pre-condition (require(amount <= balances[msg.sender],...) ) and then changed the state (i.e. update balances balances[msg.sender] -= amount ). This is typical in Ethereum transactions. Before we change anything we need to check if it is valid to do so.
In Ergo, as we discussed previously, the state (i.e. UTXO set of boxes) is changed implicitly when a valid transaction is included in a block. Thus we only need to check the pre-conditions before the transaction can be added to the block. This is what ErgoScript contracts do.
It is not possible to “change the state” in ErgoScript because it is a language for checking pre-conditions for spending coins. ErgoScript is a purely functional language without side effects that operates on immutable data values. This means all the inputs, outputs and other transaction parameters available in a script are immutable. This, among other things, makes ErgoScript a very simple language that is easy to learn and safe to use. Similar to Bitcoin, each input box contains a script which should return the true value in order to 1) allow spending of the box (i.e. removing it from the UTXO set) and 2) allow adding the transaction to the block.
If we are being pedantic, it is therefore incorrect (strictly speaking) to think of ErgoScript as the language of Ergo contracts, because it is the language of propositions (logical predicates, formulas, etc.) which protect boxes from “illegal” spending. Unlike Bitcoin, in Ergo the whole transaction and a part of the current blockchain context is available to every script. Therefore each script may check which outputs are created by the transaction, their ERG and token amounts (we will use this capability in our example DEX contracts), current block number etc.
In ErgoScript you define the conditions of whether changes (i.e. coin spending) are allowed to happen in a given context. This is in contrast to programming the changes imperatively in the code of a contract.
While Ergo's transaction model unlocks a whole range of applications (DEX, DeFi apps, LETS, etc.), designing contracts directly as pre-conditions for coin spending (or guarding scripts) is not intuitive. In the next sections we will consider a useful graphical notation for designing contracts declaratively using FlowCard Diagrams, a visual representation of executable components (FlowCards).
FlowCards aim to radically simplify dApp development on the Ergo platform by providing a high-level declarative language, execution runtime, storage format and a graphical notation.
We will start with a high-level view of the diagrams and then go down to the FlowCard specification.

FlowCard Diagrams

The idea behind FlowCard diagrams is based on the following observations: 1) An Ergo box is immutable and can only be spent in the transaction which uses it as an input. 2) We therefore can draw a flow of boxes through transactions, so that boxes flowing in to the transaction are spent and those flowing out are created and added to the UTXO. 3) A transaction from this perspective is simply a transformer of old boxes to the new ones preserving the balances of ERGs and tokens involved.
The following figure shows the main elements of the Ergo transaction we've already seen previously (now under the name of FlowCard Diagram).
https://preview.redd.it/06aqkcd1ssv41.png?width=1304&format=png&auto=webp&s=106eda730e0526919aabd5af9596b97e45b69777
There is a strictly defined meaning (semantics) behind every element of the diagram, so that the diagram is a visual representation (or a view) of the underlying executable component (called FlowCard).
The FlowCard can be used as a reusable component of an Ergo dApp to create and initiate the transaction on the Ergo blockchain. We will discuss this in the coming sections.
Now let's look at the individual pieces of the FlowCard diagram one by one.
1. Name and Parameters
Each flow card is given a name and a list of typed parameters. This is similar to a template with parameters. In the above figure we can see the Send flow card which has five parameters. The parameters are used in the specification.
2. Contract Wallet
This is a key element of the flow card. Every box has a guarding script. Often it is the script that checks a signature against a public key. This script is trivial in ErgoScript and is defined like the def pk(pubkey: Address) = { pubkey } template where pubkey is a parameter of the type Address . In the figure, the script template is applied to the parameter pk(sender) and thus a concrete wallet contract is obtained. Therefore pk(sender) and pk(receiver) yield different scripts and represent different wallets on the diagram, even though they use the same template.
Contract Wallet contains a set of all UTXO boxes which have a given script derived from the given script template using flow card parameters. For example, in the figure, the template is pk and parameter pubkey is substituted with the `sender’ flow card parameter.
3. Contract
Even though a contract is a property of a box, on the diagram we group the boxes by their contracts, therefore it looks like the boxes belong to the contracts, rather than the contracts belong to the boxes. In the example, we have three instantiated contracts pk(sender) , pk(receiver) and minerFee . Note, that pk(sender) is the instantiation of the pk template with the concrete parameter sender and minerFee is the instantiation of the pre-defined contract which protects the miner reward boxes.
4. Box name
In the diagram we can give each box a name. Besides readability of the diagram, we also use the name as a synonym of a more complex indexed access to the box in the contract. For example, change is the name of the box, which can also be used in the ErgoScript conditions instead of OUTPUTS(2) . We also use box names to associate spending conditions with the boxes.
5. Boxes in the wallet
In the diagram, we show boxes (darker rectangles) as belonging to the contract wallets (lighter rectangles). Each such box rectangle is connected with a grey transaction rectangle by either orange or green arrows or both. An output box (with an incoming green arrow) may include many lines of text where each line specifies a condition which should be checked as part of the transaction. The first line specifies the condition on the amount of ERG which should be placed in the box. Other lines may take one of the following forms:
  1. amount: TOKEN - the box should contain the given amount of the given TOKEN
  2. R == value - the box should contain the given value of the given register R
  3. boxName ? condition - the box named boxName should check condition in its script.
We discuss these conditions in the sections below.
6. Amount of ERGs in the box
Each box should store a minimum amount of ERGs. This is checked when the creating transaction is validated. In the diagram the amount of ERGs is always shown as the first line (e.g. B: ERG or B - minErg - txFee ). The value type ascription B: ERG is optional and may be used for readability. When the value is given as a formula, then this formula should be respected by the transaction which creates the box.
It is important to understand that variables like amount and txFee are not named properties of the boxes. They are parameters of the whole diagram and represent certain amounts. Put another way, they are shared parameters between transactions (e.g. the Sell Order and Swap transactions from the DEX example below share the tAmt parameter). So the same name is tied to the same value throughout the diagram (this is where the tooling would help a lot). However, when it comes to on-chain validation of those values, only the explicit conditions marked with ? are transformed to ErgoScript. All other conditions are ensured off-chain during transaction building (for example in an application using the Appkit API) and during transaction validation when it is added to the blockchain.
7. Amount of T token
A box can store values of many tokens. The tokens on the diagram are named, and a value variable may be associated with the token T using a value: T expression. The value may be given by a formula. If the formula is prefixed with a box name, like boxName ? formula, then it should also be checked in the guarding script of the boxName box. This additional specification is very convenient because 1) it allows the visual design to be validated automatically, and 2) the conditions specified in the boxes of a diagram are enough to synthesize the necessary guarding scripts (more about this below at “From Diagrams To ErgoScript Contracts”).
8. Tx Inputs
Inputs are connected to the corresponding transaction by orange arrows. An input arrow may have a label of the following forms:
  1. name@i - an optional name with an index, e.g. name@0 or just @2. This is a property of the target endpoint of the arrow. The name is used in conditions of related boxes and the index is the position of the corresponding box in the INPUTS collection of the transaction.
  2. !action - is a property of the source of the arrow and gives a name for an alternative spending path of the box (we will see this in DEX example)
Because of alternative spending paths, a box may have many outgoing orange arrows, in which case they should be labeled with different actions.
9. Transaction
A transaction spends input boxes and creates output boxes. The input boxes are given by the orange arrows and the labels are expected to put inputs at the right indexes in INPUTS collection. The output boxes are given by the green arrows. Each transaction should preserve a strict balance of ERG values (sum of inputs == sum of outputs) and for each token the sum of inputs >= the sum of outputs. The design diagram requires an explicit specification of the ERG and token values for all of the output boxes to avoid implicit errors and ensure better readability.
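A minimal sketch of this balance rule as a predicate (using illustrative types, not Ergo node code):

// Per-transaction balance rule: ERG sums match exactly; for each token,
// the inputs must cover the outputs.
final case class SimpleBox(ergs: Long, tokens: Map[String, Long])

def balanced(inputs: Seq[SimpleBox], outputs: Seq[SimpleBox]): Boolean = {
  val ergsOk = inputs.map(_.ergs).sum == outputs.map(_.ergs).sum
  val tokenIds = (inputs ++ outputs).flatMap(_.tokens.keys).toSet
  val tokensOk = tokenIds.forall { id =>
    inputs.map(_.tokens.getOrElse(id, 0L)).sum >= outputs.map(_.tokens.getOrElse(id, 0L)).sum
  }
  ergsOk && tokensOk
}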
10. Tx Outputs
Outputs are connected to the corresponding transaction by green arrows. An output arrow may have a label of the form name@i, where an optional name is accompanied by an index, e.g. name@0 or just @2. This is a property of the source endpoint of the arrow. The name is used in conditions of the related boxes and the index is the position of the corresponding box in the OUTPUTS collection of the transaction.

Example: Decentralized Exchange (DEX)

Now let's use the above described notation to design a FlowCard for a DEX dApp. It is simple enough yet also illustrates all of the key features of FlowCard diagrams which we've introduced in the previous section.
The dApp scenario is shown in the figure below: there are three participants (buyer, seller and DEX) of the DEX dApp and five different transaction types, which are created by the participants. The buyer wants to swap ergAmt of ERGs for tAmt of TID tokens (or, conversely, the seller wants to sell TID tokens for ERGs; who sends the order first doesn't matter). Both the buyer and the seller can cancel their orders at any time. The DEX off-chain matching service can find matching orders and create the Swap transaction to complete the exchange.
The following diagram fully (and formally) specifies all of the five transactions that must be created off-chain by the DEX dApp. It also specifies all of the spending conditions that should be verified on-chain.

https://preview.redd.it/piogz0v9ssv41.png?width=1614&format=png&auto=webp&s=e1b503a635ad3d138ef91e2f0c3b726e78958646
Let's discuss the FlowCard diagram and the logic of each transaction in details:
Buy Order Transaction
A buyer creates a Buy Order transaction. The transaction spends E amount of ERGs (which we will write E: ERG ) from one or more boxes in the pk(buyer) wallet. The transaction creates a bid box with ergAmt: ERG protected by the buyOrder script. The buyOrder script is synthesized from the specification (see below at “From Diagrams To ErgoScript Contracts”) either manually or automatically by a tool. Even though we don't need to define the buyOrder script explicitly during designing, at run time the bid box should contain the buyOrder script as the guarding proposition (which checks the box spending conditions), otherwise the conditions specified in the diagram will not be checked.
The change box is created to make the input and output sums of the transaction balanced. The transaction fee box is omitted because it can be added automatically by the tools. In practice, however, the designer can add the fee box explicitly to a diagram. This covers the cases of more complex transactions (like Swap) where there are many ways to pay the transaction fee.
Cancel Buy, Cancel Sell Transactions
At any time, the buyer can cancel the order by sending a Cancel Buy transaction. The transaction should satisfy the guarding buyOrder contract which protects the bid box. As you can see on the diagram, both the Cancel and the Swap transactions can spend the bid box. When a box has spending alternatives (or spending paths), each alternative is identified by a unique name prefixed with ! (!cancel and !swap for the bid box). Each alternative path has specific spending conditions. In our example, when the Cancel Buy transaction spends the bid box, the ?buyer condition should be satisfied, which we read as “the signature for the buyer address should be presented in the transaction”. Therefore, only the buyer can cancel the buy order. This “signature” condition is only required for the !cancel alternative spending path and not required for !swap .
Sell Order Transaction
The Sell Order transaction is similar to the BuyOrder in that it deals with tokens in addition to ERGs. The transaction spends E: ERG and T: TID tokens from seller's wallet (specified as pk(seller) contract). The two outputs are ask and change . The change is a standard box to balance transaction. The ask box keeps tAmt: TID tokens for the exchange and minErg: ERG - the minimum amount of ERGs required in every box.
Swap Transaction
This is a key transaction in the DEX dApp scenario. The transaction has several spending conditions on the input boxes and those conditions are included in the buyOrder and sellOrder scripts (which are verified when the transaction is added to the blockchain). However, on the diagram those conditions are not specified in the bid and ask boxes, they are instead defined in the output boxes of the transaction.
This is a convention for improved usability because most of the conditions relate to the properties of the output boxes. We could specify those properties in the bid box, but then we would have to use more complex expressions.
Let's consider the output created by the arrow labeled with buyerOut@0. This label tells us that the output is at index 0 in the OUTPUTS collection of the transaction and that in the diagram we can refer to this box by the buyerOut name. Thus we can label both the box itself and the arrow to give the box a name.
The conditions shown in the buyerOut box have the form bid ? condition , which means they should be verified on-chain in order to spend the bid box. The conditions have the following meaning:
  • tAmt: TID requires the box to have tAmt amount of TID token
  • R4 == bid.id requires R4 register in the box to be equal to id of the bid box.
  • script == buyer requires the buyerOut box to have the script of the wallet where it is located on the diagram, i.e. pk(buyer)
Similar properties are added to the sellerOut box, which is specified to be at index 1 and the name is given to it using the label on the box itself, rather than on the arrow.
The Swap transaction spends two boxes bid and ask using the !swap spending path on both, however unlike !cancel the conditions on the path are not specified. This is where the bid ? and ask ? prefixes come into play. They are used so that the conditions listed in the buyerOut and sellerOut boxes are moved to the !swap spending path of the bid and ask boxes correspondingly.
If you look at the conditions of the output boxes, you will see that they exactly specify the swap of values between seller's and buyer's wallets. The buyer gets the necessary amount of TID token and seller gets the corresponding amount of ERGs. The Swap transaction is created when there are two matching boxes with buyOrder and sellOrder contracts.

From Diagrams To ErgoScript Contracts

What is interesting about FlowCard specifications is that we can use them to automatically generate the necessary ErgoTree scripts. With the appropriate tooling support this can be done automatically, but in the absence of such tooling it can be done manually. Thus, the FlowCard allows us to capture and visually represent all of the design choices and semantic details of an Ergo dApp.
What we are going to do next is to mechanically create the buyOrder contract from the information given in the DEX flow card.
Recall that each script is a proposition (boolean valued expression) which should evaluate to true to allow spending of the box. When we have many conditions to be met at the same time we can combine them in a logical formula using the AND binary operation, and if we have alternatives (not necessarily exclusive) we can put them into the OR operation.
The buyOrder box has the alternative spending paths !cancel and !swap . Thus the ErgoScript code should have OR operation with two arguments - one for each spending path.
/** buyOrder contract */
{
  val cancelCondition = {}
  val swapCondition = {}
  cancelCondition || swapCondition
}
The formula for the cancelCondition expression is given in the !cancel spending path of the buyOrder box. We can directly include it in the script.
/** buyOrder contract */
{
  val cancelCondition = { buyer }
  val swapCondition = {}
  cancelCondition || swapCondition
}
For the !swap spending path of the buyOrder box the conditions are specified in the buyerOut output box of the Swap transaction. If we simply include them in the swapCondition then we get a syntactically incorrect script.
/** buyOrder contract */
{
  val cancelCondition = { buyer }
  val swapCondition = { tAmt: TID && R4 == bid.id && script == buyer }
  cancelCondition || swapCondition
}
We can however translate the conditions from the diagram syntax to ErgoScript expressions using the following simple rules
  1. buyerOut@0 ==> val buyerOut = OUTPUTS(0)
  2. tAmt: TID ==> tid._2 == tAmt where tid = buyerOut.tokens(TID)
  3. R4 == bid.id ==> R4 == SELF.id where R4 = buyerOut.R4[Coll[Byte]].get
  4. script == buyer ==> buyerOut.propositionBytes == buyer.propBytes
Note, in the diagram TID represents a token id, but ErgoScript doesn't have access to the tokens by the ids so we cannot write tokens.getByKey(TID) . For this reason, when the diagram is translated into ErgoScript, TID becomes a named constant of the index in tokens collection of the box. The concrete value of the constant is assigned when the BuyOrder transaction with the buyOrder box is created. The correspondence and consistency between the actual tokenId, the TID constant and the actual tokens of the buyerOut box is ensured by the off-chain application code, which is completely possible since all of the transactions are created by the application using FlowCard as a guiding specification. This may sound too complicated, but this is part of the translation from diagram specification to actual executable application code, most of which can be automated.
After the transformation we can obtain a correct script which checks all the required preconditions for spending the buyOrder box.
/** buyOrder contract */
def DEX(buyer: Address, seller: Address, TID: Int, ergAmt: Long, tAmt: Long) {
  val cancelCondition: SigmaProp = { buyer }  // verify buyer's sig (ProveDlog)
  val swapCondition = OUTPUTS.size > 0 && {   // securing OUTPUTS access
    val buyerOut = OUTPUTS(0)                 // from buyerOut@0
    buyerOut.tokens.size > TID && {           // securing tokens access
      val tid = buyerOut.tokens(TID)
      val regR4 = buyerOut.R4[Coll[Byte]]
      regR4.isDefined && {                    // securing R4 access
        val R4 = regR4.get
        tid._2 == tAmt &&                                // from tAmt: TID
        R4 == SELF.id &&                                 // from R4 == bid.id
        buyerOut.propositionBytes == buyer.propBytes     // from script == buyer
      }
    }
  }
  cancelCondition || swapCondition
}
A similar script for the sellOrder box can be obtained using the same translation rules. With the help of the tooling the code of contracts can be mechanically generated from the diagram specification.
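For reference, here is a hedged sketch of what such a sellOrder contract could look like when derived by the same rules, assuming the sellerOut box (at index 1) must hold ergAmt ERGs, have its R4 register equal to the ask box id, and be guarded by the seller's key; this is an illustration in the style of the example above, not code taken from the article or the Ergo codebase:

/** sellOrder contract - illustrative sketch derived by the same translation rules */
def DEXSell(buyer: Address, seller: Address, TID: Int, ergAmt: Long, tAmt: Long) {
  val cancelCondition: SigmaProp = { seller }  // only the seller can cancel the ask
  val swapCondition = OUTPUTS.size > 1 && {    // securing OUTPUTS access
    val sellerOut = OUTPUTS(1)                 // from sellerOut@1
    val regR4 = sellerOut.R4[Coll[Byte]]
    regR4.isDefined && {                       // securing R4 access
      sellerOut.value == ergAmt &&                     // from ergAmt: ERG (assumption)
      regR4.get == SELF.id &&                          // from R4 == ask.id (assumption)
      sellerOut.propositionBytes == seller.propBytes   // from script == seller
    }
  }
  cancelCondition || swapCondition
}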

Conclusions

Declarative programming models have already won the battle against imperative programming in many application domains like Big Data, Stream Processing, Deep Learning, Databases, etc. Ergo is pioneering the declarative model of dApp development as a better and safer alternative to the now popular imperative model of smart contracts.
The concept of FlowCard shifts the focus from writing ErgoScript contracts to the overall flow of values (hence the name), in such a way, that ErgoScript can always be generated from them. You will never need to look at the ErgoScript code once the tooling is in place.
Here are the possible next steps for future work:
  1. Storage format for FlowCard Spec and the corresponding EIP standardized file format (Json/XML/Protobuf). This will allow various tools (Diagram Editor, Runtime, dApps etc) to create and use *.flowcard files.
  2. FlowCard Viewer, which can generate the diagrams from *.flowcard files.
  3. FlowCard Runtime, which can run *.flowcard files, create and send transactions to Ergo network.
  4. FlowCard Designer Tool, which can simplify the development of complex diagrams. This will make designing and validating Ergo contracts a pleasant experience, more like drawing than coding. In addition, the correctness of the whole dApp scenario can be verified and controlled by the tooling.
submitted by eleanorcwhite to btc [link] [comments]

FlowCards: A Declarative Framework for Development of Ergo dApps

FlowCards: A Declarative Framework for Development of Ergo dApps
Introduction
ErgoScript is the smart contract language used by the Ergo blockchain. While it has concise syntax adopted from Scala/Kotlin, it still may seem confusing at first because conceptually ErgoScript is quite different compared to conventional languages which we all know and love. This is because Ergo is a UTXO based blockchain, whereas smart contracts are traditionally associated with account based systems like Ethereum. However, Ergo's transaction model has many advantages over the account based model and with the right approach it can even be significantly easier to develop Ergo contracts than to write and debug Solidity code.
Below we will cover the key aspects of the Ergo contract model which makes it different:
Paradigm
The account model of Ethereum is imperative. This means that the typical task of sending coins from Alice to Bob requires changing the balances in storage as a series of operations. Ergo's UTXO based programming model on the other hand is declarative. ErgoScript contracts specify conditions for a transaction to be accepted by the blockchain (not changes to be made in the storage state as result of the contract execution).
Scalability
In the account model of Ethereum both storage changes and validity checks are performed on-chain during code execution. In contrast, Ergo transactions are created off-chain and only validation checks are performed on-chain thus reducing the amount of operations performed by every node on the network. In addition, due to immutability of the transaction graph, various optimization strategies are possible to improve throughput of transactions per second in the network. Light verifying nodes are also possible thus further facilitating scalability and accessibility of the network.
Shared state
The account-based model is reliant on shared mutable state which is known to lead to complex semantics (and subtle million dollar bugs) in the context of concurrent/ distributed computation. Ergo's model is based on an immutable graph of transactions. This approach, inherited from Bitcoin, plays well with the concurrent and distributed nature of blockchains and facilitates light trustless clients.
Expressive Power
Ethereum advocated execution of a turing-complete language on the blockchain. It theoretically promised unlimited potential, however in practice severe limitations came to light from excessive blockchain bloat, subtle multi-million dollar bugs, gas costs which limit contract complexity, and other such problems. Ergo on the flip side extends UTXO to enable turing-completeness while limiting the complexity of the ErgoScript language itself. The same expressive power is achieved in a different and more semantically sound way.
With the all of the above points, it should be clear that there are a lot of benefits to the model Ergo is using. In the rest of this article I will introduce you to the concept of FlowCards - a dApp developer component which allows for designing complex Ergo contracts in a declarative and visual way.
From Imperative to Declarative
In the imperative programming model of Ethereum a transaction is a sequence of operations executed by the Ethereum VM. The following Solidity function implements a transfer of tokens from sender to receiver . The transaction starts when sender calls this function on an instance of a contract and ends when the function returns.
// Sends an amount of existing coins from any caller to an address function send(address receiver, uint amount) public { require(amount <= balances[msg.sender], "Insufficient balance."); balances[msg.sender] -= amount; balances[receiver] += amount; emit Sent(msg.sender, receiver, amount); } 
The function first checks the pre-conditions, then updates the storage (i.e. balances) and finally publishes the post-condition as the Sent event. The gas which is consumed by the transaction is sent to the miner as a reward for executing this transaction.
Unlike Ethereum, a transaction in Ergo is a data structure holding a list of input coins which it spends and a list of output coins which it creates preserving the total balances of ERGs and tokens (in which Ergo is similar to Bitcoin).
Turning back to the example above, since Ergo natively supports tokens, therefore for this specific example of sending tokens we don't need to write any code in ErgoScript. Instead we need to create the ‘send’ transaction shown in the following figure, which describes the same token transfer but declaratively.
https://preview.redd.it/id5kjdgn9tv41.png?width=1348&format=png&auto=webp&s=31b937d7ad0af4afe94f4d023e8c90c97c8aed2e
The picture visually describes the following steps, which the network user needs to perform:
  1. Select unspent sender's boxes, containing in total tB >= amount of tokens and B >= txFee + minErg ERGs.
  2. Create an output target box which is protected by the receiver public key with minErg ERGs and amount of T tokens.
  3. Create one fee output protected by the minerFee contract with txFee ERGs.
  4. Create one change output protected by the sender public key, containing B - minErg - txFee ERGs and tB - amount of T tokens.
  5. Create a new transaction, sign it using the sender's secret key and send to the Ergo network.
What is important to understand here is that all of these steps are preformed off-chain (for example using Appkit Transaction API) by the user's application. Ergo network nodes don't need to repeat this transaction creation process, they only need to validate the already formed transaction. ErgoScript contracts are stored in the inputs of the transaction and check spending conditions. The node executes the contracts on-chain when the transaction is validated. The transaction is valid if all of the conditions are satisfied.
Thus, in Ethereum when we “send amount from sender to recipient” we are literally editing balances and updating the storage with a concrete set of commands. This happens on-chain and thus a new transaction is also created on-chain as the result of this process.
In Ergo (as in Bitcoin) transactions are created off-chain and the network nodes only verify them. The effects of the transaction on the blockchain state is that input coins (or Boxes in Ergo's parlance) are removed and output boxes are added to the UTXO set.
In the example above we don't use an ErgoScript contract but instead assume a signature check is used as the spending pre-condition. However in more complex application scenarios we of course need to use ErgoScript which is what we are going to discuss next.
From Changing State to Checking Context
In the send function example we first checked the pre-condition (require(amount <= balances[msg.sender],...) ) and then changed the state (i.e. update balances balances[msg.sender] -= amount ). This is typical in Ethereum transactions. Before we change anything we need to check if it is valid to do so.
In Ergo, as we discussed previously, the state (i.e. UTXO set of boxes) is changed implicitly when a valid transaction is included in a block. Thus we only need to check the pre-conditions before the transaction can be added to the block. This is what ErgoScript contracts do.
It is not possible to “change the state” in ErgoScript because it is a language to check pre-conditions for spending coins. ErgoScript is a purely functional language without side effects that operates on immutable data values. This means all the inputs, outputs and other transaction parameters available in a script are immutable. This, among other things, makes ErgoScript a very simple language that is easy to learn and safe to use. Similar to Bitcoin, each input box contains a script, which should return the true value in order to 1) allow spending of the box (i.e. removing from the UTXO set) and 2) adding the transaction to the block.
If we are being pedantic, it is therefore incorrect (strictly speaking) to think of ErgoScript as the language of Ergo contracts, because it is the language of propositions (logical predicates, formulas, etc.) which protect boxes from “illegal” spending. Unlike Bitcoin, in Ergo the whole transaction and a part of the current blockchain context is available to every script. Therefore each script may check which outputs are created by the transaction, their ERG and token amounts (we will use this capability in our example DEX contracts), current block number etc.
In ErgoScript you define the conditions of whether changes (i.e. coin spending) are allowed to happen in a given context. This is in contrast to programming the changes imperatively in the code of a contract.
While Ergo's transaction model unlocks a whole range of applications like (DEX, DeFi Apps, LETS, etc), designing contracts as pre-conditions for coin spending (or guarding scripts) directly is not intuitive. In the next sections we will consider a useful graphical notation to design contracts declaratively using FlowCard Diagrams, which is a visual representation of executable components (FlowCards).
FlowCards aim to radically simplify dApp development on the Ergo platform by providing a high-level declarative language, execution runtime, storage format and a graphical notation.
We will start with a high level of diagrams and go down to FlowCard specification.
FlowCard Diagrams
The idea behind FlowCard diagrams is based on the following observations: 1) An Ergo box is immutable and can only be spent in the transaction which uses it as an input. 2) We therefore can draw a flow of boxes through transactions, so that boxes flowing in to the transaction are spent and those flowing out are created and added to the UTXO. 3) A transaction from this perspective is simply a transformer of old boxes to the new ones preserving the balances of ERGs and tokens involved.
The following figure shows the main elements of the Ergo transaction we've already seen previously (now under the name of FlowCard Diagram).
https://preview.redd.it/9kcxl11o9tv41.png?width=1304&format=png&auto=webp&s=378a7f50769292ca94de35ff597dc1a44af56d14
There is a strictly defined meaning (semantics) behind every element of the diagram, so that the diagram is a visual representation (or a view) of the underlying executable component (called FlowCard).
The FlowCard can be used as a reusable component of an Ergo dApp to create and initiate the transaction on the Ergo blockchain. We will discuss this in the coming sections.
Now let's look at the individual pieces of the FlowCard diagram one by one.
  1. Name and Parameters
Each flow card is given a name and a list of typed parameters. This is similar to a template with parameters. In the above figure we can see the Send flow card which has five parameters. The parameters are used in the specification.
  1. Contract Wallet
This is a key element of the flow card. Every box has a guarding script. Often it is the script that checks a signature against a public key. This script is trivial in ErgoScript and is defined like the def pk(pubkey: Address) = { pubkey } template where pubkey is a parameter of the type Address . In the figure, the script template is applied to the parameter pk(sender) and thus a concrete wallet contract is obtained. Therefore pk(sender) and pk(receiver) yield different scripts and represent different wallets on the diagram, even though they use the same template.
Contract Wallet contains a set of all UTXO boxes which have a given script derived from the given script template using flow card parameters. For example, in the figure, the template is pk and parameter pubkey is substituted with the `sender’ flow card parameter.
  1. Contract
Even though a contract is a property of a box, on the diagram we group the boxes by their contracts, therefore it looks like the boxes belong to the contracts, rather than the contracts belong to the boxes. In the example, we have three instantiated contracts pk(sender) , pk(receiver) and minerFee . Note, that pk(sender) is the instantiation of the pk template with the concrete parameter sender and minerFee is the instantiation of the pre-defined contract which protects the miner reward boxes.
  1. Box name
In the diagram we can give each box a name. Besides readability of the diagram, we also use the name as a synonym of a more complex indexed access to the box in the contract. For example, change is the name of the box, which can also be used in the ErgoScript conditions instead of OUTPUTS(2) . We also use box names to associate spending conditions with the boxes.
  1. Boxes in the wallet
In the diagram, we show boxes (darker rectangles) as belonging to the contract wallets (lighter rectangles). Each such box rectangle is connected with a grey transaction rectangle by either orange or green arrows or both. An output box (with an incoming green arrow) may include many lines of text where each line specifies a condition which should be checked as part of the transaction. The first line specifies the condition on the amount of ERG which should be placed in the box. Other lines may take one of the following forms:
  1. amount: TOKEN - the box should contain the given amount of the given TOKEN
  2. R == value - the box should contain the given value of the given register R
  3. boxName ? condition - the box named boxName should check condition in its script.
We discuss these conditions in the sections below.
  1. Amount of ERGs in the box
Each box should store a minimum amount of ERGs. This is checked when the creating transaction is validated. In the diagram the amount of ERGs is always shown as the first line (e.g. B: ERG or B - minErg - txFee ). The value type ascription B: ERG is optional and may be used for readability. When the value is given as a formula, then this formula should be respected by the transaction which creates the box.
It is important to understand that variables like amount and txFee are not named properties of the boxes. They are parameters of the whole diagram and representing some amounts. Or put it another way, they are shared parameters between transactions (e.g. Sell Order and Swap transactions from DEX example below share the tAmt parameter). So the same name is tied to the same value throughout the diagram (this is where the tooling would help a lot). However, when it comes to on-chain validation of those values, only explicit conditions which are marked with ? are transformed to ErgoScript. At the same time, all other conditions are ensured off-chain during transaction building (for example in an application using Appkit API) and transaction validation when it is added to the blockchain.
  1. Amount of T token
A box can store values of many tokens. The tokens on the diagram are named and a value variable may be associated with the token T using value: T expression. The value may be given by formula. If the formula is prefixed with a box name like boxName ? formula , then it is should also be checked in the guarding script of the boxName box. This additional specification is very convenient because 1) it allows to validate the visual design automatically, and 2) the conditions specified in the boxes of a diagram are enough to synthesize the necessary guarding scripts. (more about this below at “From Diagrams To ErgoScript Contracts”)
  1. Tx Inputs
Inputs are connected to the corresponding transaction by orange arrows. An input arrow may have a label of the following forms:
  1. [email protected] - optional name with an index i.e. [email protected] or u/2 . This is a property of the target endpoint of the arrow. The name is used in conditions of related boxes and the index is the position of the corresponding box in the INPUTS collection of the transaction.
  2. !action - is a property of the source of the arrow and gives a name for an alternative spending path of the box (we will see this in DEX example)
Because of alternative spending paths, a box may have many outgoing orange arrows, in which case they should be labeled with different actions.
  1. Transaction
A transaction spends input boxes and creates output boxes. The input boxes are given by the orange arrows and the labels are expected to put inputs at the right indexes in INPUTS collection. The output boxes are given by the green arrows. Each transaction should preserve a strict balance of ERG values (sum of inputs == sum of outputs) and for each token the sum of inputs >= the sum of outputs. The design diagram requires an explicit specification of the ERG and token values for all of the output boxes to avoid implicit errors and ensure better readability.
  5. Tx Outputs
Outputs are connected to the corresponding transaction by green arrows. An output arrow may have a label of the form name@i, where an optional name is accompanied by an index (e.g. buyerOut@0 or just @2). This is a property of the source endpoint of the arrow. The name is used in conditions of the related boxes and the index is the position of the corresponding box in the OUTPUTS collection of the transaction.
Example: Decentralized Exchange (DEX)
Now let's use the above described notation to design a FlowCard for a DEX dApp. It is simple enough yet also illustrates all of the key features of FlowCard diagrams which we've introduced in the previous section.
The dApp scenario is shown in the figure below: there are three participants (buyer, seller and DEX) of the DEX dApp and five different transaction types, which are created by the participants. The buyer wants to swap ergAmt of ERGs for tAmt of TID tokens (or, vice versa, the seller wants to sell TID tokens for ERGs; it doesn't matter who sends an order first). Both the buyer and the seller can cancel their orders at any time. The DEX off-chain matching service can find matching orders and create the Swap transaction to complete the exchange.
The following diagram fully (and formally) specifies all of the five transactions that must be created off-chain by the DEX dApp. It also specifies all of the spending conditions that should be verified on-chain.

https://preview.redd.it/fnt5f4qp9tv41.png?width=1614&format=png&auto=webp&s=34f145f9a6d622454906857e645def2faba057bd
Let's discuss the FlowCard diagram and the logic of each transaction in detail:
Buy Order Transaction
A buyer creates a Buy Order transaction. The transaction spends E amount of ERGs (which we will write E: ERG) from one or more boxes in the pk(buyer) wallet. The transaction creates a bid box with ergAmt: ERG protected by the buyOrder script. The buyOrder script is synthesized from the specification (see below in “From Diagrams To ErgoScript Contracts”) either manually or automatically by a tool. Even though we don't need to define the buyOrder script explicitly at design time, at run time the bid box must contain the buyOrder script as its guarding proposition (which checks the box's spending conditions); otherwise, the conditions specified in the diagram will not be checked.
The change box is created to make the input and output sums of the transaction balanced. The transaction fee box is omitted because it can be added automatically by the tools. In practice, however, the designer can add the fee box explicitly to the diagram. This covers the cases of more complex transactions (like Swap) where there are many ways to pay the transaction fee.
Cancel Buy, Cancel Sell Transactions
At any time, the buyer can cancel the order by sending the Cancel Buy transaction. The transaction should satisfy the guarding buyOrder contract which protects the bid box. As you can see on the diagram, both the Cancel and the Swap transactions can spend the bid box. When a box has spending alternatives (or spending paths), each alternative is identified by a unique name prefixed with ! (!cancel and !swap for the bid box). Each alternative path has its own spending conditions. In our example, when the Cancel Buy transaction spends the bid box, the ?buyer condition should be satisfied, which we read as “the signature for the buyer address should be presented in the transaction”. Therefore, only the buyer can cancel the buy order. This “signature” condition is only required for the !cancel alternative spending path and not for !swap.
Sell Order Transaction
The Sell Order transaction is similar to the Buy Order, except that it also deals with tokens in addition to ERGs. The transaction spends E: ERG and T: TID tokens from the seller's wallet (specified as the pk(seller) contract). The two outputs are ask and change. The change is a standard box to balance the transaction. The ask box keeps tAmt: TID tokens for the exchange and minErg: ERG - the minimum amount of ERGs required in every box.
Swap Transaction
This is a key transaction in the DEX dApp scenario. The transaction has several spending conditions on the input boxes and those conditions are included in the buyOrder and sellOrder scripts (which are verified when the transaction is added to the blockchain). However, on the diagram those conditions are not specified in the bid and ask boxes; they are instead defined in the output boxes of the transaction.
This is a convention for improved usability because most of the conditions relate to the properties of the output boxes. We could specify those properties in the bid box, but then we would have to use more complex expressions.
Let's consider the output created by the arrow labeled with buyerOut@0. This label tells us that the output is at index 0 in the OUTPUTS collection of the transaction and that in the diagram we can refer to this box by the buyerOut name. Thus we can label both the box itself and the arrow to give the box a name.
The conditions shown in the buyerOut box have the form bid ? condition , which means they should be verified on-chain in order to spend the bid box. The conditions have the following meaning:
  • tAmt: TID requires the box to have tAmt amount of TID token
  • R4 == bid.id requires the R4 register in the box to be equal to the id of the bid box.
  • script == buyer requires the buyerOut box to have the script of the wallet where it is located on the diagram, i.e. pk(buyer)
Similar properties are added to the sellerOut box, which is specified to be at index 1 and the name is given to it using the label on the box itself, rather than on the arrow.
The Swap transaction spends two boxes, bid and ask, using the !swap spending path on both; however, unlike !cancel, the conditions for this path are not specified on it directly. This is where the bid ? and ask ? prefixes come into play. They are used so that the conditions listed in the buyerOut and sellerOut boxes are moved to the !swap spending path of the bid and ask boxes respectively.
If you look at the conditions of the output boxes, you will see that they exactly specify the swap of values between seller's and buyer's wallets. The buyer gets the necessary amount of TID token and seller gets the corresponding amount of ERGs. The Swap transaction is created when there are two matching boxes with buyOrder and sellOrder contracts.
From Diagrams To ErgoScript Contracts
What is interesting about FlowCard specifications is that we can use them to automatically generate the necessary ErgoTree scripts. With the appropriate tooling support this can be done automatically; in the absence of such tooling, it can be done manually. Thus, the FlowCard allows us to capture and visually represent all of the design choices and semantic details of an Ergo dApp.
What we are going to do next is to mechanically create the buyOrder contract from the information given in the DEX flow card.
Recall that each script is a proposition (boolean valued expression) which should evaluate to true to allow spending of the box. When we have many conditions to be met at the same time we can combine them in a logical formula using the AND binary operation, and if we have alternatives (not necessarily exclusive) we can put them into the OR operation.
The buyOrder box has the alternative spending paths !cancel and !swap . Thus the ErgoScript code should have OR operation with two arguments - one for each spending path.
/** buyOrder contract */
{
  val cancelCondition = {}
  val swapCondition = {}
  cancelCondition || swapCondition
}
The formula for the cancelCondition expression is given in the !cancel spending path of the buyOrder box. We can directly include it in the script.
/** buyOrder contract */
{
  val cancelCondition = { buyer }
  val swapCondition = {}
  cancelCondition || swapCondition
}
For the !swap spending path of the buyOrder box the conditions are specified in the buyerOut output box of the Swap transaction. If we simply include them in the swapCondition then we get a syntactically incorrect script.
/** buyOrder contract */
{
  val cancelCondition = { buyer }
  val swapCondition = {
    tAmt: TID &&
    R4 == bid.id &&
    script == buyer
  }
  cancelCondition || swapCondition
}
We can however translate the conditions from the diagram syntax to ErgoScript expressions using the following simple rules
  1. buyerOut@0 ==> val buyerOut = OUTPUTS(0)
  2. tAmt: TID ==> tid._2 == tAmt where tid = buyerOut.tokens(TID)
  3. R4 == bid.id ==> R4 == SELF.id where R4 = buyerOut.R4[Coll[Byte]].get
  4. script == buyer ==> buyerOut.propositionBytes == buyer.propBytes
Note, in the diagram TID represents a token id, but ErgoScript doesn't have access to the tokens by the ids so we cannot write tokens.getByKey(TID) . For this reason, when the diagram is translated into ErgoScript, TID becomes a named constant of the index in tokens collection of the box. The concrete value of the constant is assigned when the BuyOrder transaction with the buyOrder box is created. The correspondence and consistency between the actual tokenId, the TID constant and the actual tokens of the buyerOut box is ensured by the off-chain application code, which is completely possible since all of the transactions are created by the application using FlowCard as a guiding specification. This may sound too complicated, but this is part of the translation from diagram specification to actual executable application code, most of which can be automated.
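As an illustration of that off-chain step, here is a small sketch (plain Scala, hypothetical names, not the actual Appkit API) of how an application might resolve the TID index constant from the real token id before compiling the contract:

    def resolveTidIndex(outputTokens: Seq[(String, Long)], tokenId: String): Int = {
      // `outputTokens` stands for the (tokenId -> amount) pairs of the buyerOut box,
      // in the order they will appear in its `tokens` collection
      val idx = outputTokens.indexWhere { case (id, _) => id == tokenId }
      require(idx >= 0, s"token $tokenId is not present in the output box")
      idx // this value is substituted for the TID constant when the buyOrder script is compiled
    }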
After the transformation we can obtain a correct script which checks all the required preconditions for spending the buyOrder box.
/** buyOrder contract */
def DEX(buyer: Address, seller: Address, TID: Int, ergAmt: Long, tAmt: Long) {
  val cancelCondition: SigmaProp = { buyer }      // verify buyer's sig (ProveDlog)
  val swapCondition = OUTPUTS.size > 0 && {       // securing OUTPUTS access
    val buyerOut = OUTPUTS(0)                     // from buyerOut@0
    buyerOut.tokens.size > TID && {               // securing tokens access
      val tid = buyerOut.tokens(TID)
      val regR4 = buyerOut.R4[Coll[Byte]]
      regR4.isDefined && {                        // securing R4 access
        val R4 = regR4.get
        tid._2 == tAmt &&                                // from tAmt: TID
        R4 == SELF.id &&                                 // from R4 == bid.id
        buyerOut.propositionBytes == buyer.propBytes     // from script == buyer
      }
    }
  }
  cancelCondition || swapCondition
}
A similar script for the sellOrder box can be obtained using the same translation rules. With the help of the tooling the code of contracts can be mechanically generated from the diagram specification.
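For illustration, applying the same rules to the sellerOut conditions (output index 1, ergAmt: ERG, R4 == ask.id, script == seller) might give a script roughly like the sketch below. This is a manual, hedged reconstruction rather than the canonical contract; in particular, translating ergAmt: ERG into sellerOut.value == ergAmt and the exact parameter list are assumptions mirroring the buyOrder example above.

/** sellOrder contract (sketch, assumptions as noted above) */
def DEX(seller: Address, ergAmt: Long) {
  val cancelCondition: SigmaProp = { seller }            // verify seller's sig (ProveDlog)
  val swapCondition = OUTPUTS.size > 1 && {              // securing OUTPUTS access
    val sellerOut = OUTPUTS(1)                           // from sellerOut@1
    val regR4 = sellerOut.R4[Coll[Byte]]
    regR4.isDefined && {                                 // securing R4 access
      val R4 = regR4.get
      sellerOut.value == ergAmt &&                       // from ergAmt: ERG (assumed translation)
      R4 == SELF.id &&                                   // from R4 == ask.id
      sellerOut.propositionBytes == seller.propBytes     // from script == seller
    }
  }
  cancelCondition || swapCondition
}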
Conclusions
Declarative programming models have already won the battle against imperative programming in many application domains like Big Data, Stream Processing, Deep Learning, Databases, etc. Ergo is pioneering the declarative model of dApp development as a better and safer alternative to the now popular imperative model of smart contracts.
The concept of FlowCard shifts the focus from writing ErgoScript contracts to the overall flow of values (hence the name), in such a way, that ErgoScript can always be generated from them. You will never need to look at the ErgoScript code once the tooling is in place.
Here are the possible next steps for future work:
  1. Storage format for FlowCard Spec and the corresponding EIP standardized file format (Json/XML/Protobuf). This will allow various tools (Diagram Editor, Runtime, dApps etc) to create and use *.flowcard files.
  2. FlowCard Viewer, which can generate the diagrams from *.flowcard files.
  3. FlowCard Runtime, which can run *.flowcard files, create and send transactions to Ergo network.
  4. FlowCard Designer Tool, which can simplify the development of complex diagrams. This will make designing and validating Ergo contracts a pleasant experience, more like drawing than coding. In addition, the correctness of the whole dApp scenario can be verified and controlled by the tooling.
submitted by Guilty_Pea to CryptoCurrencies [link] [comments]

Quant Network: Token valuation dynamics and fundamentals
This post intends to illustrate the dynamics and fundamentals related to the mechanics and use of the Quant Network Utility Token (QNT), in order to provide the community with greater clarity around what holding the token actually means.
This is a follow-up on two articles David W previously wrote about Quant Network’s prospects and potential, which you can find here:
For holders not intending to use Overledger for business reasons, the primary goal of holding the QNT token is to benefit from price appreciation. Some are happy to believe that speculation will take the QNT price to much higher levels if and when large-scale adoption/implementation news comes out, whilst others may actually prefer to assess the token’s utility and analyse how it would react to various scenarios to justify a price increase based on fundamentals. The latter is precisely what I aim to look into in this article.
On that note, I have noticed that many wish to see institutional investors getting involved in the crypto space for their purchasing power, but the one thing they would bring that is most needed, in my opinion, is fundamental analysis and valuation expectations based on facts. Indeed, equity investors can probably access 20 or 30 reports, each 15 pages long and updated quarterly, about any blue chip stock they are invested in; but how many such (professional) analyst reports can you consult for your favourite crypto coins? Let me have a guess: none. This is unfortunate, and it is a further reason to look into the situation in more detail.
To be clear, this article is not about providing figures on the expected valuation of the token, but rather about providing the community with a deeper analysis to better understand its meaning and valuation context. This includes going through the (vast) differences between a Utility Token and a Company Share since I understand it is still blurry in some people’s mind. I will incorporate my thoughts and perspective on these matters, which should not be regarded as a single source of truth but rather as an attempt to “dig deeper”.
In order to share these thoughts with you in the most pertinent manner, I have actually entirely modelled the Quant Treasury function and analysed how the QNT token would react to various scenarios based on a number of different factors. That does not mean there is any universal truth to be told, but it did help in clarifying how things work (with my understanding of the current ruleset at least, which may also evolve over time). This is an important safety net: if the intensity of speculation in crypto markets were to go lower from here, what would happen to the token price? How would Quant Treasury help support it? If the market can feel comfortable with such a situation and the underlying demand for the token, then it can feel comfortable to take it higher based on future growth expectations — and that’s how it should be.
Finally, to help shed light on different areas, I must confess that I will have to go through some technicalities on how this all works and what a Utility Token actually is. That is the price to pay to gain that further, necessary knowledge and be in a position to assess the situation more thoroughly — but I will make it as readable as I possibly can, so… if you are ready, let’s start!

A Utility Token vs. a Company Share: what is the difference?

It is probably fair to say that many people involved in the crypto space are unfamiliar with certain key financial terms or concepts, simply because finance is not necessarily everyone’s background (and that is absolutely fine!). In addition, Digital Assets bring some very novel concepts, which means that everyone has to adapt in any case.
Therefore, I suggest we start with a comparison of the characteristics underpinning the QNT Utility Token and a Quant Network Company Share (as you may know, the Company Shares are currently privately held by the Quant Network founders). I believe it is important to look at this comparison for two reasons:
  1. Most people are familiar with regular Company Shares because they have been traded for decades, and it is often asked how Utility Tokens compare.
  2. Quant Network have announced a plan to raise capital to grow their business further (in the September 2019 Forbes article which you can find here). Therefore, regardless of whether the Share Offering is made public or private, I presume the community will want to better understand how things compare and the different dynamics behind each instrument.
So where does the QNT Utility Token sit in Quant Network company and how does it compare to a Quant Network Company Share? This is how it looks:
https://preview.redd.it/zgidz8ed74y31.png?width=1698&format=png&auto=webp&s=54acd2def0713b67ac7c41dae6c9ab225e5639fa
What is on the right hand side of a balance sheet is the money a company has, and what is on the left hand side is how it uses it. Broadly speaking, the money the company has may come from the owners (Equity) or from the creditors (Debt). If I were to apply these concepts to an individual (you!), “Equity” is your net worth, “Debt” is your mortgage and other debt, and “Assets” is your house, car, savings, investments, crypto, etc.
As you can see, a Company Share and a Utility Token are found in different parts of the balance sheet — and that, in itself, is a major difference! They indeed serve two very different purposes:
  • Company Shares: they represent a share of a company’s ownership, meaning that you actually own [X]% of the company ([X]% = Number of shares you possess / Total number of shares) and hence [X]% of the company’s assets on the left hand side of the balance sheet.
  • Utility Tokens: they are keys to access a given platform (in our case, Quant Network’s Operating System: Overledger) and they can serve multiple purposes as defined by their Utility Document (in QNT’s case, the latest V0.3 version can be found here).
As a consequence, as a Company Shareholder, you are entitled to receive part or all of the profits generated by the company (as the case may arise) and you can also take part in the management decisions (indeed, with 0.00000001% of Apple shares, you have the corresponding right to vote to kick the CEO out if you want to!).
On the other hand, as a Utility Token holder, you have no such rights related to the company’s profits or management, BUT any usage of the platform has to go through the token you hold — and that has novel, interesting facets.

A Utility Token vs. a Company Share: what happens in practice?

Before we dig further, let’s now remind ourselves of the economic utilities of the QNT token (i.e. in addition to signing and encrypting transactions):
  1. Licences: a licence is mandatory for anyone who wishes to develop on the Overledger platform. Enterprises and Developers pay Quant Network in fiat money and Quant Treasury subsequently sets aside QNT tokens for the same amount (a diagram on how market purchases are performed can be found on the Overledger Treasury page here). The tokens are locked for 12 months, and the current understanding is that the amount of tokens locked is readjusted at each renewal date to the prevailing market price of QNT at the time (this information is not part of the Utility Token document as of now, but it was given in a previous Telegram AMA so I will assume it is correct pending further developments).
  2. Usage: this relates to the amount of Overledger read and write activity performed by clients on an ongoing basis, and also to the transfer of Digital Assets from one chain to another, and it follows a similar principle: fiat money is received by Quant Network and subsequently converted into QNT tokens (these tokens are not locked, however).
  3. Gateways: information about Gateways has been released through the Overledger Network initiative (see dedicated website here), and we now know that the annual cost for running a Gateway will be 500 QNT whilst Gateway holders will receive a percentage of transaction fees going through their setup.
  4. Minimum holding amounts: the team has stated that there will be a minimum QNT holding amount put in place for every participant of the Overledger ecosystem, although the details have not been released yet.
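As a toy illustration of the licence mechanics in point 1 above (a fiat fee converted into QNT at the prevailing market price, locked for 12 months and readjusted at each renewal), the following sketch uses purely hypothetical figures:

    // Toy illustration only: all figures are hypothetical, not actual Quant Network pricing.
    def qntToLock(licenceFeeFiat: Double, qntPriceFiat: Double): Double =
      licenceFeeFiat / qntPriceFiat

    val lockedYear1 = qntToLock(licenceFeeFiat = 10000.0, qntPriceFiat = 5.0)  // 2000.0 QNT locked
    val lockedYear2 = qntToLock(licenceFeeFiat = 10000.0, qntPriceFiat = 10.0) // 1000.0 QNT after readjustment at renewal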
That being said, it now becomes interesting to illustrate with indicative figures what actually happens as Licences, Usage and Gateways are paid for and Quant Network company operates. The following diagram may help in this respect:
Arbitrary figures from myself (i.e. no currency, no unit), based on an indicative 20% Net Income Ratio and a 40% Dividend yield
We now have two different perspectives:
  • On the right hand side, you see the simplified Profit & Loss account (“P&L”) which incorporates Total Revenues, from which costs and taxes are deducted, to give a Net Income for the company. A share of this Net Income may be distributed to Shareholders in the form of a Dividend, whilst the remainder is accounted as retained profits and goes back to the balance sheet as Equity to fund further growth for instance. Importantly, the Dividend (if any) is usually a portion of the Net Income so, using an indicative 40% Dividend yield policy, shareholders receive here for a given year 80 out of total company revenues of 1,000.
  • On the left hand side, you see the QNT requirements arising from the Overledger-related business activity which equal 700 here. Note that this is only a portion of the Total Revenues (1,000) you can see on the right hand side, as the team generates income from other sources as well (e.g. consultancy fees) — but I assume Overledger will represent the bulk of it since it is Quant Network’s flagship product and focus. In this case, the equivalent fiat amount of QNT tokens represents 700 (i.e. 100% of Overledger-related revenues) out of the company’s Total Revenues of 1,000. It is to be noted that excess reserves of QNT may be sold and generate additional revenues for the company, which would be outside of the Overledger Revenues mentioned above (i.e. they would fall in the “Other Revenues” category).
A way to summarise the situation from a very high level is: as a Company Shareholder you take a view on the company’s total profits whereas as a Utility Token holder you take a view on the company’s revenues (albeit Overledger-related).
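A tiny sketch reproducing the indicative figures of the illustration above makes the two views explicit (all numbers are the arbitrary examples used in the diagram, not forecasts):

    // Reproducing the indicative figures from the illustration above (arbitrary units, not a forecast).
    val totalRevenues      = 1000.0
    val netIncomeRatio     = 0.20   // indicative, within the typical 10-20% software-industry range
    val dividendPolicy     = 0.40   // the indicative 40% dividend policy used in the illustration
    val overledgerRevenues = 700.0  // the Overledger (QNT-related) share of total revenues

    val netIncome = totalRevenues * netIncomeRatio  // 200: what a shareholder takes a view on
    val dividend  = netIncome * dividendPolicy      // 80: cash actually distributed to shareholders
    val qntDemand = overledgerRevenues              // 700: fiat value converted into QNT by Quant Treasury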
It is however too early to reach any conclusion, so we now need to dig one level deeper again.

More considerations around Company Shares

As we discussed, with a Company Share, you possess a fraction of the company’s ownership and hence you have access to profits (and losses!). So how do typical Net Income results look in the technology industry? What sort of Dividend is usually paid? What sort of market valuations are subsequently achieved?
Let’s find out:
https://preview.redd.it/eua9sqlt74y31.png?width=2904&format=png&auto=webp&s=3500669942abf62a0ea1c983ab3cea40552c40d1
As you can see, the typical Net Income Ratio varies between around 10% and 20% in the technology/software industry (using the above illustrated peer group). The ratio illustrates the proportion of Net Income extracted from Revenues.
In addition, money is returned to Company Shareholders in the form of a Dividend (i.e. a portion of the Net Income) and in the form of Share repurchases (whereby the company uses its excess cash position to buy back shares from Shareholders and hence diminish the number of Shares available). A company may however prefer to not redistribute any of the profits, and retain them instead to fund further business growth — Alphabet (Google) is a good example in this respect.
Interestingly, as you can see on the far right of the table, the market capitalisations of these companies reflect high multiples of their Net Income as investors expect the companies to prosper in the future and generate larger profits. If you wished to explore these ideas further, I recommend also looking into the Return on Equity ratio which takes into account the amount of resources (i.e. Capital/Equity) put to work to generate the companies’ profits.
It is also to be noted that the number of Company Shares outstanding may vary over time. Indeed, aside from Share repurchases that diminish the number of Shares available to the market, additional Shares may be issued to raise additional funds from the market hence diluting the ownership of existing Shareholders.
Finally, (regular) Company Shares are structured in the same way across companies and industries, which brings a key benefit of having them easily comparable/benchmarkable against one another for investors. That is not the case for Utility Tokens, but they come with the benefit of having a lot more flexible use cases.

More considerations around the QNT token

As discussed, the Utility Token model is quite novel and each token has unique functions designed for the system it is associated with. That does not make value assessment easy, since all Utility Tokens are different, and this is a further reason to have a detailed look into the QNT case.
https://preview.redd.it/b0xe0ogw74y31.png?width=1512&format=png&auto=webp&s=cece522cd7919125e199b012af41850df6d9e9fd
As a start, all assets that are used in a speculative way embed two components into their price:
A) one that represents what the asset is worth today, and
B) one that represents what it may be worth in the future.
Depending on whether the future looks bright or not, a price premium or a price discount may be attached to the asset price.
This is similar to what we just saw with Company Shares valuation multiples, and it is valid across markets. For instance, Microsoft generates around USD 21bn in annual Net Income these days, but the cost of acquiring it entirely is USD 1,094bn (!), i.e. a multiple of roughly 52 times annual Net Income. This speculative effect is particularly visible in the crypto sector since valuation levels are usually high whilst usage/adoption levels are usually low for now.
So what about QNT? As mentioned, the QNT Utility model has novel, interesting facets. Since QNT is required to access and use the Overledger system, it is important to appreciate that Quant Network company has three means of action regarding the QNT token:
  1. MANAGING their QNT reserves on an ongoing basis (i.e. buying or selling tokens is not always automatic, they can allocate tokens from their own reserves depending on their liquidity position at any given time),
  2. BUYING/RECEIVING QNT from the market/clients on the back of business activity, and
  3. SELLING QNT when they deem their reserves sufficient and/or wish to sell tokens to cover for operational costs.
Broadly speaking, the above actions will vary depending on business performance, the QNT token price and the Quant Network company’s liquidity position.
We also have to appreciate what the QNT distribution will always look like; it can be broken down as follows:
https://preview.redd.it/f20h7hvz74y31.png?width=1106&format=png&auto=webp&s=f2f5b63272f5ed6e3f977ce08d7bae043851edd1
A) QNT tokens held by the QNT Community
B) QNT tokens held by Quant Network that are locked (i.e. those related to Licences)
C) QNT tokens held by Quant Network that are unlocked (i.e. those related to other usage, such as consumption fees and Gateways)
D) the minimum QNT amount held by all users of the platform (more information on this front soon)
So now that the situation is set, how would we assess Quant Network’s business activity effect on the QNT token?
STEP 1: We would need to define the range of minimum/maximum amounts of QNT which Quant Network would want to keep as liquid reserves (i.e. unlocked) on an ongoing basis. This affects key variables such as the proportion of market purchases vs. the use of their own reserves, and the amount of QNT sold back to the market. Also, interestingly, if Quant Network never wanted to keep less than, for instance, 1 million QNT tokens as liquid reserves, these 1 million tokens would have a similar effect on the market as the locked tokens because they would never be sold.
STEP 2: We would need to define the amount of revenues that are related to QNT. As we know, Overledger Licences, Usage and Gateways generate revenues converted into QNT (or in QNT directly). So the correlation is strong between revenues and QNT needs. Interestingly, the cost of a licence is probably relatively low today in order to facilitate adoption and testing, but it will surely increase over time. The same goes for usage fees, especially as we move from testing/pilot phases to mass implementation. The number of clients will also increase. The Community version of Overledger is also set to officially launch next year. More information on revenue potential can be found later in this article.
STEP 3: We would need to define an evolution of the QNT token price over time and see how things develop with regards to Quant Network’s net purchase/sale of tokens every month (i.e. tokens required - tokens sold = net purchased/sold tokens).
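The three steps can be folded into a very simplified monthly simulation sketch. All of the parameters and the rebalancing rule below are assumptions for illustration only, not the actual Quant Treasury ruleset:

    // Very simplified sketch of the three modelling steps above (illustrative only;
    // the real Quant Treasury ruleset is more involved and not public in full).
    final case class TreasuryState(reserves: Double, netMarketPurchases: Double)

    def monthlyStep(
        state: TreasuryState,
        qntRevenueFiat: Double,  // STEP 2: Overledger revenues to be covered in QNT this month
        qntPrice: Double,        // STEP 3: prevailing QNT price this month
        minReserves: Double,     // STEP 1: lower bound on liquid reserves
        maxReserves: Double      // STEP 1: upper bound on liquid reserves
    ): TreasuryState = {
      val tokensRequired = qntRevenueFiat / qntPrice       // tokens to set aside this month
      val afterUse       = state.reserves - tokensRequired // draw on existing reserves first
      // Rebalance back into the [minReserves, maxReserves] band:
      val buy  = math.max(0.0, minReserves - afterUse)     // market purchases (price support)
      val sell = math.max(0.0, afterUse - maxReserves)     // sales when reserves are in excess
      TreasuryState(afterUse + buy - sell, buy - sell)
    }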
Once assumptions are made, what do we observe?
In an undistorted environment, there is a positive correlation between Quant Network’s QNT-related revenues and the market capitalisation they occupy (i.e. the Quant Network share of the token distribution multiplied by the QNT price). However, this correlation can get heavily twisted as the speculative market prices a premium to the QNT price (i.e. anticipating higher revenues). As we will see, a persistent discount is not really possible as Quant Treasury would mechanically have to step in with large market purchases, which would provide strong support to the QNT price.
In addition, volatility is to be added to the equation since QNT volatility is likely to be (much) higher than that of revenues which can create important year-on-year disparities. For instance, Quant Treasury may lock a lot of tokens at a low price one year, and be well in excess of required tokens the next year if the QNT token price has significantly increased (and vice versa). This is not an issue per se, but this would impact the amount of tokens bought/sold on an ongoing basis by Quant Treasury as reserves inflate/deflate.
If we put aside the distortions created by speculation on the QNT price, and the subsequent impact on the excess/deficiency of Quant Network token reserves (whose level is also pro-actively managed by the company, as previously discussed), the economic system works as follows:
High QNT price vs. Revenue levels: The value of reserves is inflated, fewer tokens need to be bought for the level of revenues generated, Quant Treasury provides low support to the QNT price, its share of the token distribution diminishes.
Low QNT price vs. Revenue levels: Reserves run out, a higher number of tokens needs to be bought for the level of revenues generated, Quant Treasury provides higher support to the QNT price, its share of the token distribution increases.
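Plugging the two regimes into the monthlyStep sketch above (with the same purely hypothetical parameters) shows the mechanics:

    val reserves = TreasuryState(reserves = 50000.0, netMarketPurchases = 0.0)

    // High QNT price vs. revenue levels: few tokens needed, reserves stay ample, no buying required.
    monthlyStep(reserves, qntRevenueFiat = 100000.0, qntPrice = 20.0,
                minReserves = 40000.0, maxReserves = 80000.0)  // only 5,000 QNT set aside

    // Low QNT price vs. revenue levels: many tokens needed, reserves deplete, Treasury buys from the market.
    monthlyStep(reserves, qntRevenueFiat = 100000.0, qntPrice = 2.0,
                minReserves = 40000.0, maxReserves = 80000.0)  // 50,000 QNT needed, triggering market purchases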
Summary table:
https://preview.redd.it/q7wgzpv384y31.png?width=2312&format=png&auto=webp&s=d8c0480cb34caf2e59615ec21ea220d81d79b153
The key here is that, whatever speculation on future revenue levels does to the token in the first place, if the QNT price were falling and reached a level that does not reflect the prevailing revenue levels of Overledger at a given time, then Quant Treasury would require a larger amount of tokens to cover the business needs, which would mean the depletion of their reserves, larger purchases from the market and strong support for the QNT price from there. This is the safety net we want to see, coming from usage! In other words, if the QNT price went very high very quickly, Quant Treasury may not be seen buying many tokens since their reserves would be inflated, BUT that fallback mechanism, purely based on usage, would be there to safeguard QNT holders from the QNT price falling below a certain level.
I would assume this makes sense for most, and you might now wonder why I have highlighted the bottom part about the token distribution in red. That is because there is an ongoing battle between the QNT community and Quant Treasury — and this is very interesting.
The ecosystem will show how big a share the community is willing to let Quant Network represent. The community actually sets the price for the purchases, and the token distribution fluctuates depending on the metrics we discussed. An equilibrium will form based on the confidence the market has in Quant Network’s future revenue generation. Moreover, the QNT community could perceive the token as a Store of Value and be happy to hold 80-90% of all tokens, for instance, or it could perceive QNT as more dynamic or risky and be happy to only represent 60-70% of the distribution. Needless to say, considering my previous articles on the potential of Overledger, I think we will tend more towards the former scenario. Indeed, if you wished to store wealth with a technology-agnostic, future-proof, globally adopted, revenue-providing (through Gateways) Network of Networks through which most of the digitalised value is flowing — wouldn’t you see QNT as an appealing value proposition?
In a nutshell, it all comes down to the Overledger revenue levels and the QNT holders’ resistance to buy pressure from Quant Treasury. Therefore, if you are confident in the Overledger revenue generation and wish to see the QNT token price go up, more than ever, do not sell your tokens!
What about the locked tokens? There will always be a certain amount of tokens that are entirely taken out of circulation, but Quant Network company will always keep additional unlocked tokens on top of that (those they receive and manage as buffer) and that means that locked tokens will always be a subset of what Quant Network possesses. I do not know whether fees will primarily be concentrated on the licencing side vs. the usage side, but if that were to be the case then it would be even better as a higher amount of tokens would be taken out of circulation for good.
Finally, as long as the company operates, the revenues will always represent a certain amount of money whereas this is not the case for profits which may not appear before years (e.g. during the first years, during an economic/business downturn, etc.). As an illustration, a company like Uber has seen vast increases in revenues since it launched but never made any profit! Therefore, the demand for the QNT token benefits from good resilience from that perspective.
Quant Network vs. QNT community — What proportion of the QNT distribution will each represent?

How much revenue can Overledger generate?

I suggest we start with the basis of what the Quant Network business is about: connecting networks together, building new-generation hyper-decentralised apps on top (called “mApps”), and creating network effects.
Network effects are best defined by Metcalfe’s law which states: “the effect of a telecommunications network is proportional to the square of the number of connected users of the system” (Source: Wikipedia). This is illustrated by the picture below, which demonstrates the increasing number of possible connections for each new user added to the network. This was also recently discussed in a YouTube podcast by QNT community members “Luke” and “Ghost of St. Miklos” which you can watch here.
Source: applicoinc.com
This means that, as Overledger continues to connect more and more DLTs of all types between themselves and also with legacy systems, the number of users (humans or machines) connected to this Network of Networks will grow substantially — and the number of possible connections between participants will in turn grow exponentially. This will increase the value of the network, and hence the level of fees associated with getting access to it. This forms the basis of expected, future revenue generation and especially in a context where Overledger remains unique as of today and embraced by many of the largest institutions in the world (see the detailed summary on the matter from community member “Seq” here).
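In code form, the growth of possible pairwise connections that Metcalfe's law refers to looks like this (a trivial sketch):

    // Possible pairwise connections between n users: n * (n - 1) / 2
    def possibleConnections(users: Long): Long = users * (users - 1) / 2

    possibleConnections(10L)    // 45
    possibleConnections(100L)   // 4950
    possibleConnections(1000L)  // 499500: network value grows roughly with the square of the user count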
On top of this network, multi-chain hyper-decentralised applications (‘mApps’) can be built — which are an upgrade to existing dApps that use only one chain at a time and hence only benefit from the user base and functionalities of the given chain. Overledger mApps can leverage on the users and abilities of all connected chains at the same time, horizontal scaling, the ability to write/move code in any language across chains as required, write smart contracts on blockchains that do not support them (e.g. Bitcoin), and provide easier connection to other systems. dApps have barely had any success so far, as discussed in my first article, but mApps could provide the market with the necessary tools to build applications that can complement or rival what can be found on the Apple or Google Play store.
Also, the flexibility of Overledger enables Quant Network to target a large number of industries and to connect them all together. A sample of use cases can be found in the following illustration:
https://preview.redd.it/th8edz5b84y31.png?width=2664&format=png&auto=webp&s=105dd4546f8f9ab2c66d1a5a8e9f669cef0e0614
It is to be noted that one of the use cases, namely the tokenisation of the entire world’s assets, represents a market worth hundreds of trillions of USD and that is not even including the huge amount of illiquid assets not currently traded on traditional Capital Markets which could benefit from the tokenisation process. More information on the topic can be found in my previous article fully focused on the potential of Overledger to capture value from the structural shift in the world’s assets and machine-related data/value transfers.
Finally, we can look at what well established companies with a similar technology profile have been able to achieve. Overledger is an Operating System for DLTs and legacy systems on top of which applications can be built. The comparison to Microsoft Windows and the suite of Microsoft Software running on top (e.g. Microsoft Office) is an obvious one from that perspective to gauge the longer term potential.
As you can see below, Microsoft’s flagship softwares such as Windows and Office each generate tens of billions of USD of revenues every year:
Source: Geekwire
We can also look at Oracle, the second largest Enterprise software company in the world:
Source: Statista
We can finally look at what the Apple store and the Google Play store generate, since the Quant Network “mApp store” for the community side of Overledger will look to replicate a similar business model with hyper-decentralised applications:
Source: Worldwide total revenue by app store, 2018 ($bn)
The above means total revenues of around USD 70bn in 2018 for the Apple store and Google Play store combined, and the market is getting bigger year-on-year! Also, again, these (indicative!) reference points for Overledger come in the context of the Community version of the system only, since the Enterprise version represents a separate set of verticals more comparable to the likes of Microsoft and Oracle which we just looked at.

Conclusion

I hope this article helped shed further light on the QNT token and how the various market and business parameters will influence its behavior over time, as the Quant Network business is expected to grow exponentially in the coming years.
In the recent Forbes interview, Quant Network’s CEO (Gilbert Verdian) stated: “Our potential to grow is uncapped as we change and transform industries by creating a secure layer between them at speed. Our vision is to build a mass version of what I call an internet of trust, where value can be securely transferred between global partners not relying on defunct internet security but rather that of blockchain.”
This is highly encouraging with regards to business prospects and also in comparison to what other companies have been able to achieve since the Web as we know it today emerged (e.g. Microsoft, Google, Apple, etc.). The Internet is now entering a new phase, with DLT technology at its core, and Overledger is set to be at the forefront of this new paradigm which will surely offer a vast array of new opportunities across sectors.
I believe it is an exciting time for all of us to be part of the journey, as long as any financial commitment is made with a good sense of responsibility and understanding of what success comes down to. “Crypto” is still immature in many respects, and the emergence of a dedicated regulatory framework combined with the expected gradual, selective entrance of institutional money managers will hopefully help shed further light and protect retail token holders from the misunderstandings, misinformation and misconduct which too many have suffered from in the last years.
Thanks for your time and interest.
Appendix:
First article: “The reasons why Quant Network (QNT) will rise to the Top of the crypto sphere in the coming months”
Second article: “The potential of Quant Network’s technology to capture value from the structural shift in the World’s assets and machine-related data/value transfers”
October 2019 City AM interview of Gilbert Verdian (CEO): Click here
October 2019 Blockchain Brad interview of Gilbert Verdian (CEO): Click here
July 2019 Blockchain Brad interview of Gilbert Verdian (CEO): Click here
February 2019 Blockchain Brad interview of Gilbert Verdian (CEO): Click here
----
About the original author of the article:
My name is David and I spent years in the Investment Banking industry in London. I hold QNT tokens and the above views are based on my own thoughts and research only. I am not affiliated with the Quant Network team in any way. This is not investment advice, please do your own research and understand what you are buying before doing so. It is also my belief that more than 90% of all other crypto projects will fail because what matters is what is getting adopted; please do not put more money at risk than you can afford to lose.
submitted by mr_sonic to CryptoCurrency [link] [comments]
