Solana was founded at the end of 2017 by former Qualcomm, Intel, and Dropbox engineers. It is a single-chain, delegated proof-of-stake protocol focused on delivering scalability without sacrificing decentralization or security. The core of Solana's scaling solution is a decentralized clock called Proof of History (PoH), which aims to solve the problem that a distributed network has no single, reliable source of time. Using a verifiable delay function, PoH allows each node to generate timestamps locally with SHA-256 computations. This removes the need to broadcast timestamps across the whole network, improving overall network efficiency.
SOL is the native token of the Solana blockchain. Solana uses a proof-of-stake consensus algorithm to incentivize token holders to validate transactions. As part of Solana's security design, transaction fees are paid in SOL and burned, reducing the total supply. This deflationary mechanism encourages more token holders to stake, which in turn improves network security.
To create a distributed ledger with a trustless, encoded notion of time, Solana designed Proof of History: evidence that verifies the ordering of, and the passage of time between, specific events.
Proof of History works alongside Proof of Work (the consensus algorithm used by Bitcoin, among others) or Proof of Stake (the algorithm used by Ethereum's Casper). It reduces messaging overhead, resulting in sub-second finality.
In addition, Solana aims to process up to 710,000 transactions per second on a 1-gigabit network without data sharding. Want to know how they plan to achieve this feat?
In the race to build high-throughput (TPS), highly secure blockchains, teams are devising new ways to create highly scalable solutions that allow a large number of transactions per second on existing blockchains.
What is the "time problem"? In the era of computing and information, a basic need remains unsolved: reliable coordination of events. For example, when one computer sends a message to another, the two need to agree on the timing of transactions. If each relies on its own internal clock, they may or may not coordinate correctly.
Using timestamps to coordinate events is not only a system requirement; it also carries an enormous cost in money, personnel, and effort.
Developers have begun using a technique called sharding to improve overall chain throughput (TPS). It has proved successful, but it is not a complete solution by itself, because it can introduce vulnerabilities.
The biggest vulnerability is the splitting of transactions across shards. Handled poorly, it leaves the chain open to fraudulent transactions, double spending, or fragments of the same transaction existing without shared knowledge between shards.
To put this in perspective, Google Spanner (Google's scalable, multi-version, globally distributed, synchronously replicated database, which supports read-write transactions, read-only transactions, and snapshot reads) spends considerable resources synchronizing atomic clocks across its data centers.
Those clocks require precise maintenance, with a large number of engineers dedicated to them. Coordinating time may look easy, but it is not, and this is the problem Proof of History, Solana's proposed solution, sets out to solve.
By achieving trusted time coordination, Solana can improve blockchain throughput in both speed and reliability while also reducing average costs.
The team that successfully solves this problem may end up with a widely adopted blockchain.
A deeper look at Solana's solution raises several questions: how is Proof of History implemented on a blockchain, how does Solana work, and what tools do they use?
First, we need to understand how the network is designed and what it contains.
Proof of History is a high-frequency verifiable delay function (VDF). This means it requires a determined number of sequential steps to evaluate, and those steps ultimately produce a unique output that can be verified.
In the solution section, we discussed how Solana can increase the number of txns/s while reducing the resources required to run them. The explanation lies in how hash functions work.
A hash function compresses data: an arbitrarily large input is reduced to a small, fixed-size digest. This reduces transaction weight, improving efficiency and enabling faster sequencing.
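As a quick illustration of this fixed-size property, the sketch below (plain Python using the standard `hashlib` module; the variable names are ours) hashes a tiny input and a very large one, and both digests come out at the same 32 bytes:

```python
import hashlib

# SHA-256 maps input of any size to a fixed 32-byte digest.
small = hashlib.sha256(b"tx").digest()
large = hashlib.sha256(b"tx" * 100_000).digest()

print(len(small), len(large))  # both lengths are 32
```

However large the input grows, the output stays the same size, which is what lets a hash stand in for a much heavier piece of data.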
As mentioned above, the Proof of History sequence is designed to work with a cryptographic hash function.
What makes cryptographic hash functions particularly relevant here is that the final result (the output) cannot be predicted from the original input without executing the whole function. If you hold the input and want the output, you must actually run the function to get the result.
With this in mind, suppose the hash function runs from a random starting point (the initial input) and, once the computation completes, yields the first output (hash). Here is what makes it interesting: that output is then fed in as the input of the next hash.
If we repeat this process, say, 300 times, you can begin to see that we have created a single-threaded process in which the final output (hash 300) is completely unpredictable to everyone except the party executing the whole thread.
This loop, which feeds each output (along with any newly generated data) into the input of the next function, expresses the passage of time and the creation of a history; in Solana's words, each iteration is a tick. No output can be produced without the one that precedes it. Like installments in a film series, each entry represents a period of time, fixed in its place in a continuous thread.
Therefore, rather than relying on untrusted clocks, Solana proposes using these ordered, unpredictable outputs to identify a specific moment, that is, a specific point in the threaded process. We can call this history.
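The loop described above can be sketched in a few lines of Python. This is an illustrative model only, not Solana's actual implementation: the function name, the tick count, and the way event data is mixed in are all assumptions made for the example.

```python
import hashlib

def poh_sequence(seed, ticks, events=None):
    """Toy Proof-of-History sketch: hash the previous output repeatedly,
    optionally mixing event data into the input at chosen tick indices."""
    events = events or {}
    state = hashlib.sha256(seed).digest()
    history = []
    for i in range(ticks):
        data = events.get(i)
        if data is not None:
            # Appending an event to the input anchors it to this point
            # in the sequence: it cannot have occurred after this tick.
            state = hashlib.sha256(state + data).digest()
        else:
            state = hashlib.sha256(state).digest()
        history.append((i, state, data))
    return history

# 300 ticks, with one event recorded at tick 42.
history = poh_sequence(b"random starting point", 300, {42: b"event A"})
final_hash = history[-1][1]
```

Anyone can check the sequence by replaying it from the same seed and events, but producing it in the first place requires running every step in order, which is exactly what makes the final hash evidence that time has passed.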
Solana's Proof of Stake
Solana uses Proof of Stake (PoS) to reach consensus, and it shares many features with other PoS-based tokens. As a review, here are some of the main features of PoS tokens:
Staking PoS tokens relies on validators.
A stake can be proven by:
- Locking the tokens in your wallet
- Locking the tokens on a masternode, which contributes to the stability of the chain
Payout order is determined by the "age" of the PoS tokens or by the masternode reward scheme.
Each PoS wallet or masternode in the reward scheme receives coins or newly minted tokens.
Wallets or masternodes that remain offline for too long stop being "paid" and may be removed from the network.
The function of PoS is to prevent bad actors from undermining the security of the network by introducing invalid transactions.
The penalty for a "bad actor" can be the loss of staked PoS tokens and rewards.
Trust holds as long as the proven returns from honest participation are greater than the potential gains from fraud.
Solana has a very similar structure, but it implements its PoS in a slightly different way.
Solana selects a validator from among the connected nodes (i.e., those that have bonded tokens).
Votes and the choice of validator are then weighted toward the nodes that have been bonded the longest or have bonded the most tokens.
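A minimal sketch of this kind of weighted selection follows. It is illustrative only: the weighting formula, node names, and stake figures are invented for the example, and Solana's real leader selection works differently.

```python
import random

# Hypothetical nodes with bonded stake and time bonded (in epochs).
nodes = {
    "node_a": {"stake": 500, "bonded_epochs": 10},
    "node_b": {"stake": 300, "bonded_epochs": 40},
    "node_c": {"stake": 200, "bonded_epochs": 5},
}

def selection_weight(info):
    # Assumed weighting: bonded stake scaled by how long it has been bonded.
    return info["stake"] * info["bonded_epochs"]

def pick_validator(nodes, rng=random):
    # Pick a node with probability proportional to its weight.
    names = list(nodes)
    weights = [selection_weight(nodes[n]) for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

validator = pick_validator(nodes)
```

Under this assumed formula, a node with a long-standing bond can outweigh one with a larger but newer stake, matching the "longest or most bonded" criterion described above.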
Solana relies on quick confirmations: if a node does not respond within the specified time, it is marked as invalid and removed from the vote. If that node was the validator at the time, a new election is held to select a new one.
If a supermajority of nodes (two-thirds) votes within the timeout, the branch is considered valid.
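The supermajority test itself is simple to express. The helper below is a hypothetical illustration of the two-thirds rule described above; real implementations typically weight votes by stake rather than counting nodes.

```python
from fractions import Fraction

def is_supermajority(votes, total_nodes):
    # Exact two-thirds comparison, avoiding floating-point rounding.
    return Fraction(votes, total_nodes) >= Fraction(2, 3)

print(is_supermajority(67, 100))  # True:  67/100 >= 2/3
print(is_supermajority(66, 100))  # False: 66/100 <  2/3
```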
Slashing is the act of nullifying a stake. It deters a validator from cheating or attempting to validate multiple branches, because the bonded tokens would be lost.
A major difference is the concept of secondary elected nodes. Once elected, a secondary node can take over the leading role in the event of a network outage or other failure.