A relative proof conflict resolution, irreversibility & database service for crypto-execution & data processing
Redgold is a peer-to-peer crypto network, database & engine focused on data infrastructure & financial uses, with an explicit absence of global state and centralized data structures. Localized signature batches consisting of arbitrary data and peer observations, combined with hashed cross-embeddings to other networks and peer trust scores, enable a variety of optimizations around the security of executor contracts. Operations are expressed not as live running contracts or services, but as data flows and transformations, expressing both state and code references under a UTXO model. Generic multi-language pluggable executors, backed by WASM and other layers, along with 'floating dependencies' resolved at execution time and cross-language relational algebra and SQL support, allow the validation operations of decentralized applications to be expressed as lazily evaluated 'flows' using conventional SQL queries, UDFs, map functions, tables, indexes and updates. Rather than competing with other networks, or acting as a roll-up, Redgold acts symbiotically: as data infrastructure for other crypto-applications (a 'pre-global-consensus' process & data service), as a mechanism to improve their security & its own, and as an independently functioning network. The motivating use cases are the proper support of decentralized financial contracts for ETFs, trading strategies, and arbitrary portfolio target contracts, as well as the construction of data-pipeline-style applications relying upon both SQL & data transformation dependency graphs.
Introduction
All blockchains currently rely on global state in the form of a top-level network hash. This is mostly a mistake. It's certainly an easy solution and makes a lot of the logic less complicated, but for scalable applications it's the concurrency equivalent of relying on a global variable in a Python script, and it plagues every solution with problems that have been well understood in other areas of programming for a long time. It made sense, starting from PoW (Proof of Work), to co-mingle all data under a primary reference, as the max operation over work requires a single hash reference, but it causes intense difficulties as a primary assumption, and is far less necessary once the PoW constraint is removed.
Attempts to solve this through sharding, partitioning, or similar schemes that still compound up to a global state remain fundamentally bottlenecked by this problem, as no real concurrency can be expressed when values must always collect or compound into a single stage. The core solution is to focus on where ordering is actually required for security. Lamport ordering is typically used in distributed systems to provide an ordering across multiple machines for data that may potentially conflict, and the key takeaway is that ordering only matters for causal relationships (i.e. data dependencies):
$$a \rightarrow b$$
where $a \rightarrow b$ means "$a$ happens before $b$".
This is naturally expressed under the UTXO model as the direct parent references associated with a classical currency transaction. Any attempt to provide an ordering for data which is not related is simply not required for security, as any double spends (or state conflicts) will only demonstrate that conflict relative to their direct data dependencies.
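As a minimal sketch of this idea (the names and structures below are hypothetical, not the actual Redgold data model), detecting a double spend only requires comparing transactions that share a direct parent reference; unrelated transactions never need to be ordered against one another:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    """Hypothetical minimal transaction: consumes parent UTXO ids."""
    tx_id: str
    inputs: frozenset  # ids of parent UTXOs this transaction spends

def find_conflicts(transactions):
    """Detect double spends purely from causal (parent) references.

    Two transactions conflict only if they consume the same parent UTXO;
    transactions with disjoint dependencies require no relative ordering.
    """
    spent_by = {}
    conflicts = []
    for tx in transactions:
        for utxo_id in tx.inputs:
            if utxo_id in spent_by and spent_by[utxo_id] != tx.tx_id:
                conflicts.append((spent_by[utxo_id], tx.tx_id, utxo_id))
            else:
                spent_by[utxo_id] = tx.tx_id
    return conflicts

# Example: b and c both spend utxo1 (a conflict); d is unrelated and needs no ordering.
txs = [
    Transaction("b", frozenset({"utxo1"})),
    Transaction("c", frozenset({"utxo1"})),
    Transaction("d", frozenset({"utxo2"})),
]
print(find_conflicts(txs))  # [('b', 'c', 'utxo1')]
```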
Co-mingling of unrelated data leads to enormous complexity in validation, all of which is strictly speaking unnecessary (outside the core concurrent, causally dependent validation). If a transaction is valid and appears in one block or the next by some artificially imposed cut-off, it does not actually matter where the cut-off is. This issue is currently solved by a series of complicated consensus processes that build agreement on the exact location of these cut-offs -- which as a side effect solves the original validation problem, but introduces problems that did not need to be introduced. There are plenty of valid use cases for consensus operations. But within the core context of the existing data model used by most blockchains, there is a serious flaw in reasoning in applying these processes to ordering problems that do not require it. Most consensus processes simply waste effort attempting to determine this exact ordering, even when all the data is actually in agreement and there are no 'legitimate' conflicts.
The purpose of a single conventional cryptocurrency transaction is for a given sender to prove to a given receiver that they have not created a secondary transaction, and that the transaction will be legitimately accepted by as large a number of people as possible, thus retaining value. Other decentralized contract examples follow this general approach, despite adding additional complexity. We'll discuss this as an example before extending to more complicated contract scenarios. Most designs focus solely on double spends, either through proof of work to demonstrate agreed state, or through a consensus process with collateral or some other criteria. This obfuscates the more basic fact that the network itself can fork depending on the software version or decisions of the community (as with Bitcoin Cash, or the DAO hack).
In the case of PoW, forking of the network is handled manually, with individual nodes deciding which group they wish to join and reconfiguring their deployment, software, or peers list; there is very little automation around this process, leading to potential confusion, fragmentation, and disruptions in stability. Contrary to popular conception, even Bitcoin manually reset its network in the early days, after the discovery of a bug allowing the printing of arbitrary coins, demonstrating that it is the peers themselves, rather than the chain data, who are the ultimate arbiter of true state.
In collateral-based forks, there is the added issue of nodes potentially losing substantial value due to an arbitrary community decision, as well as an additional threat vector from collusion of large stakeholders, who can either perform attacks against smaller validators to seize their assets or otherwise dictate decisions against the interests of other stakeholders in favor of themselves. Collateral in general induces centralizing effects, but it is not useless as a mechanism for distinguishing between completely dishonest nodes and somewhat honest nodes. Additionally, relying on collateral alone defeats the incentive towards openly audited networks, as the auditors should ideally be an independent third party with no incentive to behave dishonestly. Malicious validators voting with respect to collateral stake can easily and immediately hijack an otherwise valid network with no checks and balances against them. The ideal network acts more like a decentralized notary service than a collateral-backed company: it performs a public proof to multiple participants for a fee, and stands to gain little from dishonesty.
The main reason peer scoring is typically avoided is the oracle problem of introducing off-chain information into an existing network. Stake is easy to choose because all of that information is available within the validator context, without any requirement to introduce external information. The problem is that external information is incredibly useful and valuable, and the mechanism for introducing it repeatedly crops up in other problems (like determining the price of an off-chain asset). Since this problem already exists for contract validation requirements, it should be built into the core process of validator security rather than ignored.
The main goal of a consensus process (as it applies to these types of transactions) is to prove to the largest number of people, as quickly and efficiently as possible, that a transaction is legitimate and does not conflict with any other piece of data. The receiver of funds has an incentive to retain as much value as possible by not accepting an illegitimate transaction; this is typically done by waiting for a block confirmation interval, rather than by considering the trustworthiness of the auditors who validated it and how likely they are to reverse their decision. Ideally, for a receiver to be able to instantly use funds, he would want the transaction immediately finalized by the largest possible number of people, and the state embedded in the largest possible number of networks, to guarantee irreversibility.
Naturally, the quickest way to do this is to have every independent observer immediately sign the transaction and accept it as part of a ledger, but that would cause an issue if a double spend were spreading across two different sections of the network independently. This means there must always (to guarantee perfect security) be some minimum delay in acceptance, relative to the network's ability to spread the message to as many participants as possible:
$$T_{min} = \frac{D}{v}$$
where $T_{min}$ is the minimum delay time, $D$ is the diameter of the network (longest path between any two nodes), and $v$ is the average message propagation velocity in the network.
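With purely illustrative numbers (not drawn from any real deployment), a network diameter of $D = 6$ hops and an average propagation velocity of $v = 5$ hops per second gives

$$T_{min} = \frac{D}{v} = \frac{6}{5} = 1.2 \text{ seconds},$$

meaning no receiver should treat an acceptance as meaningful in less time than the message could plausibly have reached the rest of the network.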
The onus here can also be on the honest sender, who has the ability to broadcast that message as quickly as possible to as many honest nodes as reasonable -- which again is the ideal case for proving to a given receiver that his funds have been accepted by the network. A perfect receiver attempting to determine accuracy as quickly as possible would likewise query as many nodes as possible on whether they have accepted the transaction or observed a double spend, and would continue to query the network to verify those nodes have not reversed their decision. In the event that the sender is dishonest, he would attempt to spread the conflicting information to different sections of the network -- or otherwise attempt to hijack the network and delay his 'attempt' at spreading this information to some future period of time based on receiver confirmation delay settings (by offering a forked alternative data structure with the other half of the spend).
This leads us to pose the following:
The Fundamental Theorem of Double Spend Consensus
The perfect network acts as a medium to connect two recipients of a double spend such that they can communicate to one another and verify the invalidity of a transaction.
This applies both to the immediate double spend case, and the future double spend case (chain reversal). In the latter, the network is simply connecting two pieces of information and attempting to detect an actual conflict between them. This process is obfuscated by focusing on the entire state (hash chain reference) of the network, as opposed to focusing directly (and only) on the conflicting data. This separation is extremely important for more complicated types of contracts -- although it may not seem obvious or important when discussing only currency-based transactions.
In any case, there must be some delay period T, which splits the 'ideal' process into two phases. First, a PENDING operation must be signed by a node -- otherwise dishonest nodes would be able to lie and pretend to be ready to accept one transaction while secretly spreading its secondary double spend -- followed by an ACCEPTED operation, which again signs the transaction, this time with an irreversibility condition. The only conditions under which a node should ever reverse after signing an ACCEPTED operation are severe worldwide network degradation or a manual whole-network fork, which should be marked with a TOMBSTONE operation and signature (rather than re-writing any data). Tombstones are, however, also used for PENDING operations that do not get accepted. To prevent attacks by anonymous sources against the first PENDING operation, a mem-pool buffer is used before signing any data to stop (some) obvious attacks, which can introduce an additional minor delay phase. This can be skipped by nodes signing or pre-approving certain addresses for faster operations.
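A minimal sketch of the observation lifecycle described above, assuming a simple in-memory state machine (the state names follow the text; everything else is illustrative):

```python
from enum import Enum, auto

class ObservationState(Enum):
    """Hypothetical local observation states following the phases described above."""
    MEMPOOL = auto()    # buffered before anything is signed
    PENDING = auto()    # signed as observed, not yet irreversible
    ACCEPTED = auto()   # signed with an irreversibility condition
    TOMBSTONE = auto()  # signed rejection / abandonment marker (data is never rewritten)

# Allowed transitions; unsigned mem-pool entries can simply be dropped without a tombstone.
ALLOWED = {
    ObservationState.MEMPOOL: {ObservationState.PENDING},
    ObservationState.PENDING: {ObservationState.ACCEPTED, ObservationState.TOMBSTONE},
    # ACCEPTED is only ever tombstoned under severe degradation or a manual whole-network fork.
    ObservationState.ACCEPTED: {ObservationState.TOMBSTONE},
    ObservationState.TOMBSTONE: set(),
}

def transition(current: ObservationState, nxt: ObservationState) -> ObservationState:
    """Apply a state transition, refusing anything outside the lifecycle above."""
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt

# Example: a transaction moves from the mem-pool buffer to PENDING, then to ACCEPTED.
state = ObservationState.MEMPOOL
state = transition(state, ObservationState.PENDING)
state = transition(state, ObservationState.ACCEPTED)
print(state.name)  # ACCEPTED
```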
Processing all of these signatures on each transaction individually would be ideal and lead to the lowest latency, but it is not strictly necessary given the minimum delay period anyway. Instead, the most optimal operation is for each node to create mini-batches of signatures (see the prior ACCEPTED operation), creating a Merkle proof for each hash or transaction being signed in each batch. This is the only 'block' level information that is ever really required, and this is the fastest way to actually create it. Every operation involved in creating blocks whose order is agreed by multiple nodes only introduces additional delay between creating a proof and a receiver being able to check that proof. An ideal receiver interested in some arbitrary transaction only cares about collecting a set of proofs from as many nodes as possible, so the final output does not depend at all on the arrangement of blocks, but rather on how many nodes have deemed something irreversible. Bitcoin and other PoW networks typically obfuscate how this actually happens, since conventionally you would just check a couple of external sources to find out what the 'accepted blocks' are, rather than manually polling a large number of peers. They also don't automatically give you much information about who these peers are -- and even worse, in some cases those peers never actually treat anything as 'irreversible' if a longer chain comes along. Others DO offer short-term irreversibility, but impose halting constraints based on network majorities. Again, the main problem here is obfuscated by treating a block as a central data structure, when really the data of interest is the cumulative amount of trust associated with the set of peers and Merkle proofs available. A receiver will really only 'accept' a transaction after N block confirmations, once it has been confirmed by several existing block aggregator websites -- but this is more properly expressed as accepting a proof from N peers, with scores associated with each peer.
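A minimal sketch of such a mini-batch, assuming SHA-256 and a textbook Merkle fold (the helper names are hypothetical, not the actual implementation):

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash helper."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root (duplicating an odd last leaf)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A node's local mini-batch: hashes of transactions it has just observed and signed.
batch = [h(f"tx-{i}".encode()) for i in range(4)]
root = merkle_root(batch)

# The node signs only `root`; a receiver later checks a per-transaction Merkle path
# against that signed root, with no agreement on batch contents required from other nodes.
print(root.hex())
```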
These scores, again, are in most networks arbitrarily and manually calculated: you trust some website to give you correct block information, or you trust the software to connect to peers and download the block data directly without knowing who you are connecting to. This obfuscation leads to all sorts of potential attacks and confusion, and requires many manual processes geared around protecting against it. In the event of forks, the network is mostly unprepared and requires operators to manually adjust to whichever fork they are interested in, and software version forks are likewise handled manually.
The way to encapsulate all of this is to say that an arbitrary reception of a transaction depends on a set of proofs, with peers and scores associated with those peers. Collateral or proof of stake is absolutely useful as a component in calculating that score, as the validator stands to lose something for dishonesty, but again it should not be the only component. Proof of work is also useful for the validator to prove to independent third parties that it has not reversed its earlier decision, to prevent suppression of information -- but that can easily be achieved by embedding each validator's state within another network already undergoing proof of work.
Any independent network can embed as much state as desired into any kind of roll-up to gain use of the mining ecosystem -- which adds all the pre-existing protections onto any given variation. Again, the idea of 'zero-trust' in crypto as a whole is really a pretense that obfuscates or ignores the steps that do involve trust by pretending the problem doesn't exist. Rather, those areas of weakness should be focused on to build a more secure model.
As the final outputs here are a set of peers, scores, and proofs associated with them in determining the authenticity & irreversibility of a transaction, there is no explicit need (for this type of transaction) to create any centrally agreed upon data structure or global state. It can be avoided entirely. Really, this is the most generic abstraction of what blockchains are actually doing -- supplying a Merkle proof of acceptance, irrespective of the chain data definitions they determine. This structure already partially exists in the 'multi-chain' ecosystem, where we have hundreds of different actively used chain structures and peers, each essentially only used to provide an end-user a set of Merkle proofs proving acceptance. Rather than spawning more networks to solve each application-specific use case, you can simply take this philosophy to its logical extreme by eliminating the intermediary collection operations and relying only upon the final proof.
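A minimal sketch of the receiver-side decision this implies, assuming locally assigned peer scores and a per-transaction risk threshold (the function, score values, and peer names are illustrative, not a specified protocol):

```python
def accepted(proofs, peer_scores, threshold):
    """Receiver-side acceptance: sum the trust of peers that supplied a valid proof.

    `proofs` maps peer_id -> bool (did this peer supply a verifiable ACCEPTED proof),
    `peer_scores` maps peer_id -> the receiver's locally assigned trust score,
    `threshold` is the receiver's own risk tolerance for this transaction.
    """
    observed_trust = sum(
        peer_scores.get(peer_id, 0.0)
        for peer_id, valid in proofs.items()
        if valid
    )
    return observed_trust >= threshold

# A small payment might clear at a low threshold; a large one waits for more peer proofs.
scores = {"node-a": 0.9, "node-b": 0.7, "node-c": 0.2}
print(accepted({"node-a": True, "node-b": True}, scores, threshold=1.5))  # True
```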
Each 'block' can be a pure mini-batch of local relative proofs specific to that validator node, never agreed upon with other nodes. That node's state can then also be rolled up and embedded into both other networks and other nodes, with some tiny fractional cost associated with proving / auditing that the node is not reversing transactions. So the set of proofs consists not only of proving that the individual item has been accepted, but also that the node has been behaving honestly and not reversing transactions, by checking the state of other network embeddings and other peers' captures of its observations. Any honest node who captures proofs from dishonest nodes would immediately be able to prove that some other node has been reversing transactions. So long as all legitimate tombstones are captured, it is relatively easy to prove whether a node is behaving or not.
That means the only information necessary for linking mini-batches is the node's own previous mini-batch, giving a linked, relative structure. The interval for embedding this data as state information into other networks does not need to be small, as even one embedding per day would capture everything in a single proof. The same holds for embeddings of other nodes' batches within the network: each node only needs to capture a small fraction of other nodes' behavior to maintain the information necessary to catch reversals. All of this is really to say that any 'block' level information should be considered highly ephemeral, and merely exists as a way of constructing per-transaction proofs and per-peer proofs, as nodes do not need to know the exact data composition of the rest of the network's individual node batches. When a conflict occurs, they can dynamically request that information and verify it during resolution to determine which side of the conflict is most accurate.
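A minimal sketch of that linked, relative structure, assuming each batch commits only to the node's own previous batch hash plus the batch's Merkle root (names and fields are hypothetical):

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    """SHA-256 hash helper."""
    return hashlib.sha256(data).digest()

@dataclass
class MiniBatch:
    """A node's local mini-batch: links only to that node's own previous batch."""
    prev_batch_hash: bytes   # hash of this node's previous mini-batch (not a global block)
    merkle_root: bytes       # root over the transactions observed in this batch

    def batch_hash(self) -> bytes:
        return h(self.prev_batch_hash + self.merkle_root)

# Chain of purely local batches; an occasional external embedding (e.g. one per day)
# of the latest batch_hash into another network transitively commits the whole chain.
genesis = MiniBatch(prev_batch_hash=b"\x00" * 32, merkle_root=h(b"batch-0"))
second = MiniBatch(prev_batch_hash=genesis.batch_hash(), merkle_root=h(b"batch-1"))
print(second.batch_hash().hex())
```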
The only potential drawback here is the additional storage overhead that comes from not having a single agreed-upon, deduplicated data structure. That is more than worth it: the data is exclusively small metadata of hashes, and the minor penalty is more than made up for by the security and latency benefits of immediately capturing as many proofs as possible across the largest possible fraction of the network -- and again, as we can embed this data in other networks, we immediately gain access (for free) to their global state information for long-term synchronization and replication. This means the persistence of other nodes' blocks need not be strictly required, as it can be re-built on a per-proof basis as needed, so long as each node retains its own batches and minor references to others.
It would be foolish to completely ignore all the other existing networks, avoid building upon them in some way, and waste their potential for observations; this model provides a very easy way not only to make use of them in a relative, non-global form, but to actually enhance their security as well. Any proofs built by this network would also, as a side effect, enhance the security of any information captured by the other networks. I.e. a proof produced to show a node's state within a BTC block would, in so doing, capture and secure the BTC block hash and provide additional security to any other network. In this manner, this model acts symbiotically with other networks, and the more that are integrated the stronger the proof becomes. This same model also extends to arbitrary types of transactions running on each local node, meaning the individual node has the freedom to determine what types of data it wants to validate, which may differ drastically between nodes (allowing arbitrarily supported currencies or operations).
Global state is the main issue plaguing most attempts at scalable consensus, and while there are some good attempts at solving it (and they should continue to be developed), they are unnecessary for a certain category of solutions. Consensus is genuinely useful for problems that require more advanced state management and do actually require global state -- but extensions of basic transactions simply do not require global state (so long as global fees are removed and do not introduce a global dependency), as they only require dependency-level information to validate. Because a double spend is a relatively simple conflict, applying most types of consensus to it is overkill, and this 'simpler' model offers enhanced security specific to this subset of problems. It will be discussed later how this can be extended to contracts and more complex operations -- but suffice to say they are in the same category of solutions for which global state is not specifically required, and for which it actually causes problems in the custom cases. Many attempts to over-generalize or over-abstract the original transaction types into arbitrary 'smart contracts' or 'smart applications' miss this category of problems. Simply restricting yourself to non-global state and pure dependency interactions offers many technical benefits. While it does restrict the category of solutions, most of the problems within this space have not been sufficiently or properly solved.
The separation of this problem space is critically important, as most blockchains which support global state have encountered enormous issues. Solana, while having low latency and other benefits, regularly halts the network for entire days over extremely minor problems (even if the nodes agree on 99.999% of the data, they halt on a global hash difference). This is a symptom of a poor design. It could be argued that there are other ways around the global state issue (by requiring fewer validators to approve, or other arguments), but such arguments may introduce additional security vulnerabilities or other issues, and they completely miss the main problem -- which is global state. Blocks try to offer 'strong guarantees' of total agreement, but there is no total agreement in any aspect of life. There is only risk, and measuring it should be done per-transaction. By focusing explicitly on conflicts, you can safely ignore the 99.999% of cases where validators already agree, and instead devote the network's power towards settling disagreements.
This does not mean all problems solvable through global state and consensus are irrelevant. This process, acting as a fast conflict-resolver, can still undergo consensus through other networks' protocols immediately before or after local signing. This approach offers flexibility to enhance existing consensus work by reducing the overhead of detecting conflicting values, and should be considered a complementary solution when consensus or global state is actually required. For the purposes of the problems proposed later, neither is.
The best way to discuss scaling is to think of the problem as one of 'optimistic routing'. For true decentralization you must not expect every node to be a supercomputer -- you should expect each node to have a maximum amount of processing power and storage, which might not be very much at all. Storage constrains the threshold associated with hash distances: as more data is encountered, the hash distance is lowered, so that no global state information is maintained and nodes merely recycle and adjust their distance based on availability.
$$d = d_{max} \cdot \frac{S}{D}$$
where $d$ is the current hash distance, $d_{max}$ is the maximum hash distance, $S$ is the available storage, and $D$ is the total amount of data.
Any of the relevant data of interest required for validation or long-term storage should be based purely on each node's local, relative decision whether to participate in the storage of that data. That way, there is no central coordination of partition sizes at all; it is simply not required, as each node will attempt to provide as much security as it reasonably can. In the worst case where the network is overwhelmed, theoretical security does drop, as nodes do not have time to process and sign as great a fraction of the data, but this can be addressed by making receivers aware of the congestion and either adjusting fees as appropriate for prioritization, or simply notifying receivers that the minimum delay time has increased, so they know not to immediately accept something. Hash distance should also incorporate fee level information, which can be done without requiring central coordination.
As an example, consider the storage of a given UTXO entry, keyed by its id. It should be governed by the total fraction of storage devoted to UTXOs (let's say half a node's storage is dedicated to UTXOs, 25% to transactions, and 25% to observations). Within that half, assuming we reach storage capacity, we adaptively lower the hash distance so that we can prune the existing storage. Nodes would preferentially attempt to validate new transactions associated with data they are storing within that distance, and simply resolve any missing data from other nodes who store those UTXOs. Similar limits apply to the bottleneck associated with proof production, restricting the inputs as appropriate to maximize validation throughput on data with the greatest storage affinity, preventing excess resolutions.
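A minimal sketch of this adaptive behavior, assuming a Kademlia-style XOR distance metric and the proportional scaling of the formula above (the metric, partition percentages, and helper names are assumptions for illustration, not the actual Redgold mechanism):

```python
import hashlib

def xor_distance(a: bytes, b: bytes) -> int:
    """XOR metric between two 32-byte hashes, interpreted as an integer."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

MAX_DISTANCE = 2 ** 256 - 1  # maximum possible distance for 32-byte hashes

def current_distance(max_distance: int, available_storage: float, total_data: float) -> int:
    """Shrink the accepted hash distance as data outgrows local storage.

    Mirrors d = d_max * (S / D), with a cap at d_max when storage still exceeds data
    (the cap is an added guard, not stated in the text).
    """
    if total_data <= available_storage:
        return max_distance
    return int(max_distance * (available_storage / total_data))

def should_store(node_id_hash: bytes, utxo_id_hash: bytes, distance_threshold: int) -> bool:
    """A node stores (and preferentially validates) data that hashes near its own id."""
    return xor_distance(node_id_hash, utxo_id_hash) <= distance_threshold

# Example with a hypothetical UTXO partition whose data (0.9) exceeds its storage (0.5).
node_id = hashlib.sha256(b"node-public-key").digest()
utxo_id = hashlib.sha256(b"utxo:abc:0").digest()
threshold = current_distance(MAX_DISTANCE, available_storage=0.5, total_data=0.9)
print(should_store(node_id, utxo_id, threshold))
```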
Really, the main optimization of this entire solution is to avoid consensus stages entirely. This brings us into the solution space of a 'passive' version of consensus, where conflicts are simply detected and resolved independently by each node -- with large disagreements leading to actual forks. Each node does not need to explicitly cover all the data involved; it only needs to act as a marker for a particular event it has observed. This way no one node ever needs to maintain global state. The goal of these proofs is to show the largest number of signers or observers of an event as quickly as possible. The data that is observed the most, spreads the most quickly, and gains the most trust relative to each node and its definition of peer trust is locally accepted. And since any operations or contracts requiring consensus can simply integrate with another consensus-based network through buffer observations, making use of consensus where it is needed remains easy.