The core data model is based on UTXOs similar to Bitcoin, and shares the greatest amount of similarity with Cardano's Extended UTXO model. Every node can supply local trust scores for the other nodes it is aware of. Additional trust structure is calculated through a model to build a transitive, localized trust vector representing how much each node trusts any other given node. These scores are used whenever a conflict is found, to determine which side of the conflict to agree with. Nodes continuously attempt to gossip & synchronize their data with other nodes -- but only ever perform 'consensus' operations when a disagreement is found. Nodes regularly sign an observation buffer with a merkle proof containing all recent data they have observed. They attempt to communicate most directly with the nodes they are subscribed to, as well as using libp2p gossipsub to provide additional communication to random nodes on the network.
When any double spend is observed, a node observing it abstains from spreading the secondary message further into the network. Conflicts are collected and all observation proofs of interest are summed together to determine which transaction spread through the network first, by determining the aggregate level of trust associated with each. So long as the nodes quickly resolve any conflicts among themselves, it is irrelevant which transaction actually came first; total ordering is not required as long as one transaction is consistently chosen. No commits to the internal database are made until such conflicts have been resolved, and once finalized, all transactions are assumed correct.
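The trust-weighted resolution step can be sketched as a small function. This is an illustrative model only -- the names, the flat score dictionary, and the tie-break rule are assumptions, not the protocol's actual data structures:

```python
def resolve_conflict(observations, trust):
    """Sum trust-weighted observation proofs per conflicting transaction
    and accept the side with the most aggregate support.

    observations: dict mapping tx_hash -> list of node ids that observed it
    trust: dict mapping node id -> local trust score in [0, 1]
    """
    scores = {
        tx: sum(trust.get(node, 0.0) for node in nodes)
        for tx, nodes in observations.items()
    }
    # Deterministic tie-break on the hash keeps all honest nodes consistent
    # even when aggregate trust is exactly equal.
    return max(scores, key=lambda tx: (scores[tx], tx))
```

Because every node applies the same deterministic rule over the same observation proofs, all honest nodes converge on the same side of the conflict without requiring total ordering.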
The motivation behind this will be discussed in more detail in the security section, but this structure was chosen over the account model for enhanced security and performance. The core focus must be to quickly determine any conflicts with minimal transfer of data -- and to prevent any situation requiring a rollback. Other chains, like Bitcoin, potentially require validating multiple chain states, as there may be competing leading edges of blocks with different amounts of work. In our case, once a commit has been made to a node's internal store, it should never need to be reversed. All observation operations can then happen using a single state reference. Because UTXOs are consumed exactly once, conflicts are detected immediately from a single hash. Any re-use of a hash id qualifies as a conflict, making propagation simple, as no state is required to maintain this other than the direct dependencies.
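The single-hash conflict check can be made concrete with a minimal sketch. The class and method names here are hypothetical; the point is only that detection requires one set lookup and no state beyond direct dependencies:

```python
class UtxoStore:
    """Tracks unspent output ids; any second spend of the same id is
    flagged as a conflict before any commit is made."""

    def __init__(self):
        self.unspent = set()
        self.spent = set()

    def add_output(self, utxo_id):
        self.unspent.add(utxo_id)

    def try_spend(self, utxo_id):
        if utxo_id in self.spent:
            return "conflict"   # hash id re-use: a double spend
        if utxo_id not in self.unspent:
            return "unknown"    # missing dependency, cannot validate yet
        self.unspent.remove(utxo_id)
        self.spent.add(utxo_id)
        return "ok"
```

Contrast this with the account model, where detecting a conflicting update requires comparing balances or nonces across competing states rather than a single membership test.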
Irreversibility is the prime concern of any blockchain network. Rather than approaching this problem from the perspective of chain state data, we instead consider the behavior of each individual peer composing the network. Most operations on a blockchain are redundant or otherwise meaningless to propagate. If all peers agree that a transaction is valid, there is no real issue or need to communicate it other than spreading the original data and confirming it. This comprises the majority of transactions.
The first set of difficulties is encountered when a double spend is broadcast within some short time period. This case is the easiest to handle, as its resolution is meaningless in terms of end users. A short propagation delay between two double spends observed on the network allows an arbitrary choice in terms of acceptance. The ideal behavior is simply for nodes to determine which side of the double spend has spread to the widest group of the network, and accept that. Reversals that happen on a short time scale are also mostly irrelevant, as few would consider such a transaction anything other than pending.
Having ruled out that subset of transactions, the next case to deal with is a shadow-chain attack. This is where a node has signed multiple conflicting transactions, but has not revealed one side of the signatures. It initially claims that one side is valid, but later reverses its decision. Any honest node watching it would be aware of the reversal and able to produce a proof showing that it signed multiple conflicting pieces of data or otherwise reversed its decision. Should it also refuse to reveal its observation buffer to many nodes, that too would leave evidence of an omission, which would eventually be discovered when the duplicate signature is revealed. While this attack is more complicated in the event of severe network degradation, fundamentally the attackers leave evidence of their behavior through invalid signatures. This case is identical to a node supplying a TOMBSTONE operation for an ACCEPTED transaction, as that too would leave signature evidence.
The next category of attack is the shadow omission attack. Here, a node refuses to sign one half of a double spend, lets the rest of the network proceed with approving it, and later reveals the hidden other half, signs it as ACCEPTED, and considers that state valid. This is the most complicated attack to defend against, as there is no actual self-conflict in terms of signatures. This is where the requirement comes in that nodes must sign each other's observation buffers & embeddings. Trivially, we might think each observation / local signature batch block structure is completely independent, but they cannot be if we are to defend against this. We need some mechanism demonstrating that a given peer has accepted the results of other peers' behavior. This means that if the rest of the network acknowledges a transaction, that peer must also have acknowledged the existence of that transaction. This is accomplished by signing not just transaction hashes, but also some subset of the observation buffers associated with other peers. The exact determination of where and when to include those references is discussed in the observation formation process, but it can be assumed that any honest node will include enough references to other nodes' buffers that a proof can be constructed showing it acknowledged the existence of peer buffers containing the data, yet itself censored or omitted a transaction.
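A toy sketch of such a buffer, assuming (hypothetically) that peer references are simply folded into the same merkle root as observed transaction hashes, shows why omission is provable -- the signed root binds the node to both its own observations and the referenced peer buffers:

```python
import hashlib

def h(x):
    return hashlib.sha256(x.encode()).hexdigest()

def merkle_root(leaves):
    """Minimal merkle root over string leaves (duplicate-last padding)."""
    layer = [h(leaf) for leaf in leaves]
    if not layer:
        return h("")
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

class ObservationBuffer:
    """A batch of observed tx hashes plus references to peers' signed
    buffer roots; the node signs the combined root."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.observed = []
        self.peer_roots = []

    def observe(self, tx_hash):
        self.observed.append(tx_hash)

    def reference_peer(self, peer_root):
        self.peer_roots.append(peer_root)

    def root(self):
        return merkle_root(self.observed + self.peer_roots)
```

If a node references a peer buffer whose merkle tree contains transaction T, but T never appears among that node's own observed hashes across its signed roots, the two signed roots together form a proof of acknowledged-but-omitted data.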
The secondary case to the shadow omission is where a node refuses to include any peer observation reference containing the omitted transaction -- assuming those references are properly distributed across the network, this would result in a node that does not include ANY peer references, which would be trivially detectable.
Each of these cases becomes more complicated when a node does not have access to the entire data set. But the same assumptions hold when hash distance is taken into account, so long as there is sufficient coverage. In the event of insufficient coverage due to excess processing requirements relative to current network resources, operations must slow down sufficiently to guarantee security.
There has been a general trend since the construction of the Ethereum EVM to bring execution closer to conventional executors. The use of WASM, LLVM, Python bindings which generate lower-level languages, and more, is indicative of a desire to use languages and tools closer to conventional, non-decentralized programming. The extension of Bitcoin op codes to the EVM op codes makes sense historically, as it is a step forward from a simpler transaction-based DSL to a more general-purpose DSL, but in the long term it is incredibly restrictive compared to generic code execution. The assumptions the EVM makes are specific to the original architecture, and secure untrusted code execution can be handled differently and generically in other ways without relying on those restrictions, as proven by numerous other projects, most notably Solana.
In general, different projects are also attempting to support multiple executors, either generically as framework components (see Polkadot) or at a sub-VM binding level (see Cardano). The desire to support many different types of languages and executors speaks to the goal of user adoption and ease of use -- a trend seen in other non-crypto distributed compute projects such as Spark, which has many parallels as a generalized compute engine / SQL processing framework for distributed computation across executors in multiple languages. The same trend exists even more generically with any service or execution-DAG orchestration platform, such as Airflow, Oozie, or Azkaban, which provide executors, service calls, binary dispatches, or other mechanisms to handle language transitions across composable operations -- and more generally with the trend toward service-oriented architectures, which has seen parallels in the development of ABI usage (application binary interfaces) as an improvement upon simple function delegation in the EVM, sharing structural similarity with conventional APIs.
The most desirable solution, from a development perspective, is a network that supports arbitrary, generic executors, and thereby multiple languages. Interfaces between executor operations should span not just ABI operations (which are generally intra-executor runtime operations, either as bindings or direct invocations), but also API-style invocations (which span executor types and thereby mimic or translate conventional service calls into a crypto context).
Conventional crypto applications are expressed essentially as single binaries, where dependencies are reflected through ABI-like external contract references or function calls. This becomes extremely complicated when dealing with interface invocations across multiple potentially conflicting binaries. The proper way to handle separation across multiple binaries is with a data schema interface similar to Spark StructTypes. Each binary corresponds to a transformation, and so long as the input and output schemas are known to any binary depending on a transform, it should be simple to chain such transformations together.
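The schema-checked chaining idea can be illustrated with a small sketch. The transform representation here (dicts with `input_schema` / `output_schema` keys) is a hypothetical stand-in for something like Spark StructTypes, not an actual API:

```python
def chain(transforms):
    """Compose binary transforms, verifying that each transform's
    output schema matches the next transform's input schema."""
    for a, b in zip(transforms, transforms[1:]):
        if a["output_schema"] != b["input_schema"]:
            raise ValueError(
                f"schema mismatch between {a['name']} and {b['name']}"
            )

    def run(rows):
        # Each transform stands in for an independent binary invocation.
        for t in transforms:
            rows = t["fn"](rows)
        return rows

    return run
```

Because only the schemas cross the binary boundary, two binaries authored by different developers can be chained without sharing any code, mirroring how Spark pipelines compose stages over known StructTypes.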
This becomes more complicated when dealing with shared memory, as the ideal mechanism for chaining together function invocations across multiple binaries is to use shared process memory to avoid inter-process communication or disk serialization. Suffice it to say this is simplest to handle in the WASM case, where the memory tables per engine can be accessed directly -- or IPC protocols or off-heap memory storage services ala Tachyon can be relied upon as a fallback for different executor types. Arrow is the most convenient cross-language memory format for handling data between transformations, and serves as a structural typing layer for data inter-op -- it can be conventionally bound in WASM memory and served to binary function calls.
C, Rust & C++ can be handled most easily, as compilation to WASM is already well established, and memory translation between pointers on the host machine and in the WASM sandbox is trivial, since direct memory access can be translated relatively simply.
There have been attempts to compile Scala and Java to WASM, as well as many other languages, but execution in JVM languages may benefit from or require a different approach based on classloaders and JVM sandboxing with dynamic multi-tenant runtime execution. Either way, these should be considered part of a plug-and-play executor system, where more executor types can be added to the same overall architecture over time. The issue with following Spark's approach of invoking Python under the JVM with py4j, and LLVM languages with invocation bindings, is of course the security aspect of dealing with untrusted code. There are sandboxing mechanisms using VMs and Docker that can limit the damage, and that route may prove useful in certain circumstances, but it comes with far weaker guarantees than a proper WASM engine, and makes it much more difficult to run on small nodes. With isolated VMs, where executors can be confined to separate machines, it can be accomplished, but for running small tasks on regular computers, phones, etc., WASM is simpler to secure.
Another interesting trend in pipeline development is that Spark, or rather Databricks, is rewriting its entire SQL / Dataset execution engine in C++ with Photon, providing high-level bindings in other languages (Java, Scala, Python, etc.). This came from issues with the JVM around GC pauses, memory management, and other optimization problems which continue to crop up, despite the existing solution still having beaten most benchmarks. MapReduce / streaming platforms have relied on Java for some time, while database solutions conventionally rely on C / C++ for performance, and now we're seeing somewhat of a marriage between the two platforms in terms of required end-user functionality, where the transformation engine must live much closer to the database.
Some projects have attempted to bypass the C++ performance requirement by rewriting the core engine in Rust. One attempt to re-create similar work, mostly abandoned but showing some interesting ideas, is FastSpark. Here, a single binary is deployed for multiple executors, and operations like map use a serde closure serializer to pass a serializable function reference from the driver to the (same-binary) executor. Such an operation may prove useful in expressing a transformation API, but is not strictly necessary other than for convenience, due to the binary restriction issue compared to WASM invocation.
Ballista is another attempt to rewrite Spark in Rust, but so far the implementation only has trivial SQL operations, and lacks proper handling of transformations and UDFs. The usage of Arrow for cross-language in-memory operations, though, is definitely a direction of interest for applications, as it demonstrates a capability that can be re-used with WASM executors. By binding Arrow tables to WASM memory, they can be re-used across multiple chained function invocations to provide in-memory optimizations across invocations.
Overall, there is a gap in dApps: people are unable to use conventional SQL to handle application state as they normally would when writing a standard application, or to store large amounts of data, due to restrictions like EVM storage space or fees. They could potentially use Storj or another service, but have no way to validate that external network directly, and hence cannot necessarily trust the results, with no ability to check the underlying service or properly interface with it through multiple peers re-validating it. The security benefits come from being able to validate (and hence re-create) the execution logic associated with a generic request. This is why every crypto project has siloed itself and built walls around a 'pure' environment -- unspoiled by the external world but otherwise weak at integration points. The correct direction for abstraction, then, is to create as 'large' an environment as possible, covering enough surface area in terms of executable code / state management to expand to greater numbers of use cases. It may also be the case that most of these currently separate and isolated networks end up converging on similar execution patterns, which would eventually make proper integrations with re-execution possible, but until then it is simpler to design around generic executors.
Relational algebra (SQL & logical query plans) generally does not require executors. While UDFs embedded within these plans do, the operations to load data from an index, build SELECT statements, and run COUNT aggregators can easily be shared across all application types (with some exceptions). While at some point it may be desirable to perform these operations using fully generic WASI-backed file interface executors, the level of complication involved is beyond the scope of any initial attempt, and only required in extreme cases of modification.
A common query plan API can and should be expressed as Protobuf, to be consistent with the protocol service API as well as to be handled by potentially multiple language loaders for executor interop. Substrait is generally the correct direction for serializing query plans, as it offers consistency with the Protobuf schema used for general transactions. Any operations not well supported by it should be adapted and incorporated into the general transaction schema. SQL strings, too, must be parsed into an intermediate representation, ideally similar to or using Substrait or an equivalent. In general, many default contract operations can and should be pulled into schema-level operations, to provide a consistent & unified experience for the most common operations, chief among them the loading and handling of data.
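The shape of such an intermediate representation can be sketched with nested structures. In practice this would be Protobuf messages (Substrait-style); the dict-based IR and operator names here are purely illustrative:

```python
# Builders for a tiny logical plan IR: each node names an operator
# and references its input, mirroring how relational plans nest.
def scan(table):
    return {"op": "scan", "table": table}

def where(source, predicate):
    return {"op": "filter", "predicate": predicate, "input": source}

def select(source, columns):
    return {"op": "project", "columns": columns, "input": source}

def execute(plan, tables):
    """Walk the plan bottom-up against in-memory tables of dict rows."""
    op = plan["op"]
    if op == "scan":
        return list(tables[plan["table"]])
    rows = execute(plan["input"], tables)
    if op == "filter":
        return [r for r in rows if plan["predicate"](r)]
    if op == "project":
        return [{c: r[c] for c in plan["columns"]} for r in rows]
    raise ValueError(f"unsupported operator: {op}")
```

Because the plan is data rather than code, the same serialized structure can be interpreted by loaders in any executor language, which is the property that makes a Protobuf encoding attractive.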
Distributed applications should be able to issue SQL or SQL-like queries against a unified generic data lake / database. One of the pain points in moving from the development of a conventional application to attempting to build a smart contract is the general difficulty in building an appropriate data model. EVM-style smart contracts attempt to persist the raw storage state associated with an arbitrary code block, in terms of persisting variables and translating that into the contract storage space. This has some commonality with binary serialization of a class instance in conventional programming to preserve the state of all variables -- typically considered an anti-pattern and best avoided. The more appropriate model is to separate data access and persistent storage from application-level logic -- a pattern more familiar to most developers.
While blockchains are generally thought of as a 'distributed multi-party database', the abstraction layers around actually using the functionality of a database are generally lacking due to this model. Binary serialization for storage ignores the typical abstractions of unique tables, indexes, queries, and updates, as well as database snapshots and deltas for bulk analysis. All of these conventional models should be translated to a decentralized context to allow a seamless user experience. This is not necessarily a straightforward task, as existing tools and functionality cannot handle this problem out of the box. The same patterns and tools in mainstream use typically assume the database users are relatively centralized: SQL injection attacks are mitigated by preventing raw end-user access, and security isolation is designed to prevent direct access to database creation and operational aspects as well. Basically, a conventional database cannot easily be used as-is without re-designing around decentralized security.
While executors require WASM or other sandboxes to deal with custom code execution, UDFs, and map functions over input data, query operations generally do not, and can be shared by a given network. For instance, a simple SELECT over some set of input fields only requires passing the set of fields -- which means the code is the same for all users. The different layers required to support generalized data infrastructure for applications can be expressed in terms of required availability latency, frequency of updates or changes, and frequency of access. While the ideal future would cover each of these forms of complexity in detail, two primary paths support the majority of common use cases: sqlite for live access, and parquet for long-term persistent storage. This mimics the conventional usage of a Postgres instance for live queries, updates, and operations on smaller data sizes, and S3 / parquet for bulk storage, data migrations, snapshots, batch operations, and larger analytics.
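As a toy illustration of the live layer (the table name and schema here are hypothetical), the sqlite path maps directly onto Python's built-in sqlite3 module:

```python
import sqlite3

# Live layer: a node-local sqlite index over the latest state.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE utxo (id TEXT PRIMARY KEY, address TEXT, amount INTEGER)"
)
conn.executemany("INSERT INTO utxo VALUES (?, ?, ?)", [
    ("h1", "addr_a", 50),
    ("h2", "addr_a", 25),
    ("h3", "addr_b", 10),
])
# Schema-level contract queries hit this live index directly...
balance = conn.execute(
    "SELECT SUM(amount) FROM utxo WHERE address = ?", ("addr_a",)
).fetchone()[0]
# ...while finalized batches would periodically be rolled into
# parquet chunks for the long-term bulk storage path.
```

The split mirrors the Postgres-plus-S3 pattern: sqlite handles point lookups and small aggregations on the leading edge, while parquet chunks serve snapshots and batch analytics.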
While direct memory access can be used for frequently accessed data, acting as a cache in front of sqlite-like operations, it does not scale properly to small workers that may have insufficient memory relative to total network needs to be used exclusively. It should, however, act as an optimization layer for caching frequently used datasets and chaining transforms together. This is similar in motivation to Spark's approach to data caching, except applied in a context where the tables are not explicitly known in advance or declared by the user, but rather determined through network behavior, similar to a more familiar cache mechanism. The most obvious example would be the price of Bitcoin, which might be used in a large number of decentralized derivatives contracts and carry a large number of references, making memory persistence ideal.
As sqlite is more than sufficiently optimized for local-machine access, it does not impose a substantial penalty requiring much thought in terms of caching mechanisms, since access is direct instead of across the network. This layer should be used internally for any schema-level supported contracts due to the substantially large standard 'latest state' size (compare Ethereum, where the 'latest' validation set is on the order of 500GB and thus out of memory). However, it is impossible to directly translate a conventional SQL-style query on the node, as we cannot make guarantees about the node persisting all of the data required to answer it, nor supplying sufficient processing power to execute the query in a reasonable amount of time.
This means there must be an intermediary layer, in the construction of a logical query plan, where SQL is parsed and translated into a set of instructions that can operate in a context of hash distance, non-bounded or 'fuzzy' partitions, imperfectly distributed or untrusted data, peer failures, and outright attacks. The most basic failure of a direct translation is a SELECT statement that would miss rows the node does not have stored. As increased adoption will push the standard EVM node size well past 500GB, especially with a lower-fee and better solution, and small workers are generally desirable to support, we should not expect a situation where a node CAN have full access to the row set. The solution is for the node to select what data it can, and apply any relevant optimizations from the query plan immediately, as close to the data as possible.
For instance, applying a FILTER operation would be considered a predicate pushdown -- we attempt to apply the immediate column filter BEFORE the node transmits data to any other nodes (as part of the remainder of query plan execution). The set of operations that can be accomplished within the scope of a small worker is not the same as the entire query plan, but many simple ones can be (including REDUCE operations as required by certain aggregations, for example). The reason to consider this category of solutions is that it is trivially easy to collect the full result set on another node by merging results from multiple peers in a single operation -- with no intermediary processing steps required.
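The pushdown-then-merge pattern can be sketched in two functions. Both names are hypothetical; the point is that filtering and projection run on the worker, and the collecting node only concatenates:

```python
def local_scan(rows, predicate, columns):
    """Worker side: apply the filter and projection to locally held
    rows BEFORE any data leaves the node (predicate pushdown)."""
    return [{c: r[c] for c in columns} for r in rows if predicate(r)]

def merge_partials(partials):
    """Collector side: merging already-filtered partial results from
    multiple peers is a single concatenation, no further processing."""
    merged = []
    for partial in partials:
        merged.extend(partial)
    return merged
```

Since each worker only ships rows that already pass the predicate, the bytes transferred scale with the result size rather than each node's stored partition.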
The more complicated cases are seen easily in GROUP BY operations, which are computationally expensive and can grow into very large operations requiring multiple tasks (or requests to different workers / a parallelization strategy to run in a feasible amount of time). In this case, we cannot expect a single worker to reasonably & directly execute a 'full' pass, nor can the results be easily collected from peers as in the SELECT case above -- especially as the keys are not known in advance. Here, the strategy is to first apply hash distance calculations to the keyspace to determine which nodes collect or aggregate which keys. Nodes self-nominate to collect the data associated with a particular key they have an incentive to operate on and for which the hash distance is within range, and then, as they collect results, apply any other FILTER or aggregation-reduce optimizations during the collection / shuffling process.
The key difficulty encountered here is explosive key sizes created by data skew, whether from unexpected data, user error, or a malicious attack. As we cannot know in advance what queries users may craft, or the distribution of unknown data, a conventional data skew solution cannot be applied relative to a specific job or operation, but must be handled for ALL cases as a protection mechanism. The simplest solution is dynamic key salting & dynamically allocated partitioning. As a node executes a groupby over a certain element of the keyspace, it will gradually accumulate more and more data, up to its maximum allowable task-relative buffer. When this happens, it will salt the keys dynamically as it runs out of space, re-shuffling any keys which do not fall within the appropriate salt bucket to other nodes. To guarantee an operation can be performed, there must be at least some nodes of minimum size to handle large task key spaces; otherwise a task may fail entirely at the network level and result in an invalid transaction. In general this is not an issue, as the resulting operation should be easily parallelizable across many transactions by the end-user, and to protect against validation attacks, collateral can be provided for large queries. Speculative executions to avoid collateral must be heavily limited and are only possible on smaller queries; this will be discussed in greater detail in the fee section.
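The two mechanisms above -- hash-distance key assignment and dynamic salting -- can be sketched as follows. The XOR distance metric and the salt-suffix format are illustrative assumptions, not the protocol's actual definitions:

```python
import hashlib

def hdist(a, b):
    """XOR distance between the sha256 hashes of two identifiers."""
    ha = int(hashlib.sha256(a.encode()).hexdigest(), 16)
    hb = int(hashlib.sha256(b.encode()).hexdigest(), 16)
    return ha ^ hb

def assign_key(key, node_ids):
    """Nodes self-nominate for the keys closest to their own id;
    every peer computes the same assignment independently."""
    return min(node_ids, key=lambda n: hdist(key, n))

def salt_key(key, bucket, n_buckets):
    """When a node's task buffer overflows on a hot key, split it into
    salted sub-keys so other nodes absorb part of the aggregation; the
    partial aggregates are merged in a follow-up reduce step."""
    return f"{key}#salt{bucket % n_buckets}"
```

A skewed key like a heavily traded asset id would overflow its assigned node's buffer, get salted into `n_buckets` sub-keys spread across other nodes, and be re-combined afterward -- the same shape as a two-stage skew-mitigating aggregation in Spark.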
Assuming the keys have been collected for a GROUP BY operation, any subsequent transforms can be applied on top, and as all peers will know the lightweight query plan, any dynamic adjustments can be broadcast to those next in the query chain, informing the collect side of a subsequent transform stage of the appropriate data locations. These two examples demonstrate the most common operation categories, and while more operations are necessary to support the full range of SQL & transformations, the process will be relatively similar. For instance, in an ORDER BY statement, while a sort operation needs to be performed, fundamentally that process is identical to nodes applying the same keyspace grouping solution, with an extra transformation step and additional metadata supplied to other nodes to determine the relative ordering of tasks across peers.
While the process described above handles the live update case across sqlite node-local indexes & tables, for bulk parquet storage there is another level of simplification that helps tremendously. Dealing with relatively 'live' data is much more difficult due to the substantial dynamism in the key-spaces / partition ids of interest, meaning every optimization must be applied on a row-level basis. When dealing with long-term persistent data, especially where the changeset can be isolated, operations can instead rely on the initial data loading layer being composed of relatively large chunks, for which hash distance can be applied not at the row level, but at the chunk level. This drastically simplifies many of the initial operations, especially collect phases.
While Z-ordering, multiple redundant data sets indexed with different keys, or more complicated indexes may be required in the long term against parquet tables, a proof of concept is more than sufficient to power most use cases, and can be optimized as needed, given this functionality is new to the space as a whole. The general long-term strategy would require indexes built above the parquet layer, handling operations similar to Delta Lake. While Delta, Iceberg, or another solution may prove useful as a component layer, it would have to be adapted in the same fashion described above for sqlite.
One note of interest for the future: it may be desirable to extend beyond the sqlite case to more complicated deployments associated with an individual peer, i.e. using functionality with a distributed model within the trust zone of a single peer -- a peer that can deploy an entire cluster of nodes for itself. While this is interesting for the scaling benefits and the ability to use existing scaling solutions, it tends to centralize processing power and defeats much of the purpose of p2p -- which is to re-use existing idle compute resources and encourage as many participants as possible -- for which sqlite shows greater promise in supporting individual small workers.
The ideal end-user API for structuring transaction logic comes in a form well established by many data processing engines / platforms, which originated in MapReduce architectures but has since expanded to streaming use cases: an FP operation layer on top of DataFrame / database-like queries. This has some basic commonality with Spark or Pandas -- but needs to handle more complicated cases like updates and code under different binaries. In general, the ideal API allows one to quickly construct a 'pipeline' expression which may invoke other pre-existing pipelines created by other users within the same network, under different binaries. This API also needs to cover cases similar to a workflow orchestration platform like Airflow, which is conventionally used for cross-language / cross-binary scheduling -- or more fine-grained cases covering individual requests.
In translating the desired logical flow into a transactional model, there is a desire both to support pipelines of operations across many transactions (chaining together potentially different users or developers) and pipelines within a particular transaction -- where one change is expressed as an operation potentially spanning multiple binaries, languages, or pipeline-style tasks. This means that an API should not map directly to the transaction schema, but rather determine and/or expose to the user the distinction between these cases.
The ideal API will cover the basic cases: how and when to load data; what filters operate upon these data sources; what functions to apply to this data in terms of native function names and built-in aggregation functions; and how to chain these to prior transactions. This is expressed as both standard SQL support and support for common dataframe operations like map, filter, group, sort, and reduce. As transactions already form a dependency tree, may themselves contain an intra-transaction pipeline dependency tree, and may have 'floating inputs' (the equivalent of a stream batch which has not yet been evaluated), the structure naturally translates into the model described above matching the 'live' UTXO set, with a modification for dependencies which are not consumed by the transaction directly (to handle cases like oracle data or generic data sources as a requirement).
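A minimal sketch of what such a fluent pipeline API might look like follows. The class and method names are hypothetical; the essential property is that each call appends a logical step rather than executing immediately:

```python
class Pipeline:
    """Fluent dataframe-style builder: steps accumulate lazily and
    only run() evaluates them, mirroring a streaming logical plan."""

    def __init__(self, source):
        self.steps = [("source", source)]

    def map(self, fn):
        self.steps.append(("map", fn))
        return self

    def filter(self, fn):
        self.steps.append(("filter", fn))
        return self

    def reduce(self, fn, init):
        self.steps.append(("reduce", (fn, init)))
        return self

    def run(self):
        _, rows = self.steps[0]
        acc = list(rows)
        for kind, arg in self.steps[1:]:
            if kind == "map":
                acc = [arg(r) for r in acc]
            elif kind == "filter":
                acc = [r for r in acc if arg(r)]
            elif kind == "reduce":
                fn, out = arg
                for r in acc:
                    out = fn(out, r)
                acc = out
        return acc
```

Because the step list is plain data until `run()`, it could in principle be serialized into the transaction schema, referenced by other users' pipelines, or dispatched to executors in other binaries.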
While typical APIs invoke closures to handle function serialization on operations like map, this only works within invocations that have binary compatibility. Rust has some support for this via serde_closure, but it is drastically limited compared to Python or Java. While there may be a good future solution involving type safety in terms of access -- through macros, facade types, or as applied to intra-binary pipelines -- in general we should expect some level of function invocation to require ABI / schema-level bindings, invoking operations that reference function names at a schema level rather than directly through closure references. That is much simpler to support first, with closures being an ideal to strive for.
The APIs described above generally struggle with the notion of a conventional update as applied to a typical instance like a Postgres database. This is one of the main requirements of an application developer, who needs to handle cases where users change live data values and expect consistency on other machines. To handle updates, the general strategy is similar to before: any UPDATE operation is translated into a transaction which consumes a prior state and results in a new value, as opposed to mutable state ala Postgres. Updates should be translated internally into index changes for the purposes of optimization, but only after the finalization / acceptance layer has completed. Any live update, by necessity, will require querying the deltas from the most recent raw transactions, as they naturally form new state information on the leading edge of live data. Each batch of consistent updates must gradually be rolled into index & persistent changes, but these operations can be performed as views on the existing data structures (at the point of state change).
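The consume-and-produce translation of an UPDATE can be sketched in a few lines. The function and its dict-based state set are illustrative only:

```python
def apply_update(state, old_id, new_id, new_value):
    """Translate an UPDATE into consuming the prior state entry and
    producing a replacement, rather than mutating in place. A second
    update against the same old_id is an immediate conflict."""
    if old_id not in state:
        raise KeyError("prior state already consumed or unknown")
    del state[old_id]          # consume the old state entry
    state[new_id] = new_value  # produce the replacement entry
    return state
```

This is exactly why the UTXO model detects conflicting updates for free: two UPDATEs against the same row necessarily contend for the same prior-state id, and the loser fails the membership check.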
One of the issues that immediately becomes apparent when attempting to build a pure dependency graph is how to deal with events that have not yet happened -- i.e. a future. The most basic example is that of a derivative contract, like a call or a put. To express this, we cannot declare an explicit dependency on data that is not yet available; instead we declare the dependency as some operation which, when run, would result in the production of that data. In the simplest case, with a single oracle address contract, it would be expressed as the latest state value associated with that address, given some predicate on time or update height -- or, in general, an executable code fragment resulting in the materialization of a fully formed data input.
Another way to think about this is to consider a sequence of contracts as a lazily evaluated pipeline. The actions or triggers that evaluate such a contract would, at the time of execution, resolve all floating dependencies to their actual values, and supply such information as part of validation. This pipeline abstraction then allows for chaining of operations based on unknown future values. This is quite similar to the way a streaming API works, where operations exist as a set of logical transformations, and are only applied to new data as it materializes into the system.
In general, the best way to handle triggering events is for them to occur only at the time of transaction validation. In other words, only some new update should ever trigger evaluation of a contract's dependency chain. This is by far the cleanest way to avoid excessively running unnecessary code or operations. However, in order to support more generic operations of the kind the EVM supports, such as deployment of new contracts, or time-based triggers that execute some operation at a specific time, there is a need for handling explicit actions built into the transaction schema. These should be considered much more costly operations, and hence require a higher fee. The cleanest approach is to isolate specific trigger types; for example, time-based triggers should require specific windows within which the code must be executed.
More complex cases are generally best avoided, or handled at the schema level, as arbitrary and excess actions are a major source of poor performance. Lazy evaluation and chaining of transformations should be viewed as the standard, since the key benefit of decentralized validation is not materializing results, but ensuring validation happens by many actors. The resulting materialization should always be available, or preferably performed by a client who has access to the transform chain and can re-create the result. Once validation has occurred, the client knows the result is correct even if it must perform calculations to materialize it.
While the executor and data query layer deal only with local operations, tasks, and peer relations, another question that must be answered under this model is how to represent user-specific tables & indexes. It is relatively easy to anticipate the needs of a common user with respect to currency transactions, as all operations fit under a single UTXO data model & corresponding table. But an arbitrary user attempting to build their own schema and transforms would not be able to optimize queries based purely upon distance to dependent transaction ancestors (or floating futures), as the data representation is not known: only the hash is available, not any of the underlying fields. A filter operation such as x > 5 cannot operate on a hash without an understanding of the underlying data field. Hence, indexes are required for custom data.
Two supported modes cover the majority of use cases: dynamically bound generic fields for multi-tenant style data tables, and conventional single-table schemas with dynamically loaded indexes. All index operations must be considered 'views' of a primary agreed-upon data structure, derived from the raw transaction schemas, and therefore must include state level information to handle inconsistencies. This follows a copy-on-write model, where newly applied updates consume prior state information; hence all queries must have some knowledge about the relative state of dependent operations to determine the unique potential views, OR return values that potentially span multiple states. As the 'liveness' of historical writes should not be considered a primary concern (any persistence of old states should be handled ONLY at the transaction level itself by embedding that data directly, rather than at the multiple-states-from-transaction-changes level), we do not need to maintain many historical state archives associated with each index / query. This means only the most recent changes or deltas need be persisted, keeping the overhead relatively small and performant.
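A minimal sketch of the delta-overlay idea, with assumed types: a query view is the last finalized index snapshot with the most recent unindexed deltas layered on top, so only the deltas need live persistence:

```rust
use std::collections::HashMap;

/// Overlay recent deltas on top of a finalized index snapshot.
/// Newer deltas shadow indexed values, copy-on-write style.
/// (Illustrative: keys and values stand in for real row ids / fields.)
fn view(base: &HashMap<u64, i64>, deltas: &[(u64, i64)]) -> HashMap<u64, i64> {
    let mut v = base.clone();
    for (key, value) in deltas {
        v.insert(*key, *value); // delta shadows the indexed value
    }
    v
}
```

Once a batch of deltas is finalized, it is rolled into the base snapshot and dropped from the overlay.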
The first table type, which is most conventionally expressed under the prior described data model, is that of a generically described schema table which can live alongside the primary UTXO table. As we do not know the data of interest or any schema information associated with it, we assume it to be homogeneously distributed and suitable for packing within the same table despite the underlying data types. Generic columns associated with many common data types should be available, with mappings available for column names, allowing persistent indexes to be constructed for arbitrary and commonly supported use cases. This is a generic and standard industry pattern for adapting table operations to handle many different types of data, and acts as a catch-all which covers most use cases and is easily optimizable and configurable on demand. This looks like a standard set of columns over each field type, column mappings, and corresponding indexes. Support for these operations is mostly consistent between the sqlite-like case, and the
For more complex use cases, or those which do not fit into this bulk generic model, such as heterogeneous data sets with more complex schema structures, nested schema structures, or otherwise complex index patterns, the
There are many projects for blockchain integration, like Rosetta, which simply assume a global block structure in order to work or synchronize properly. The reality of the current ecosystem is that, in most cases, it is impossible to build a new data structure and properly connect it to other blockchains without this fundamental assumption of a linear chain. This means there must be some mapping from the above model to a block structure.
The best way to handle this is to consider blocks as essentially deterministically formed data structures dependent on a peer's current view. As many nodes could form blocks with different cut-offs, the result is mostly arbitrary and should come down to a choice based on overall popularity of agreement, so long as no conflict is introduced. Assuming all conflicts have already been resolved by the model described above, then by definition all nodes agree on the set of transactions, and the only remaining issue is ordering relative to time-based cut-offs.
In order to solve this, the network itself is responsible for publishing a seed list, and those seeds and their transitive scores are used to form a deterministic function for determining time-based cut-offs for block ordering. This is only used as a tie-breaker for orderings that have no conflicts, and should be considered a reproducible deterministic step that follows AFTER the conflict resolution process has completed. No blocks should be formed in advance of that.
Additionally, the block formation process does not need to capture all data; it only needs to capture the data of interest for a particular integration, which can be as simple as blocks formed over a single currency. You might then have one set of blocks representing everything, another representing a single genesis currency address, and so on. A block structure should be considered a 'view' on the fundamental data structure, which is a dependency graph.
Many existing Bitcoin or other p2p network nodes connect primarily through TOR or other anonymous endpoints. While this is an extremely admirable goal, and provides enormous security benefits in that no one actor can easily tamper with the node using forms of physical duress, it has some drawbacks. The TOR network cannot realistically scale to the latency and bandwidth requirements demanded by a large distributed database and execution engine. Anonymously purchased hosts should be a priority for protecting the privacy of the network and ensuring operational upkeep, and can supply more than sufficient bandwidth and compute resources on par with EC2. VPNs or anonymous dedicated hosts can also act as bandwidth suppliers to home networks or otherwise idle resources to pick up extra compute. While stake can be used as a security mechanism with anonymous hosts, and may be used in part of the model as a level of distinction between nodes, the drawbacks listed earlier preclude it as a general purpose solution; it can only form one component of a proper trust score.
Unirep is one of the more well-developed attempts at building a mostly anonymous reputation system based on zk-proofs. The system is similar to the zk circuits used for fairly distributing anonymous airdrops to groups of participants with proof of single use, but far more sophisticated. It has great improvements, especially related to epoch logic, but it does not fully generalize to a more sophisticated transitive trust model. In theory, more of the circuits could be adapted to generically transitive, recommender-style trust scoring systems, but not much has yet been developed. The attestations in Unirep are single-edge only in their score assignments and cannot propagate such information beyond direct vote / score assignments per peer.
Solving zk-KYC is actually a relatively easy problem compared to the more generalized solutions. KYC proofs have been adapted in many cases to other ecosystems, namely Cardano, primarily for dealing with security tokens and other regulatory problems -- but this introduces an obvious privacy issue. The solution used for airdrops extends naturally to any key pair associated with ANY crypto network's on-chain KYC solution, allowing a single composite live list of KYC'd key pairs to be merged into a single zk proof -- demonstrating that a node's behavior is KYC compliant without necessarily revealing its identity.
Attempting to extend this further leads to a trilemma-style issue between generalizability, anonymity, and security. If you have a large number of KYC'd identities tied to key pairs attested by someone (Blockpass / Proofi / Cardano identities / etc.), then it is simple to produce a proof, but that proof only shows a certain level of membership; it gives no specific information about any scores associated with this large group. The group is too wide to be useful in specific situations or to provide more granularity. If you make the group too small, you reveal too much information about its members, defeating the purpose of anonymity and failing to cover subdivisions. And sacrificing any security here is generally impossible, as zk systems are not well adapted to probabilistic proofs. A more general purpose solution here is necessarily complicated.
The most generically described problem would be: given a score vector supplied by a user query (a vector of scores they assign to arbitrary keys whose identities they know), how would one generate a proof for a given anonymous node that it has transitive trust applied from the given input query -- transitive score calculation meaning that one node does not directly score another, but A <-> B <-> C where we want trust(A, C) -- without revealing the intermediary links or scores? This is quite complicated, and parts of it may not be fully possible (especially without revealing the input). Without explicitly defining exactly what the transitive edge consists of, it can abstractly be considered similar to prior work on web-of-trust systems; more loosely, it resembles how Twitter weighs followers-of-followers for recommending tweets, or any recommender-style model (users who like movies A and B in common probably like C). Suffice to say, translating that to ZK is an interesting area of future research, and can only be immediately accomplished in part.
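To make the non-ZK half of the problem concrete, here is an illustrative sketch of a transitive score calculation. The multiplicative path score and max-combination rule are assumptions chosen for illustration, not the network's actual scoring model:

```rust
use std::collections::HashMap;

// Adjacency list of direct trust scores: node -> [(neighbor, score)].
type Edges = HashMap<&'static str, Vec<(&'static str, f64)>>;

/// Depth-limited search for the strongest trust path from `from` to `to`.
/// Scores along a path multiply; parallel paths combine by taking the max.
fn trust(edges: &Edges, from: &str, to: &str, depth: u32) -> f64 {
    if from == to {
        return 1.0; // full trust in oneself
    }
    if depth == 0 {
        return 0.0; // path too long, no trust derived
    }
    edges.get(from).map_or(0.0, |nbrs| {
        nbrs.iter()
            .map(|(next, score)| score * trust(edges, next, to, depth - 1))
            .fold(0.0_f64, f64::max)
    })
}
```

The zk challenge described above is producing a proof of such a result without revealing the intermediate edges.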
Regardless of how the existing Unirep implementation treats the attestors in their model (decentralized zk social media voters), they are just key pairs. In the docs they discuss how they provide a proof relative to attestorId. This would necessarily leak information if the node_id keys were fixed (i.e. if we tried to apply this to fixed peer node keys, since you could simply query every known user id to find out who is attesting to what, revealing information about the attestors). To actually make use of this without revealing that, a conventional peer identity would have to become completely ephemeral (like users in their docs), which means re-writing the whole crypto-application stack to be essentially fully ZK -- causing other issues in and of itself. (And even with ephemeral users, it seems attestor ids can still be leaked by attacks under certain circumstances, though that would require more stringent verification and discussion of the exact implementation or adaptation of their logic depending on the use case.) Despite that, there are a couple of separations here: as the verifiers are independent of the balance amounts, some of this could be adapted without transitioning the entire model to ZK (for instance in the transaction schema) -- which would at least avoid the issues Monero faces with AML regulations.
This same attestation approach has many variations. The attestors can be publicly known (opening them to retaliation attacks) and later weighted by another factor per user, or they can be wrapped in a zk merkle-style group proof, in which case weights cannot be applied -- meaning that to hide them you would need multiple attestation groups to 'mimic' weights, with some fuzziness over the number of groups designed to make it difficult to reveal the source of a score in the event of a query. If the attestors are public, it would be simpler for them to just publish a per-node proof (pre-authorized -- to prevent leakage, only some number of proof requests are allowed within time t) which reveals that some (epoch, node) has a minimum rep (obfuscating exactly which nodes are trusted.)
Having only a single attestation group poses the problems described above in and of itself. If the group is too large -- as in a generalized KYC solution -- it provides very little customizability. (For instance, I may only care about attestations from some subset of the nodes because of their distances or my specific relative beliefs about them.) If it is too small, it reveals too much information about the members, and effectively becomes the equivalent of a node managing zk proofs individually.
Most likely, the easiest intermediary solution for supporting anonymous verifiers is a basic zk combined group built on top of another pre-existing solution. Many such groups can be formed and weighted at a group level. Hiding the exact composition of members (to prevent attacks based on set intersection across groups) is tricky but doable. The secondary portion of this is direct score assignments per peer. Imagine a Map<PeerId, Score> published by each node, but zk-encoded to hide the exact references. Any node with knowledge of the full network would in essence be able to query all possible node IDs out of this information, making it useless if published exclusively individually. But if mixed WITHIN a group, it provides some level of anonymity -- to the extent that the group refuses to reveal any of the subcomponents. This would provide some level of deniability within the group, demonstrating an externally trusted score to an outsider without revealing the vouching party.
Other solutions are attempting to extend this line of thinking to pre-existing groups and social proofs, such as Interep, which can integrate with existing centralized proving platforms, but these have not yet developed a great level of sophistication either.
All of these types of solutions are worth supporting to promote privacy and anonymity, but they cannot yet provide sufficient levels of security compared to known nodes in their current state of development, and should be considered as a component of a more complicated model.
The chosen data model is UTXO (unspent transaction outputs), mainly because this offers optimizations around transaction output hashes. A hash of a given output can only be used once, and any contention over this hash is therefore guaranteed proof of an attempted double spend. The other useful advantage of choosing this over the account model is that no management state is required, as this follows the 'copy-on-write' philosophy associated with concurrency optimizations. The data exists purely as a dependency graph, where every given transaction requires only knowledge of its own ancestry to validate. This also allows a cleaner hash-based fuzzy partitioning strategy to be implemented.
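A minimal illustration of why conflict detection is cheap under this model: spend-once semantics reduce double-spend detection to a set membership check on output hashes (the string ids here are stand-ins for real UtxoId hashes):

```rust
use std::collections::HashSet;

/// Attempt to spend a batch of inputs against the set of already-spent
/// UtxoIds. Any id seen a second time is returned as a conflict --
/// guaranteed proof of an attempted double spend.
fn detect_conflicts<'a>(
    spent: &mut HashSet<&'a str>,
    inputs: &[&'a str],
) -> Vec<&'a str> {
    let mut conflicts = Vec::new();
    for id in inputs {
        if !spent.insert(*id) {
            conflicts.push(*id); // second use of the same UtxoId
        }
    }
    conflicts
}
```

No additional state beyond the direct dependencies is needed to propagate or detect the conflict.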
Every node maintains its own buffer of all data it has received and validated. Upon validation, it signs the receipt of that message in a PENDING state. If no conflicts are detected after a configurable time period, the node signs the receipt as FINALIZED and gossips this information as well. Buffers are continually gossiped at short time intervals and used as part of consensus in the event of a disagreement. The observations are never strictly necessary when no disagreement or double spend is detected, but are persisted so external receivers can collect them as proof of acceptance. If a merchant wishes to know whether a transaction has been received, they need only collect the proofs associated with a given transaction hash to confirm that the entire known network has finalized it. So long as they have enough trust in a wide enough group of nodes, they will know the transaction is considered valid. Once accepted, no transactions can be reversed by any node, as any conflict settlement must happen in advance.
The observation buffer is simply a collection of hashes of given messages, each with its timestamp upon receipt and the state it was observed in (PENDING or FINALIZED.) Observations can be of transactions or of other observations. This nested structure allows observations to be confirmed by other nodes, which can prevent certain types of attacks: if a malicious node sends false observation data, another node can prove that it received false observation data.
When flushed periodically, this buffer is turned into a Merkle tree, and a corresponding proof is assembled per transaction and per N-th order observation buffer. When resolving a transaction conflict, all proofs must be collected from all known observation buffers in order to determine the aggregate trust.
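A sketch of the tree-folding shape used to produce the root, using Rust's std hasher purely as a stand-in for a real cryptographic hash like SHA-256 (it is NOT collision-resistant and would not be used in practice):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Combine two child hashes into a parent hash.
/// DefaultHasher stands in for a cryptographic hash; the folding
/// shape is the same either way.
fn combine(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

/// Fold a flat list of observation-entry hashes up into a Merkle root.
fn merkle_root(mut leaves: Vec<u64>) -> u64 {
    assert!(!leaves.is_empty());
    while leaves.len() > 1 {
        if leaves.len() % 2 == 1 {
            leaves.push(*leaves.last().unwrap()); // duplicate the odd leaf
        }
        leaves = leaves.chunks(2).map(|p| combine(p[0], p[1])).collect();
    }
    leaves[0]
}
```

A per-transaction proof is then the path of sibling hashes from that leaf up to the signed root.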
The correct representation of a peer id is not an individual key pair, or a hash of a key pair, but a collection of key pairs. This reflects the fact that one person may run multiple nodes, something not typically reflected in most p2p software. Each key pair must be distinguishable in case of key / machine compromise, so they should be distinctly different. The actions of a peer are then composable as signatures originating from many different machines, and a peer id should be pre-generated covering all key pairs of interest so that merkle proofs can be used to show membership.
Key functionality offered by peer ids:
- Rotation: a peer id needs a master key (or keys), ideally a cold one, responsible for major updates and rotations to the new key pairs.
- Small Kademlia hash distance PoW: generation of a peer id should allow a proof of work, in order to prove some amount of runtime was used to generate it. The cost of this should be minuscule, but sufficient to prevent large scale generation of new ids. The amount spent should act as a heuristic demonstrating how much effort was expended during generation. Since this process may run on a cold computer, the key itself may embed some proof of work in the actual hash of the public key, but it is also acceptable to run a secondary nonce outside of the key generation process so that it can run separately from the cold high-entropy process.
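A hedged sketch of the secondary-nonce variant, with an illustrative difficulty check on leading zero bits (the std hasher again stands in for a real cryptographic hash, and the difficulty parameter is invented for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn digest(public_key: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (public_key, nonce).hash(&mut h);
    h.finish()
}

/// Search for a nonce whose hash with the public key has at least
/// `difficulty_bits` leading zero bits -- expected cost ~2^difficulty_bits.
/// Runs separately from the cold key generation process.
fn pow_nonce(public_key: &str, difficulty_bits: u32) -> u64 {
    let mut nonce = 0u64;
    loop {
        if digest(public_key, nonce).leading_zeros() >= difficulty_bits {
            return nonce;
        }
        nonce += 1;
    }
}

/// Cheap verification: one hash, regardless of the work spent searching.
fn verify(public_key: &str, nonce: u64, difficulty_bits: u32) -> bool {
    digest(public_key, nonce).leading_zeros() >= difficulty_bits
}
```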
The basic transaction schema consists primarily of inputs and outputs with additional metadata / transaction options / optional code execution parameters attached alongside it. Inputs are the core mechanism for representing dependencies, and should represent not only regular dependencies (in the form of the parent state UTXO representing an amount stored at an address,) but additionally dependencies in the form of data required for contract execution -- which should NOT be considered as 'consumable' inputs as they do not get deleted -- and also dependencies which do not yet exist (stream declaration syntax.)
The basic type of Input consists of a UtxoId (a transaction hash and an output index, corresponding to the index of the output in the array of data associated with a prior transaction), a proof, representing a proof of ownership associated with the prior reference, and a productId, representing a particular genesis address and network identifier corresponding to a unique origin of an asset (the equivalent of an ERC-20 identifier.) This is a sufficient amount of information to describe all currency style transactions.
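The input type described above might be sketched roughly as follows. Field names and byte-vector representations are illustrative stand-ins, not the canonical wire schema:

```rust
/// A reference to a prior transaction output: the producing transaction's
/// hash plus an index into its output array.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct UtxoId {
    transaction_hash: [u8; 32],
    output_index: u32,
}

/// The basic input: a UtxoId, an ownership proof over the prior reference,
/// and a productId identifying the asset's genesis address and network
/// (the equivalent of an ERC-20 identifier).
#[derive(Debug, Clone)]
struct Input {
    utxo_id: UtxoId,
    proof: Option<Vec<u8>>, // None for non-consumed dependencies (see below)
    product_id: Vec<u8>,
}
```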
The first variation upon this is a boolean identifier representing whether an input is consumable. If present, a proof is not required, as the input will not be deleted from the live set of data. Instead, this is an indicator to load or chain this transaction off of another prior transaction. This is the mechanism by which data is passed from one source to an executor function, which is declared either at the transaction or output level (or both), and it indicates to a validator what data must be loaded in order to satisfy the executor contract.
The second variation is that of a FloatingUtxoId. This represents several cases of interest. The first and most generic is where some function or operation must be executed to materialize a UtxoId; this can be run at validation time. As an example, consider 'the Nth live value associated with address X sorted by hash'. This is a way of representing a 'stream' or 'subscription' to a particular address -- but potentially a limited view of that stream (clipped to the most recent value associated with evaluation time.) In order to prevent ambiguity, actions or materializations which occur through a dependent child must supply the actual value alongside it, so that it can be validated as legitimate. This prevents time-based variations in validator requirements.
The simple way to think about it is, if we're describing some value which is not specified anywhere, but can be verified as being correct after the fact, there may be multiple values that potentially seem valid depending on minor variations in network state. The final 'action' in a chained composition must fix that issue, by explicitly declaring which of these is correct -- so that it can disambiguate between multiple potential values and instruct validators which data to load so they arrive at consistent results.
As a stream based on an address is such a common operation, it should not need to be specified by an executor contract explicitly (any expression would necessarily require a common reference to a data loading API anyway through a contract SDK, as the address would be inaccessible directly since the executor shares no memory with node address storage.) The better way to handle this is at the schema level, as part of a common declarative syntax that connects to the data loading API -- referencing a particular value associated with an address (i.e. a SQL query as described earlier.)
In fact, the correct way to think about any FloatingUtxoId is to consider it similar to the transform / data loading API in general, with a subset of transforms restricted to those which provide inputs. This makes it a less general operation than a full transform, in the sense that its only purpose is to supply specific inputs that can be used for more general transform logic.
An output consists of: an address (typically corresponding to a hash of a proof key used by a future input consumer); a productId (corresponding to a genesis address / asset type / network declaration, same as with inputs); a counter-party proof (optionally used for transaction confirmation by the receiver); and a data field used to embed both common schema-supported operations (like a currency amount) and generic custom data, in the form of generically typed fields and/or external data hash references with full schemas and reference information -- for instance to reference a parquet file or a schema associated with a SQL output. Finally, an OutputContract can be specified, which embeds information about code-execution-specific logic associated with arbitrary transformations applied to yield validation criteria.
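A rough sketch of this output type, again with illustrative field names and byte-vector stand-ins for the real types:

```rust
/// Illustrative output schema. Real fields would be typed protobuf-style
/// structures rather than raw byte vectors.
#[derive(Debug, Clone)]
struct Output {
    address: Vec<u8>,                     // hash of the future spender's proof key
    product_id: Vec<u8>,                  // genesis address / asset type / network
    counter_party_proof: Option<Vec<u8>>, // optional receiver confirmation
    data: Option<Vec<u8>>,                // amount, generic fields, or external refs
    contract: Option<Vec<u8>>,            // serialized OutputContract, if any
}
```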
Output states are not calculated directly for updates, but only verified after the fact on the next input. So, if a contract acts as a state evolution mechanism, the result of execution is only whether or not it is valid, and that can only be determined on the subsequent child transaction -- this is consistent with the model for currency as well.
Transactions also support top-level contracts, to distinguish transformations that apply directly to a given output (which may represent multiple parties) from those that apply to everything within the transaction. For some operations this distinction is irrelevant, but it becomes important in certain cases. All other transaction-related metadata is bundled at the top level, including generically supported options that embed 'standard' contract types at the schema level, such as time lock windows on outputs and minimum delays associated with spending. These operations are so common that, instead of requiring them to be embedded in a smart contract or custom generic code operation, they should be usable merely through schema changes. This lowers the barrier of entry to using options and prevents users from making errors when copy-pasting common template operations.
Each observation is primarily identified by a single merkle root corresponding to a tree built from all observations within it. This value is linked to the previous reference, and is otherwise very similar to a block structure. The variation in name is mostly to distinguish this as a relative data structure. Each observation belongs specifically to the peer who originates it, and cannot be produced by any other peer. It is a unique data structure, shared only for the purposes of verifying proofs associated with irreversibility, and does not act as a central reference except for synchronization purposes.
Alongside the merkle root, there is a reference to the previous observation and a proof from the key associated with the peer. This is used to verify that the observation is valid and that the peer is its originator. Each member of the observation consists of an ObservationMetadata object, representing a hash with a specified type, an observation timestamp, and a state (PENDING or ACCEPTED.)
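These observation structures might be sketched as follows, with hashes abbreviated to u64 stand-ins and names chosen for illustration rather than matching the canonical schema:

```rust
#[derive(Debug, Clone, PartialEq)]
enum ObservationState {
    Pending,
    Accepted,
}

/// One entry in a peer's observation buffer.
#[derive(Debug, Clone)]
struct ObservationMetadata {
    observed_hash: u64,      // hash of the transaction or nested observation
    timestamp_millis: u64,   // time of receipt
    state: ObservationState, // PENDING or ACCEPTED
}

/// The flushed, signed buffer: a merkle root over the entries,
/// a link to this peer's prior observation, and the peer's signature.
#[derive(Debug, Clone)]
struct Observation {
    merkle_root: u64,
    previous: Option<u64>, // None only for the peer's first observation
    proof: Vec<u8>,        // signature over the root by the peer's key
}
```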
Observation edges are a derived data structure which is useful for end-user proofs. They represent a merkle proof associated with a particular transaction representing the peer who signed it -- and carry the information necessary to prove irreversibility. They act as a link between observation hashes and transaction hashes.
Peer metadata is treated generically as another transaction, except as a singly linked value with no external dependencies. This acts much like a single-source oracle, and allows the same data structure to be re-used for all internal purposes. The structure can evolve over time to handle updates to specific sections of peer metadata, but the overhead associated with updates is relatively small, so a full refresh of the entire peer metadata is relatively cheap.
All events are maintained and monitored throughout their lifecycle by an async thread which monitors and outputs status information, and optionally returns a result if an event was triggered by an API query. This means that, for the lifecycle of one operation, there should be only a single thread handling all management of that particular request. This simplifies all logical handling, even if distinct submodules are responsible for different portions of the lifecycle.
Transactions can originate from several places. The most common is from other peers, either from a subscription to another peer's events or through random gossip. Next most frequent is from an external public request, if the node is responsible for serving up any p2p specific APIs, such as a public REST API. Finally, they can originate from the internal API and represent transactions associated with that node. For those, they should be handled as a special case, in order to maintain tracking information associated with the addresses, hashes, and UTXOs associated with them for caching of future information and alerting purposes.
Once a transaction has been received, it will be handled by a unique async thread call for its entire lifecycle -- this means all operations related to it up to termination will be logically grouped together. The first step with a transaction is to attempt to resolve its references as part of validation. Before validating a transaction, we may not have all the data necessary to even execute it or verify it references a live accepted state. Because a given peer cannot make guarantees about its own storage of all data, since its own capacity will almost definitely be less than the total capacity of the network, it must rely on hash distance to resolve the data against peers who have it stored and can be checked for acceptance.
Each resolution must check multiple peers, both in the event that a peer does not have the data stored, and in the event the peer maliciously claims something has been accepted. The threat from the latter case is limited, as the worst case scenario is that it pollutes existing chain data with a hash reference, which can be marked as TOMBSTONE (the data itself can be removed.) By attempting to include at least one high-trust peer in this operation, we can further reduce this risk.
After resolution, we attempt to validate the transaction and mark it as being processed within the mem-pool (with a latency bypass based on trustworthiness of the source -- to handle this, NodeMetadata can sign for an arbitrary list of addresses.) This is the first point at which we can potentially find a conflict. Because each transaction is handled by its own unique request processor, we must set up a mechanism for processors to communicate with each other on the detection of a conflict. Channels are constructed as appropriate based on UtxoIds potentially in contention with each other. Each UtxoId or dependency within a transaction (if it is being consumed by that transaction) can only legitimately be used by a single transaction -- but based on the order of arrival alone we don't necessarily know which transaction is most appropriate to accept, so we must persist (within reason) all the potential contenders during this process. In the event of a flood of conflicts, there would have to be some rate-limiting associated with those UtxoIds, delaying acceptance until the network has been allowed more time to consume them (a strategy useful in other areas as well.)
The first conflict-free transaction a node observes will be signed as PENDING by the node (after some mem-pool buffer delay, which gives time to hear from other peers and optionally adopt their first picks instead). Afterwards, peer data is collected until we have either heard from a sufficient fraction of the network or some minimum delay has elapsed. If we have seen no conflicts, the transaction becomes ACCEPTED. If there are conflicts, they are resolved based on data collected from other peers, with determinism applied to cause the minimum disruption -- optionally delaying acceptance in the event of large disagreements. This information is communicated to the observation buffer in both the PENDING and ACCEPTED stages, and after acceptance the transaction is written and committed to the database as a set of new live UTXOs. Finally, any results are collected if required for a response to an ongoing API call.
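The PENDING-to-ACCEPTED lifecycle can be condensed into a small state machine. The state names mirror the text; the quorum threshold and the simplified conflict handling are assumptions for illustration only (the real resolution path weighs peer trust rather than rejecting outright).

```python
# Hedged sketch of the transaction lifecycle described above.

from enum import Enum

class TxState(Enum):
    OBSERVED = 0
    PENDING = 1
    ACCEPTED = 2
    REJECTED = 3

def advance(state, saw_conflict, peer_fraction_heard, min_delay_elapsed,
            quorum=0.8):
    # First conflict-free observation is signed PENDING; after hearing
    # from a sufficient fraction of peers (or waiting out the minimum
    # delay) without conflicts, it becomes ACCEPTED. Conflict handling
    # is simplified to REJECTED here.
    if state is TxState.OBSERVED:
        return TxState.REJECTED if saw_conflict else TxState.PENDING
    if state is TxState.PENDING:
        if saw_conflict:
            return TxState.REJECTED
        if peer_fraction_heard >= quorum or min_delay_elapsed:
            return TxState.ACCEPTED
    return state

s = advance(TxState.OBSERVED, False, 0.0, False)
s = advance(s, False, 0.9, False)
print(s)  # TxState.ACCEPTED
```

Only after reaching ACCEPTED would the commit to the database occur, consistent with the no-rollback property described earlier.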
The solution offered by most networks with regard to latency is to rotate 'leaders' which coordinate new network events and communicate between themselves for consensus rounds. This is a fairly complex process, and it ultimately leads to the same security issues as before -- namely, if the facilitators make a mistake, we still need acknowledgments from the rest of the network to determine actual security. This means that even where we rely on coordinators / facilitators to propose new data, we still need to know the rest of the network has accepted it, to guarantee we're not operating on the wrong side of a security fork and that our particular transaction of interest has been legitimately accepted by the network.
In this way, claims of superior latency should be regarded with extreme suspicion and distrust: fundamentally, anyone can analytically determine that the minimum latency with maximal security is how fast you can query the results of a given operation from every peer on the network. This can be expressed as the network latency distribution convolved with the gossip propagation (hop count) distribution. Additionally, it is undesirable to rely solely on nodes that are effectively supercomputers or giant servers, as much of the point of p2p systems is re-using small computers (see BitTorrent). Security is maximized when as many peers as possible can participate, which is heavily restricted by minimum node requirements.
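The latency bound above can be illustrated with a tiny Monte Carlo: end-to-end confirmation time is the number of gossip hops combined with per-hop network latency. The distributions here (exponential latency, a small set of hop counts) are purely illustrative assumptions, not measured network parameters.

```python
# Illustrative Monte Carlo of the latency claim: total confirmation time
# is the per-hop latency distribution composed over the gossip hop count
# distribution. All distribution choices are assumptions for demonstration.

import random
random.seed(0)

def sample_confirmation_ms(mean_latency=50.0, hop_counts=(2, 3, 4)):
    hops = random.choice(hop_counts)   # gossip propagation depth
    return sum(random.expovariate(1.0 / mean_latency) for _ in range(hops))

samples = [sample_confirmation_ms() for _ in range(10_000)]
avg = sum(samples) / len(samples)
print(round(avg, 1))  # roughly mean_hops * mean_latency = 3 * 50 = 150 ms
```

No protocol trick removes this floor: it is set entirely by how fast responses from the relevant peers can physically arrive.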
Nonetheless, there are use cases where low latency may be required. The way to translate this to a relative context is again to consider it in a data-local context. The responsibility for assigning peer ids that have greater 'importance' in determining transaction acceptance status (for latency purposes, to avoid having to query the entire network) should rest with the transaction originator. To guarantee the originator is not manipulating this selection (and to prove so to receivers), VRF-selected peer subsets can be used as well. The actual operations of the network otherwise remain identical -- however, nodes will preferentially break ties / apply determinism functions to prioritize the opinions of these selected 'facilitators'. In this way, we gain some of the benefits of a low-latency solution for those who only wish to rely on a validator subset, while still allowing the remainder of the network to participate at its own desired speed.
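The verifiable-subset idea can be sketched with a deterministic hash-based selection. This is a stand-in, not the VRF construction itself: a real VRF additionally produces a proof under the originator's key, whereas this sketch only shows why a receiver can recompute and check the subset.

```python
# Sketch of selecting a facilitator subset deterministically from a
# transaction hash (hash-based stand-in for the VRF selection mentioned
# in the text; names and parameters are illustrative).

import hashlib

def facilitator_subset(tx_hash: bytes, peer_ids: list[str], k: int) -> list[str]:
    # Rank peers by a hash of (tx_hash, peer_id). Anyone can recompute
    # the same subset, so the originator cannot bias it after the fact.
    def score(pid: str) -> bytes:
        return hashlib.sha256(tx_hash + pid.encode()).digest()
    return sorted(peer_ids, key=score)[:k]

peers = [f"peer_{i}" for i in range(10)]
subset = facilitator_subset(b"example-tx", peers, k=3)
print(subset)
```

Because the selection is a pure function of the transaction hash and peer set, a receiver can verify the originator did not cherry-pick friendly facilitators.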
Fees are important for proving real-world value and dispelling the Ponzi scheme argument. There is enormous demand to use validator / p2p networks, as seen in the vast fees collected by many networks. While it can be argued these networks are overcharging to extreme degrees, the demand for this solution in terms of market value has been proven. There is real utility captured by the existing fee structures, and it can best be viewed as similar to the demand for public notarization. As fees correspond to an influx of value into a network (as opposed to pure speculative interest), they demonstrate the legitimacy of the network's actual value.
The original purpose of fees is to incentivize validators to participate in the network. In the PoW model this is relatively easy, as the fee is paid directly to whichever validator extends the current chain. As we transition to a model with no globally preferred state hash, fees must be considered localized. Token burning can sidestep this localization, but it is less clean: it complicates parity checks and generally obscures the active total supply. Instead, the fee is a separate output on the main transaction.
This model encourages viewing the network as a set of independent auditors or notaries. When you wish to publish a transaction, rather than appealing to a single 'network state' as in Ethereum or another conventional crypto and waiting for it to be accepted by the entire network, you would instead view this as a problem of paying a decentralized group of notaries, relative to your and your end receivers' view of the reliability of those notaries. Here again is an opportunity for pooling: fees paid to one notary should reflect their relative views of the reliability of all the peer notaries they know, and so should be rebalanced among peers regularly to encourage large-scale acceptance. The end goal is to create a fair marketplace for validation -- one with no restrictive criteria determining who counts as a trustworthy validator (as is required for minimum-stake validation or proof-of-authority networks).
The determination of where fees go is then essentially down to the users themselves. Those who initiate transactions, and by proxy those who are willing to receive them, both have the ability to set scoring information indicating which validators they are confident in. If a receiver requires a certain group of validators to have witnessed the transaction, the transmitter should take that into account, and also essentially nominate their own through their scores. This is the raw input that forms the overall determination of fee distribution within the network. Much as in a conventional network, where low-fee transactions tend to be ignored, if you cannot interest sufficient validators in your transaction, most will not consider it accepted by the majority of the network.
The exact balancing required to handle fee redistribution is complicated, as for smaller fees it is undesirable to create an excessive number of very small UTXOs for each split of each fee. But a large enough pool, gradually balanced across different validators according to overall usage, is easy enough to handle. Nodes that seize portions of fees rather than redistributing them fairly are subject to the same sanctions as any other deposit failure. To avoid excess complication in this rebalancing, fees are handled on the same schedule (monthly) as rewards.
To encourage network adoption and simplicity, fees can be waived in certain circumstances. The reason fees are needed at all is to stop attacks, and to price the difficulty of validating a transaction relative to the amount of validation required by a receiver to consider it accepted by enough peers with enough trust. For simple currency transactions, the easiest simplification is to reduce fees when the balance on a UTXO is sufficiently large; this increases the expense of an attack, and fees can be reduced in proportion to the balance size. The next easiest is a delay on usage: time-lock periods between UTXO shifts provide an easy mechanism as well. Pre-validated addresses that correspond to known nodes are also easy to reduce fees for, as malicious behavior can simply result in banning the entire node so long as a particular address is associated with it. With these simple heuristics, fees can be eliminated in a large number of common cases -- but not for completely anonymous, small, frequent transactions (which most resemble attacks). Another aspect of fees is that if the intended receiver is willing to accept 'less' validation by a smaller number of peers, this should be adjustable based on the pool recipient and scores. Larger fees should allow more validators to process the data, and ensure higher speed and quicker spread. This too contributes to reducing overall fees in certain cases.
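The waiver heuristics above condense into a simple predicate. The threshold value here is invented purely for illustration and is not a protocol constant.

```python
# Sketch of the fee-waiver heuristics described above (thresholds are
# illustrative assumptions, not protocol values).

def fee_waived(balance: int, time_locked: bool, known_node_address: bool,
               large_balance_threshold: int = 10_000) -> bool:
    # Large balances raise the cost of spam, time locks slow UTXO churn,
    # and known-node addresses can simply be banned on misbehavior --
    # each case can justify waiving the fee.
    return (balance >= large_balance_threshold
            or time_locked
            or known_node_address)

print(fee_waived(50_000, False, False))  # True: large balance
print(fee_waived(10, False, False))      # False: anonymous, small, unlocked
```

The remaining fee-charged case is exactly the one the text identifies: anonymous, small, frequent transactions, which most resemble attacks.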
The structure for determining the difficulty of a complicated contract, and the fees associated with it, is as follows: the onus is essentially on the sender, who evaluates the sequence required to validate the transaction, including all dependencies. They will be able to calculate the initial cost, and nominate several validators to provide a second set of estimates. All transactions, before submission, should be assumed to run through all dependencies and to include the costs associated with fetching and calculating all lazily evaluated dependencies plus the final transaction. This model removes much of the requirement to track operations, as validator execution should be considered somewhat speculative.
To deal with potential attack vectors here, the model should revolve around off-chain collateral or 'pre-paid' fee options for purely anonymous transactions. This allows the fee to be paid in advance of the transaction if the speculative execution cost is too high. The problem with an in-schema collateral operation, as in Cardano transactions (or EVM gas costs per account), is that it requires an exact determination of falsehood and introduces a potential incentive to seize collateral. So long as there is a validator associated with introducing or first signing a speculative transaction, and there are rebalancing agreements for fees with the rest of the network, malicious activity can be stemmed substantially, allowing a relative model for collateral.
As for incentives to validate data: nodes should attempt to validate anything within the resource budget they are willing to provide, but should prioritize validating the transactions which reward them most. This makes prioritization a node-local problem -- which allows applications or contracts to be treated in a fashion more similar to pub-sub. Nodes will only participate to the extent that they have an incentive to, with excess resources devoted to anything within reason. If there is sufficient capacity and the node is running, there is no reason to reject a transaction the node is uninterested in. This is the key to building a 'marketplace' for validation.
The primary model requires localized scores assigned and signed by every node. Each node needs to configure trust scores for the peers it knows in order to inform large-scale network behavior. The standard model for peer-to-peer trust is typically EigenTrust -- although there are deficiencies in that model we'll attempt to address. EigenTrust was originally based on PageRank, which calculates the importance of a web page based on which other pages reference it. While these effects are useful in modeling distributed trust, they are not primarily designed around the issues of building a large network where a single node failure can have serious implications. This is especially amplified in the case of transactions, where a trust failure represents a real loss of assets.
One of the core problems with EigenTrust is its attempt to build a global trust vector. While this has uses in calculating and propagating reliability scores (for which it is conventionally applied), the more useful quantity is a network score vector unique (or relative) to each node. For this, recommender-style models, graph features, and other operations apply.
One of the first expansions here is the notion of Directed Acyclic Transitive Trust (DATT). From first principles, if every node knew every other node on the network, we would be able to assign scores directly and not require any form of inference. Because that is impractical, we must ask: starting from the peers we know directly, what is the correct way to extend scores to those further out, and when is it appropriate to trust transitive relations?
One way we might explore outwards is to consider expanding spheres of influence from our immediate neighbors. All nodes we know as direct neighbors are rank 1; nodes they know but we don't form the set of rank-2 neighbors, and so on. Because trust is essentially transitive, we would never allow the higher ranks to influence the scores of the lower ranks, as any score from a higher rank is a dependent calculation from the lower rank from the perspective of an individual node. This is the most conservative way to expand trust outwards to unknown peers.
The most basic transitive trust score, then, is the product of all scores up to rank i. So if we know neighbor_a at rank 1 with score 0.9, and neighbor_a knows neighbor_b (rank 2 relative to us) with score 0.9, then we take the product 0.9 * 0.9 and assign a transitive score of 0.81 -- any further neighbors would be scored 0.81 * (next score). To deal with multiple overlapping score assignments at each rank, we average across them.
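The product-and-average rule can be sketched directly. The graph encoding and function names here are assumptions for illustration; the sketch performs a rank-by-rank breadth-first expansion, multiplying scores along each path and averaging overlapping paths at the same rank.

```python
# Sketch of the basic DATT calculation: transitive score is the product
# of direct scores along the path, with overlapping assignments at the
# same rank averaged. Names and graph encoding are illustrative.

def transitive_scores(direct: dict[str, dict[str, float]], origin: str) -> dict[str, float]:
    # direct[a][b] = score node a assigns directly to node b.
    scores = dict(direct.get(origin, {}))   # rank-1 neighbors
    frontier = list(scores)
    seen = {origin, *frontier}
    while frontier:
        nxt: dict[str, list[float]] = {}
        for node in frontier:
            for peer, s in direct.get(node, {}).items():
                if peer not in seen:
                    nxt.setdefault(peer, []).append(scores[node] * s)
        for peer, vals in nxt.items():
            scores[peer] = sum(vals) / len(vals)  # average overlapping paths
        seen |= nxt.keys()
        frontier = list(nxt)
    return scores

graph = {"us": {"a": 0.9}, "a": {"b": 0.9}}
result = transitive_scores(graph, "us")
print({k: round(v, 4) for k, v in result.items()})  # {'a': 0.9, 'b': 0.81}
```

Marking already-seen nodes enforces the directed-acyclic property: a higher rank can never feed back into the score of a lower rank.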
EigenTrust does not allow negative scores, which is an important issue when there are serious penalties associated with failures in the network. This matters less in a simple file-sharing application -- where failure merely means re-routing -- but it matters greatly where a failure entails substantial loss. EigenTrust score assignments essentially converge to 0 in the event of distrust, which does not properly model the underlying phenomenon; it is equivalent to Facebook's 'likes' functionality in that respect. To represent negative scores, we persist the DATT scores and assign the negative values only up to the rank associated with the first negative score. After all, it would be nonsensical to continue the transitive calculation for a node with a negative score (since we do not trust its transitive scores anyway).
It may be asked why we do not include the scores of higher ranks in the calculation of lower ranks. In general, you might imagine this information would be useful: if a low-rank neighbor is not well trusted, but higher ranks trust them greatly, would it not be useful to know this? EigenTrust typically exploits this information by allowing trust to 'spread' or diffuse through different peers without taking exact rankings into account -- an effect which should be counteracted. It is not entirely useless information, but we want to start from first principles, building up the most conservative information first.
This is exactly one of the potential attack vectors that must be defended against. If higher ranks could influence lower ranks, a single outbound neighbor link at Nth order could invite a giant group of fake users, who could all score up one of the lower-ranking values and thus act maliciously. For further protection, it is useful to introduce a decay parameter as well: at each transitive neighbor level, we decay by some factor to reduce trust even further. This is essentially equivalent to a capping or max function on the associated score, and may be implemented the same way to prevent lower scores from diminishing too quickly.
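The two guards just described (per-rank decay, and terminating transitive expansion at the first negative score) can be sketched on a single path. The decay value and function names are illustrative assumptions, not protocol parameters.

```python
# Sketch of path scoring with per-rank decay and negative-score cutoff.

DECAY = 0.8  # assumed per-rank decay parameter (illustrative)

def extend_path(path_scores: list[float]) -> float:
    # Multiply scores along a path, applying decay at each hop beyond
    # the first, and stopping at the first negative score: we keep the
    # negative value but never trust that node's own outward scores.
    total = 1.0
    for rank, s in enumerate(path_scores, start=1):
        total *= s * (DECAY ** (rank - 1))
        if s < 0:
            break  # distrusted node: do not extend transitively
    return total

print(round(extend_path([0.9, 0.9]), 4))        # 0.9 * (0.9 * 0.8) = 0.648
print(round(extend_path([0.9, -0.5, 0.9]), 4))  # stops at the negative score
```

The decay ensures a long chain of perfectly scored strangers still carries far less weight than a direct neighbor, which is what blunts the fake-user amplification attack described above.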
So far, we have only discussed relative scoring models. While these are useful for handling disagreements and per-node operations, there are cases where it is useful to gain the benefits of models that compound to a single vector. There may be insufficient peer information for a node that has just joined the network, or that is otherwise sparsely connected. In this case, while it is still useful to apply relative operations as a prefilter, we may want the advantageous properties of EigenTrust to make use of additional information not available in a purely local context.
This brings us to a more pressing and interesting problem with decentralization in general. When you join most networks, you must first assume that you are joining the correct network. There are generally seed lists which point you to the network, run from websites that you implicitly trust. This is never made explicit by these networks; it is simply assumed that there are enough trustworthy people representing the network to promote its well-being. We think instead that this should be made explicit. There is a bias factor for centrality in a network, typically relative to its original creators, maintainers, and users -- which helps new users know they have actually joined and connected with the proper network.
This bias factor does not need to be substantial in order to gain most of its benefits, and indeed it should never be abused -- any central bias introduces a security flaw whereby it can be hacked to subvert the network itself -- but it should exist. The bias factor should also be heavily limited, to prevent gaming of rewards and to encourage network growth. The initial bias should be represented primarily by those developing and supporting the protocol, who have the most interest in seeing it succeed -- and in essence, as applied, it is the exact equivalent of a pre-weighted seed list. It should always be capable of being overridden by an end user, but supplied by default. It provides additional context information which can be supplied to other models to benefit their security.
The trust model will initially focus on DATT as a filter mechanism along with the biases, followed by an averaging process using this as the initialization vector, followed by EigenTrust or an EigenTrust-like equivalent -- producing relative, non-global outputs but with some of EigenTrust's benefits for distant scores. Eventually, a replacement for EigenTrust will be implemented to better allow for negative rankings and to fix some of the associated issues, but that is not required until larger scales. See the max-plus algebra equivalent of EigenTrust for more information and examples.
EigenTrust, while useful, is not a great long-term trust model. There have been several major attempts to fix it, but starting from a perspective of pre-filtering data with real scores, more conservative DATT scores, and pre-trusted peers is a simple way to alleviate most of these problems in the short term. In the long term, a more comprehensive approach is needed. EigenTrust does not allow negative scores -- much like Facebook likes in that sense -- and so misses important information. The addition of a new peer also lowers the overall trust of pre-existing peers, which does not follow logically.
Typical reward systems are designed to amplify fees. The original motivation for most networks was to encourage adoption, subsidize validation, and encourage decentralization -- although in many cases this has devolved into excessively speculative, predatory dividends. The main problem with paying rewards based on stake -- which is essentially equivalent to dividends, besides the potential legal issues of bordering on an actual security -- is that it essentially inflates away small holders. While there is some argument that active network development costs may need some dividend-like mechanism to support ongoing maintenance and work, there is no guarantee those dividends go towards it. The flaw is most obvious in extreme cases: when dividends are extremely high, or the assets are majority-controlled by a small number of users, those users will quickly devalue everyone else's holdings in proportion to how much they hold. When kept to reasonable amounts, the flaw is less clear, but it remains a property more desirable in a company than in something that should be treated as a public notarization service. Even under a 'company' model, dividends typically inhibit growth when taken to extremes.
Either way, that model essentially requires a global state, as collateral amounts must be locked and can be taken away by a central agreement source -- which removes local behavior. As trust is a more general model, some of the same effects can be achieved through collateral contracts with specific node subsets, but these should be considered a proxy for raw peer scoring.
In transitioning to a relative model, the original motivation largely remains the same. Amplifying the fees should subsidize and encourage adoption and the running of validators. Decentralization should be encouraged by allowing any honest validator to claim rewards. A new goal, as well, should be to support the actual development of the network. This is typically left out of most models, but should be encouraged to prevent stagnancy. As development is extremely difficult to quantify automatically, it reduces to an oracle problem, and can be handled by the same scoring mechanism, with local weighting determining it. Again, some of this can be achieved through a seed list.
The main difficulty encountered here -- as opposed to dealing purely with fees -- is that attacks on rewards are more prominent than attacks on fees. With fees, any attack is essentially neutered by other nodes having no incentive to pay attention to the transaction relative to other transactions -- making it essentially impossible to drain value, and making it easy to deter spam by making it costly. With rewards, if they act as an amplification system on top of fees, the most obvious attack is to simply send giant fees exclusively to yourself, or to a network of nodes you control. If the system amplified this, and you were doing no honest work, you would drain value from the system away from its intended purpose.
The first defense against this, naturally, is to cap the reward not as an amplification but at a per-transaction level (i.e., ignore any fee weightings in the overall distribution). This would simply push attackers to generate as many small-fee transactions as possible -- or to use only a subset of the network. The ideal defense is to make it prohibitively expensive in fees to capture a significant fraction of the rewards. This can be achieved, but requires some thought about the nature of the actual problem.
This problem shares a great degree of similarity with evaluating the actual nature of work towards establishing a project. While idealistically, purely decentralized networks have no artificially imposed point of reference, they do have an origin point. Most projects start with a small vision, a small number of people, and an actual 'centralized' origin -- even Bitcoin. Attempts to 'change' the software definitions of Bitcoin, for instance -- despite its decentralized philosophies and ideologies -- are still governed by this 'point of origin' principle, adhering to tenets espoused by its creators. It is very difficult, for instance, to change basic operations such as block sizes and timings, or to add new features arbitrarily and without extreme review. Despite this, it remains one of the most 'decentralized' projects due to PoW (relative to PoS), despite all the problems associated with PoW.
While lacking a central committee, executive, or single leader who sets its direction, there is still a bias in almost all decentralized projects towards those who contribute to and review work associated with them. There are known security reviewers, people who know details about the project, peer review processes, and so on, which all exhibit this effect. In essence there is a 'cohesion' principle which, translated into peer oracle scores, would be expressed as peers naturally 'pruning' distant connections on the graph of scores. For example, imagine a random unknown security reviewer claims there's a flaw in the code and everything must be changed around it: they'd likely be completely ignored. But the opposite mechanism -- encouraging new development from any honest developer -- is also heavily encouraged, so long as it falls within the scope of the reasonably agreed-upon development. The proper way to express this, in graph terms, is to attempt to extend the scores to the widest reasonable network possible. It is similar to the notions of graph centrality, spanning trees, and graph cutting.
While a sophisticated approach might involve a more complex model, most of these behaviors of interest can be captured with far less complexity in the short term -- still defending against attacks while providing a reasonable path towards a better model. Many projects have attempted to capture this phenomenon through the use of large pre-mines. Some of this is legitimately intended to support development, and does provide a needed incentive to build the vision. Unfortunately, it is also typically abused rather extremely, and should be viewed as a potential attack vector to be defended against.
Pre-mines, aside from being used purely as a tool for fraud, can create an extremely dangerous situation in the form of excessive volatility and centralization. They can frequently create death spirals, where fears of liquidation can actually destroy a project entirely, as large holders accelerate any natural decline out of panic. Despite this, they do have positive benefits in allowing capital to flow into work and development. One of the simpler ways to fix these issues transparently is time-lock contracts on any early rewards: there should be an added incentive for any holder willing to accept a locked contract as a reward, with greater time periods offering greater incentive. A better model is far more complicated, essentially requiring more complex contracts governing exact 'withdrawal' schedules and so forth. Escrows or similarly behaving peers can act as a check and balance, providing oracle-level information to these contracts as another layer. In essence, these phenomena encourage categorizing this problem more in the light of a 'decentralized trust', the goal of which is to fairly grow and develop the network.
As a primary goal is to avoid a global state while still capturing some of this positive 'point of origin' effect, rewards should be thought of as a 'conserved future quantity': outward flows from the point of origin also carry the future flow of rewards outward to other nodes. In essence, nodes that receive initial rewards are also receiving the 'potential' associated with distributing later rewards, subject to revocation under attack conditions (similar to the effect of banning a developer who introduces malware).
The initial model for rewards is primarily based around this 'seed' mechanism, calculated with transitive outer scores and an averaging process. It is designed to be extended in the future to a more complicated model, and is subject to change and adaptation.
The total coin supply is set to 1e9. The total divisions per coin (equivalent to the number of satoshis per bitcoin) is also set to 1e9 (for integration consistency, exact divisions are required), giving a maximum supply of 1e18 units, within range of an int64 max value. Both of these are primarily designed to make calculations of total supply and market cap easier. Reward epochs are divided into 1e3 total periods, each with a distribution of 1e6 coins. This differs from a typical deflationary model: it is purely linear, attempting to encourage more growth in the later phases of the project and to provide for ongoing maintenance. These reward cycles are distributed monthly, at a time easy to synchronize against: the peak of the full moon. This gives a reward schedule of roughly 83 years (1,000 monthly cycles). The motivation for the long duration is to give sufficient time to detect malicious behavior and prevent gamification and time-warp attacks. There is a lagging period of 1 cycle before rewards are paid, which gives sufficient time to calculate and settle all fees and disagreements.
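The supply arithmetic above can be checked explicitly (all constants are taken from the text; only the variable names are invented):

```python
# The supply arithmetic from the text, verified explicitly.

TOTAL_COINS = int(1e9)          # total coin supply
DIVISIONS_PER_COIN = int(1e9)   # subdivisions per coin ("satoshis")
EPOCHS = int(1e3)               # reward periods
COINS_PER_EPOCH = int(1e6)      # coins distributed per period

max_units = TOTAL_COINS * DIVISIONS_PER_COIN
print(max_units)                                 # 1000000000000000000 = 1e18
print(max_units <= 2**63 - 1)                    # fits in int64: True
print(EPOCHS * COINS_PER_EPOCH == TOTAL_COINS)   # epochs pay out full supply: True
print(round(EPOCHS / 12, 1))                     # monthly cycles: 83.3 years
```

Note the int64 bound leaves headroom: the maximum unit count of 1e18 is comfortably below 2^63 - 1 (about 9.22e18).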
Cryptocurrencies and open-source projects in general all have external dependencies which are extremely valuable in providing the functionality required to launch the project, yet most funding models specifically ignore those who came before them, or whom they depend on, out of short-sightedness and greed. The ideal model would distribute value proportionally to the projects acting as dependencies, including the language itself. Ideally, this contribution factor and its weighting proportions should not be set by the project itself, to prevent collusion or dishonesty, but rather by the market and independent auditors. The exact mechanism will likely be simplistic at first, and greatly dominated by a bias on behalf of the originating project, but should be designed to eventually defer to market forces in the overall design of localized fee structures determining 'notary value'.
This acknowledgement is typically ignored by most projects. It should be designed so as to give back to open source in general, crediting those whose accomplishments led to the development of the active project, as well as providing them expandable income streams to focus additional development towards targeted use cases. Such a model provides growth to both parties in a mutually beneficial relationship.