Redgold Documentation

Addendum

Treasury Management & Reserves

The treasury is a core component of the network. It is responsible for managing the network's reserves and is governed by a decentralized DAO built around the active seeds. A treasury smoothing function on the AMM for the native token is important so that the network can manage its own reserves and absorb volatility.

The core seeds operate a native AMM to provide liquidity for the network. In the zero-knowledge situation, user deposits and swaps are automatically routed to the provided seeds. Scores emitted from the origin seeds act both as contributions to the AMM liquidity composition and as weight in the treasury voting mechanism. By default, the treasury is defined as the composition of scores originating from the seeds, since they act as the origin point. Fees accrued from the native AMM flow automatically to the treasury allocations, in amounts corresponding to the liquidity contributed by the treasury.
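As a rough illustration of this pro-rata accrual, the sketch below splits a batch of AMM fees across contributors in proportion to their score-weighted liquidity. The names (LiquidityShare, distribute_fees) are assumptions for illustration, not the actual Redgold API.

```rust
/// Illustrative only: a seed's score acts as its share of AMM liquidity.
struct LiquidityShare {
    seed_id: String,
    score: f64,
}

/// Distribute a batch of accrued AMM fees pro-rata to score contributions.
/// Returns (seed_id, fee_amount) pairs; rounding and remainders are ignored.
fn distribute_fees(shares: &[LiquidityShare], total_fees: f64) -> Vec<(String, f64)> {
    let total_score: f64 = shares.iter().map(|s| s.score).sum();
    if total_score <= 0.0 {
        return Vec::new();
    }
    shares
        .iter()
        .map(|s| (s.seed_id.clone(), total_fees * s.score / total_score))
        .collect()
}

fn main() {
    let shares = vec![
        LiquidityShare { seed_id: "seed-a".into(), score: 3.0 },
        LiquidityShare { seed_id: "seed-b".into(), score: 1.0 },
    ];
    // 100 units of fees: seed-a receives 75, seed-b receives 25.
    for (id, fee) in distribute_fees(&shares, 100.0) {
        println!("{id}: {fee}");
    }
}
```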

Issues with Proof of Stake

Another huge issue with proof-of-stake systems is that they do not really provide a replacement for conventional legal liability, as the suffering party has no recourse against a majority decision.

In a conventional trust relationship, if you make a contract with someone and they violate that contract (the equivalent of the double-spend issue, i.e. someone reverses a transaction), the merchant has lost, say, $100. Ethereum's stake does not actually cover this situation AT ALL, because it is not a real guarantee. The merchant has been told the network 'accepted' something, but the decision was later reversed; the stake does not go towards covering the cost entirely, as the network does not provide any guarantees.

Really, Bitcoin doesn't solve this problem much either, because it is fundamentally reversible under certain conditions. Instead, this model should really be considered as 'insuring' against malfeasance or reversals: covering the actual damages from the treasury in the event of a failure. Collateral is actually somewhat useful in these circumstances, but it is not a guarantee of anything.
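To make the insurance framing concrete, a minimal sketch of how a reversal claim might be covered: collateral is applied first, and the treasury tops up the shortfall. The function and amounts here are hypothetical, not a specification of the actual mechanism.

```rust
/// Hypothetical payout for a reversed transaction: the staked collateral
/// covers damages first, and the treasury covers any shortfall it can.
fn cover_reversal(damages: f64, collateral: f64, treasury_reserve: f64) -> (f64, f64) {
    let from_collateral = damages.min(collateral);
    let from_treasury = (damages - from_collateral).min(treasury_reserve);
    (from_collateral, from_treasury)
}

fn main() {
    // A merchant loses 100 units; only 40 units of collateral exist.
    let (c, t) = cover_reversal(100.0, 40.0, 10_000.0);
    println!("collateral pays {c}, treasury pays {t}"); // 40, 60
}
```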

Decentralized KYC

Many attempts at on-chain KYC exist; all of them should be integrated with and used. Custodial pools should be distinguished as those which require KYC and those which do not, with the decision of whether to require it left to the user. As discussed earlier, zk-KYC is the ultimate ideal to aspire to, but any integration should be supported.

The initial approach should rely upon existing external on-chain KYC services, with Redgold simply acting as an oracle that provides this additional information to enrich a conventional Transaction. As usual, the decision of whether or not to use this information is left to the user and/or contract specifier as they determine.
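One way to picture the oracle role is as optional enrichment attached alongside a conventional transaction, which a pool or contract may consult or ignore. The types below (KycAttestation, EnrichedTransaction) are hypothetical illustrations, not Redgold's actual schema.

```rust
/// Hypothetical attestation relayed from an external on-chain KYC provider.
struct KycAttestation {
    provider: String,        // the external on-chain KYC service
    subject_address: String, // the address the attestation refers to
    verified: bool,
    attested_at: u64,        // unix timestamp of the provider's attestation
}

/// A conventional transaction optionally enriched with oracle-provided KYC data.
struct EnrichedTransaction {
    transaction_hash: String,
    kyc: Option<KycAttestation>,
}

/// A contract or custodial pool decides for itself whether to require KYC.
fn accept_into_pool(tx: &EnrichedTransaction, pool_requires_kyc: bool) -> bool {
    match (&tx.kyc, pool_requires_kyc) {
        (_, false) => true,                // pool does not require KYC
        (Some(att), true) => att.verified, // require a positive attestation
        (None, true) => false,             // no attestation available
    }
}

fn main() {
    let tx = EnrichedTransaction {
        transaction_hash: "abc123".into(),
        kyc: Some(KycAttestation {
            provider: "external-kyc".into(),
            subject_address: "addr1".into(),
            verified: true,
            attested_at: 1_700_000_000,
        }),
    };
    println!("accepted: {}", accept_into_pool(&tx, true)); // true
}
```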

Approach towards smart contracts

Smart contracts should be viewed as a plug-and-play ecosystem component. The desired languages for writing trading strategies or other more sophisticated models are languages like Python. This means the executors should be isolated from the contracts themselves. The same model used for deposit and ETF products can be applied to arbitrary code execution. For this model to function, contract code must be audited and accepted into the network, so it does not act as a 'universal computer' à la Ethereum; but as long as enough audited contract code exists, a user should be able to easily compose their desired behavior. Contracts via deposits can then make use of an underlying asset, which most crypto contracts prevent by requiring you to remain within the network's native asset.
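A rough way to picture the executor/contract split is a host-side trait that runs only audited contract code against a deposited underlying asset. The trait and type names below are assumptions for illustration, not the actual executor API.

```rust
/// Hypothetical handle to contract code that has passed audit and been
/// accepted into the network; only such code is executable in this model.
struct AuditedContract {
    code_hash: String,
}

/// Minimal sketch of a contract call against a deposited underlying asset.
struct ContractCall<'a> {
    contract: &'a AuditedContract,
    deposited_asset: String, // e.g. an external asset held via deposit
    input: Vec<u8>,
}

/// The executor is isolated from the contract: it only sees audited code
/// hashes and opaque inputs/outputs, regardless of the source language
/// (Python, etc.) the strategy was authored in.
trait Executor {
    fn execute(&self, call: &ContractCall) -> Result<Vec<u8>, String>;
}

struct NoopExecutor;

impl Executor for NoopExecutor {
    fn execute(&self, call: &ContractCall) -> Result<Vec<u8>, String> {
        // A real executor would sandbox the audited code; here we just echo.
        Ok(call.input.clone())
    }
}

fn main() {
    let contract = AuditedContract { code_hash: "audited-strategy-v1".into() };
    let call = ContractCall {
        contract: &contract,
        deposited_asset: "BTC".into(),
        input: vec![1, 2, 3],
    };
    println!("{:?}", NoopExecutor.execute(&call));
}
```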

This approach allows us to plug in arbitrary connections to other integration platforms and smart contract platforms, so that we may re-use the functionality others are building.

Why not just use blocks?

Even if you have blocks, you are still assuming that someone will store them as a proof for re-validation. While they are useful for eventual consistency on a total state set, their main benefit is partitioning large data sets, in which case another mechanism can be chosen, such as partitioning by average accepted time. With no proof of work, there is no issue with forging blocks, since the security mechanism is applied per data item and is essentially flexible per data item. Archival state sets are still useful, but only as an optimization, and they do not need to cover all data in order to provide that optimization.

Additionally, proof of work or proof of useful work can always be added to the archival set for more security over a longer time period. Another core problem with blocks is that you are still relying on some type of authority to determine what the 'best' block is, in terms of the earlier-listed issues, i.e. software definitions, latest block hash, etc.

It may be convenient to use blocks for synchronization and the like, but these are all easily solvable problems with other strategies. The core prioritization of the network should focus EXCLUSIVELY on disagreements; most of the time, for most operations, you will never really need to deal with them.

Implementation of Trading Strategies

Ideally, ETFs are not the only mechanism by which deposits and intra-platform asset-holding contracts can occur. By using the executor/contract layer, it should be straightforward to implement custom logic for handling assets, such that trading strategies can be implemented through multi-sig operations. This allows you to build a trading strategy and launch it as an ETF automatically.
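As a sketch of how a strategy could sit behind the executor/contract layer, the hypothetical trait below emits rebalancing orders that would then be carried out as multi-sig operations; none of these names come from the actual codebase.

```rust
/// Hypothetical order emitted by a strategy; executing it on an external
/// chain would be carried out as a multi-sig operation by the peers.
enum Order {
    Buy { asset: String, amount: f64 },
    Sell { asset: String, amount: f64 },
}

/// A trading strategy authored as audited contract code. Launching it as an
/// ETF-like product means exposing its holdings as a depositable composition.
trait TradingStrategy {
    fn rebalance(&self, holdings: &[(String, f64)]) -> Vec<Order>;
}

/// Toy example: keep a 50/50 value split between two assets.
struct FiftyFifty;

impl TradingStrategy for FiftyFifty {
    fn rebalance(&self, holdings: &[(String, f64)]) -> Vec<Order> {
        if holdings.len() != 2 {
            return vec![];
        }
        let total: f64 = holdings.iter().map(|(_, v)| v).sum();
        let target = total / 2.0;
        holdings
            .iter()
            .map(|(asset, value)| {
                if *value > target {
                    Order::Sell { asset: asset.clone(), amount: value - target }
                } else {
                    Order::Buy { asset: asset.clone(), amount: target - value }
                }
            })
            .collect()
    }
}

fn main() {
    let holdings = vec![("BTC".to_string(), 70.0), ("ETH".to_string(), 30.0)];
    for order in FiftyFifty.rebalance(&holdings) {
        match order {
            Order::Buy { asset, amount } => println!("buy {amount} {asset}"),
            Order::Sell { asset, amount } => println!("sell {amount} {asset}"),
        }
    }
}
```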

State Machines

"A blockchain runtime is a state machine. It has some internal state, and state transition function that allows it to transition from its current state to a future state. In most runtimes there are states that have valid transitions to multiple future states, but a single transition must be selected."

This isn't actually true, or reflective of the underlying phenomenon. A blockchain does not have to be represented as a state machine any more than a conventional webserver does. It CAN be represented as a state machine, in order to capture security from PoW over the entire hash chain, but if you remove that assumption it no longer has to be represented this way. Blockchains are conventionally treated this way because they are easier to reason about when there is essentially the equivalent of a GIL (global interpreter lock), a single thread of execution performing state updates. This is a terrible design for a distributed system and for scalability, and it is an artifact of the security model in PoW. To be clear, it is absolutely required in a PoW system, due to the hash linkages, but it can be avoided or upgraded under different security models.

The fundamental architecture of most transactions is nothing but dependencies. If you can eliminate 'global' dependencies, which in the vast majority of cases you can, you can implement more fine-grained state transitions. The proper analogy here is a mutex around an entire hashmap versus a concurrent hashmap providing a mutex per key. Locks around the global state during transitions simply do not make sense when they are not strictly required.
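The analogy can be made literal in a few lines: a single lock around the whole map serializes every update, while sharding the map gives a lock per key range so unrelated updates proceed concurrently. This is a plain std-only sketch of the concurrency pattern, not network code.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

/// Global lock: every state transition, even on unrelated keys, serializes here.
struct GlobalState {
    inner: Mutex<HashMap<String, u64>>,
}

/// Per-shard locks: updates on keys that hash to different shards proceed
/// concurrently, the way transactions with disjoint dependencies need no
/// global ordering.
struct ShardedState {
    shards: Vec<Mutex<HashMap<String, u64>>>,
}

impl ShardedState {
    fn new(n: usize) -> Self {
        Self { shards: (0..n).map(|_| Mutex::new(HashMap::new())).collect() }
    }

    fn update(&self, key: &str, value: u64) {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        let idx = (h.finish() as usize) % self.shards.len();
        self.shards[idx].lock().unwrap().insert(key.to_string(), value);
    }
}

fn main() {
    // The "mutex around a hashmap" model: one lock for all state.
    let global = GlobalState { inner: Mutex::new(HashMap::new()) };
    global.inner.lock().unwrap().insert("utxo-1".into(), 1);

    // The "concurrent hashmap" model: a lock per shard of keys.
    let sharded = ShardedState::new(16);
    sharded.update("utxo-1", 1);
    sharded.update("utxo-2", 2);
}
```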

One transaction depends on other transactions that were previously approved. No additional state is required for this, only agreement on which additions to the graph are valid. State transitions imply a unified state function; instead, the state is localized relative to the dependencies.
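Put differently, a transaction carries its own dependencies, and acceptance is agreement that those dependency edges are valid additions to the graph. The struct below is only an illustrative shape, not the actual transaction format.

```rust
use std::collections::HashSet;

/// Illustrative only: a transaction's "state" is localized to the prior
/// transactions it consumes, not a global machine state.
struct Transaction {
    hash: String,
    /// Hashes of previously approved transactions this one depends on.
    depends_on: Vec<String>,
}

/// Accepting a transaction only requires agreement that its dependencies are
/// already approved and unspent, i.e. that it is a valid addition to the graph.
fn is_valid_addition(tx: &Transaction, approved_unspent: &HashSet<String>) -> bool {
    tx.depends_on.iter().all(|d| approved_unspent.contains(d))
}

fn main() {
    let approved: HashSet<String> = ["a".to_string(), "b".to_string()].into_iter().collect();
    let tx = Transaction { hash: "c".into(), depends_on: vec!["a".into(), "b".into()] };
    println!("valid: {}", is_valid_addition(&tx, &approved)); // true
}
```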

Why Rust?

P2P protocols shouldn't really belong exclusively to one language. Libp2p has demonstrated the value of building implementations in many languages, and the network should welcome any and all languages that support the protocol. The long-term goal is to subdivide all of these responsibilities into proper, separate decentralized services, e.g. a service that provides an audited, highly scalable database layer.

Useful PoW / PoUW

Transitioning away from wasteful PoW makes sense for many reasons, but its core principle, offering an expensive protection against attacks, does make sense; it just doesn't produce anything useful. Instead, the same effect can be captured by performing some valuable work. The main problem with this, of course, is that it can be faked or repeated by a copycat attack if the work is independent of the active chain.

The only way, then, to capture some 'useful' work is to restrict it to a task that can depend on a hash definition without materially changing either the verification or the output, AND which ideally can be verified more quickly than it can be calculated. An example of a simple 'useful' proof of work might be to take a recent hash reference (of your own node's observation chain) and supply it as a seed in a process which requires randomness, for instance instantiating an ML model with parameters randomly seeded from that hash and training it. While a 'true' verification would require completely re-building the model and would cost as much as the original work, a partial verification could use samples from a hidden hold-out set and check that the accuracy is consistent.
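A minimal sketch, under heavy assumptions, of the shape this takes: the worker derives a deterministic seed from a recent observation-chain hash and trains against it; the verifier re-derives the seed to check chain binding and spot-checks the submitted model on a hidden hold-out set rather than re-training. The hashing, model, and prediction logic here are placeholders, not the real scheme.

```rust
/// Derive a deterministic u64 seed from a recent observation-chain hash
/// (here just the first 8 bytes; a real scheme would bind the full digest).
fn seed_from_hash(hash: &[u8]) -> u64 {
    let mut bytes = [0u8; 8];
    let n = hash.len().min(8);
    bytes[..n].copy_from_slice(&hash[..n]);
    u64::from_le_bytes(bytes)
}

/// Stand-in for the model a worker trains after seeding its initialization
/// from the chain; `params` are placeholders for real learned weights.
struct SubmittedModel {
    claimed_seed: u64,
    params: Vec<f64>,
}

/// A single labelled example from the verifier's hidden hold-out set.
struct HoldoutExample {
    features: Vec<f64>,
    label: f64,
}

/// Placeholder prediction: a dot product, standing in for real inference.
fn predict(model: &SubmittedModel, features: &[f64]) -> f64 {
    model.params.iter().zip(features).map(|(p, x)| p * x).sum()
}

/// Partial verification: (1) the claimed seed must match the recent chain
/// hash, so the work is bound to this chain; (2) accuracy on the hidden
/// hold-out must clear a threshold. This is far cheaper than re-training.
fn verify(
    model: &SubmittedModel,
    recent_hash: &[u8],
    holdout: &[HoldoutExample],
    tolerance: f64,
    min_accuracy: f64,
) -> bool {
    if model.claimed_seed != seed_from_hash(recent_hash) {
        return false;
    }
    let correct = holdout
        .iter()
        .filter(|ex| (predict(model, &ex.features) - ex.label).abs() <= tolerance)
        .count();
    (correct as f64) / (holdout.len() as f64) >= min_accuracy
}

fn main() {
    let recent_hash = [0x6fu8, 0x1a, 0xc4, 0x09, 0x33, 0x7e, 0x55, 0x21];
    let model = SubmittedModel {
        claimed_seed: seed_from_hash(&recent_hash),
        params: vec![1.0, 0.0],
    };
    let holdout = vec![HoldoutExample { features: vec![2.0, 5.0], label: 2.0 }];
    println!("accepted: {}", verify(&model, &recent_hash, &holdout, 0.1, 0.9));
}
```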

Producing these models would then provide a 'useful' proof of work with a low verification cost. This model does have some flaws, though: the hold-out set would have to remain secret in order to act as a useful verification mechanism, and it would have to be specific to the model of interest.

This work can then be used for both localized and archival sets, to add additional security either in bulk, for optimizing archival reads and verifications, or on local observations, to provide additional security and guarantees.