Redgold Documentation

FAQ

Why bake a product into a platform?

Regarding ETF-like functionality: given the number of crypto-related platforms, it's difficult to attract users and applications to adopt something new. Even with a better design or offering, it's still a challenge to grow from scratch without a compelling use case. Furthermore, a defined objective helps focus platform development on a specific goal and gives users a clear path to follow.

This area is particularly underserved by existing platforms. This sort of functionality is difficult to achieve out of the box in other crypto systems, and it's a desirable use case for many people. The objective highlights many problems with the core designs of most existing platforms, and it allows the major components of the solution to be built before being extended to other use cases. While larger platforms have the resources to focus entirely on the platform itself, starting from scratch means designing around a use case and building outward from there.

Additionally, and perhaps an even stronger motivation, there is already a strange separation between platform-level features (for example, native ETH transactions) and application-level features (for example, an ERC-20 transaction). This separation exists for an obvious reason: many ERC-20s need substantial customization, and it's not feasible to bake all of that into the platform.

However, there are many common use cases that CAN be supported at the platform level, and doing so from a common unified schema makes everyone's lives easier in terms of maintaining common standards and interop. While ERC-20 does draw from a common interface, it doesn't offer as much power as a schema-level solution. Here, we simply add additional fields to the standard transaction type to indicate a productId (the ERC-20 Id equivalent), as sketched below. With additional fields for more executable definitions, this allows all currencies to follow common schema patterns and makes integrations much simpler. For the same reason, many common products should actually be built directly into the platform, when they are important enough to justify this level of standardization.
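As a rough illustration only (the type and field names here are hypothetical, not Redgold's actual schema), a platform-level token identifier can be pictured as an optional field on the standard output type:

```rust
// Hypothetical sketch -- not Redgold's actual schema. The point is that
// every currency shares one transaction type, distinguished by an
// optional identifier rather than a per-token contract.

/// Platform-native product identifier (the ERC-20 Id equivalent).
#[derive(Debug, PartialEq)]
struct ProductId(Vec<u8>);

#[allow(dead_code)]
struct Output {
    address: Vec<u8>,
    amount: u64,
    /// None => the native currency; Some(id) => a platform-level product.
    product_id: Option<ProductId>,
    /// Optional executable definition for more customized behavior.
    executable: Option<Vec<u8>>,
}

fn main() {
    let native = Output {
        address: vec![0u8; 32],
        amount: 100,
        product_id: None,
        executable: None,
    };
    let token = Output {
        address: vec![0u8; 32],
        amount: 100,
        product_id: Some(ProductId(vec![1, 2, 3])),
        executable: None,
    };
    // One schema-level check replaces per-token contract logic.
    println!("same currency: {}", native.product_id == token.product_id);
}
```

A single schema-level comparison like this is what lets integrations treat all currencies uniformly, instead of shipping custom logic per token.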

Why are there mentions of trust? I thought trust-less systems are better?

There's no such thing as a trust-less system; that is primarily a marketing term rather than a technical reality. Even the origin of the term in commercial network security and military applications centers on finer-grained permissions, access control, and user logins, rather than on actually eliminating trust. It's far better to be absolutely explicit about what is trusted and how much, and to build a real model that minimizes the sources of trust, in order to provide transparent and clear security guarantees.

Obfuscating hidden layers of trust is dangerous, as it leads people to trust things they shouldn't, and one of the most fundamental principles of security is that security through obscurity is not security. Other sections explain in more detail the hidden sources of trust that seep into many existing crypto systems and platforms; we are trying to make all of them absolutely explicit and clear, and to take a data-driven approach to the human inputs and ratings associated with trust information, in order to provide a stronger guarantee of security in critically weak areas. Much of the design around model calculation is re-applied from research on related problems, mostly graph embeddings and recommendation systems used in commercial industry, repurposed towards the problem of trust and reputation in the context of conflicting values on a peer-to-peer network. While many reputation systems exist in p2p networks for reliability use cases (and a few minor ones for security use cases), this is an attempt at a more general-purpose system that can be applied to a wider range of uses.
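For intuition only, here is a minimal sketch of the flavor of such a model: trust in an unknown peer is derived from explicit ratings, discounted by how much the local node trusts each rater. The propagation rule and score ranges below are illustrative assumptions; the actual model is considerably more involved.

```rust
use std::collections::HashMap;

// Minimal sketch of transitive trust, for intuition only. Peers are
// usize ids; each rating is a score in [0, 1].
fn propagate_trust(
    ratings: &HashMap<usize, Vec<(usize, f64)>>, // rater -> [(peer, score)]
    source: usize,
    rounds: usize,
) -> HashMap<usize, f64> {
    let mut trust = HashMap::new();
    trust.insert(source, 1.0_f64); // the local node trusts itself fully
    for _ in 0..rounds {
        let mut next = trust.clone();
        for (rater, edges) in ratings {
            let base = *trust.get(rater).unwrap_or(&0.0);
            for &(peer, score) in edges {
                // Trust in a peer is discounted by trust in the rater.
                let candidate = base * score;
                let entry = next.entry(peer).or_insert(0.0);
                if candidate > *entry {
                    *entry = candidate;
                }
            }
        }
        trust = next;
    }
    trust
}

fn main() {
    let mut ratings = HashMap::new();
    ratings.insert(0, vec![(1, 0.9)]);
    ratings.insert(1, vec![(2, 0.5)]);
    // Peer 2 ends up trusted at 0.9 * 0.5 = 0.45, via peer 1.
    println!("{:?}", propagate_trust(&ratings, 0, 3).get(&2));
}
```

The key property this illustrates is that trust is explicit, quantified, and local to each node's perspective, rather than hidden behind the system's design.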

Why doesn't every node validate all data?

Every approach to scaling fundamentally deals with the problem that not all nodes can be expected to validate all data. There's a vertical cap imposed by the maximum machine size, and that cap will always bind so long as there's enough computation or data to exceed it. So fundamentally, at some point, to scale beyond it, you need to drop the assumption that every node validates everything.

Typically, this is approached from a partitioning perspective: a large group of machines validates one partition, and the results are then joined together. This creates messy problems around cross-dependencies. What happens if an operation depends on values in a different partition? How are the more extreme cases handled? While there are solutions for this, there is a different way to approach the problem entirely: switch to a more granular architecture, where individual events become the primary concern.

By dropping this bulk-synchronous, partition-merge approach and instead allowing flexible partitions per conflict identifier per data item, we gain a huge number of benefits. First, dealing with conflicting values becomes an independent problem PER conflicting value, rather than a problem of shuffling between partitions. Second, determining partition sizes, load balancing between them, and dealing with skewed key distributions all become much easier. Since each peer re-balances with respect to its own unique key space, there is a different partition definition for EVERY peer, allowing automatic overlapping redundancy without specific assignment.

This means partition identities are calculated as a function of peer key, data key, peer score, and fee distribution -- rather than being uniquely determined by the data itself. This property also yields flexibility in fee costs, as some operations may not need the same level of validation as others.
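As a hedged sketch of how a single peer might make that membership decision locally (the XOR distance, weighting, and threshold below are illustrative assumptions, not Redgold's actual formula):

```rust
// Illustrative sketch: one way a peer could decide whether a data item
// falls inside its own partition. Nothing here is Redgold's real formula.

/// Kademlia-style XOR distance between a peer key and a data key,
/// folded down to the first 16 bytes for a comparable scalar.
fn xor_distance(peer_key: &[u8; 32], data_key: &[u8; 32]) -> u128 {
    let mut acc: u128 = 0;
    for i in 0..16 {
        acc = (acc << 8) | (peer_key[i] ^ data_key[i]) as u128;
    }
    acc
}

/// Should this peer take responsibility for this item? A higher peer
/// score and a higher attached fee expand the radius of keys covered.
fn in_partition(peer_key: &[u8; 32], data_key: &[u8; 32], peer_score: f64, fee: f64) -> bool {
    let base_radius = u128::MAX / 64; // illustrative base coverage
    let radius = (base_radius as f64 * peer_score * (1.0 + fee)) as u128;
    xor_distance(peer_key, data_key) < radius
}

fn main() {
    let peer_key = [0u8; 32];
    let mut data_key = [0u8; 32];
    data_key[1] = 0x80; // a moderately distant key
    // Different scores yield different membership decisions for the same
    // item -- coverage overlaps rather than tiling the key space.
    println!("{}", in_partition(&peer_key, &data_key, 0.9, 0.1)); // true
    println!("{}", in_partition(&peer_key, &data_key, 0.1, 0.1)); // false
}
```

Because each peer evaluates this against its own key and score, partition boundaries differ per peer, and overlapping redundancy falls out automatically rather than being assigned.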