Make it exponential

As an engineer, I tend to approach any new topic from a technologist's point of view. Framed that way it might not mean much, so let me be concrete: I mostly try to understand the structure of a field and the forces at play, and I try to view it through the lens of what I know from other sectors. If that's still unclear, let's jump straight in.

I think Moore's law holds a fundamental truth about many phenomena in our lives: a lot of variations are exponential, while the human brain can really only comprehend linear ones. From the very first application of Moore's law, the number of transistors on integrated circuits, it extends to almost anything in tech (AI, data storage, network capacity) and to much of the natural world: radioactive decay, light intensity, population growth, and so on. All of this to say that I try to apprehend any new subject through this law. It's also a very helpful lens for the entrepreneur, who needs to understand where the scaling factor is and how to build that exponential growth.

So when I entered the field of economics through Von Mises' books, I tried to see where this law applies, and thus where the exponential improvement is. And it was a tough spot: if you think about money itself, it went from commodity money (gold) to credit money, to fiat, and finally to digital fiat, but nothing exponential. The only improvements are structural, playing out over very long time frames because of the underlying infrastructure: commodity money is hard to move around (gold is heavy); credit money is lighter, but adds intermediaries; and digital fiat is even cheaper to move around, but subject to even more intermediaries. If you look into human action and human organization, the same holds true. It is closely linked to money itself: we went from nomadic life, to the division of labor, to more organized structures with cities and centralized power, then to decentralizing that power, enabling sole proprietorships, and finally the LLCs and all the structures we know today.

Maybe we can take a look at human productivity, but here again nothing exponential, just slow, roughly linear growth, and probably not linked to any improvement in human organization but to the means of production.

[Chart: Labour productivity in 2021, Federal Statistical Office]

So I started to wonder why that is, and whether we can find a scaling factor. On the why, we can of course only offer suppositions, because causality is hard (impossible?) to prove, but let's try. Where do exponentials come from? The answer is mostly compounding. Take population growth as an example: each generation of organisms reproduces, leading to a larger number of offspring in each successive generation. This can be modeled by the function P(t) = P0 · e^(r·t), with P0 the initial population, r the growth rate, and t the time.
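The growth function referenced above, reconstructed from the definitions in the text (P0 the initial population, r the growth rate, t the time), can be sketched in a few lines; the numbers below are purely illustrative:

```python
import math

def population(p0: float, r: float, t: float) -> float:
    """Continuous compounding: P(t) = P0 * e^(r*t)."""
    return p0 * math.exp(r * t)

# At a 3% growth rate, any initial population doubles roughly
# every ln(2)/0.03 years, regardless of its starting size.
doubling_time = math.log(2) / 0.03
print(round(doubling_time, 1))  # → 23.1
print(round(population(1000, 0.03, doubling_time)))  # → 2000
```

The constant doubling time is exactly what makes the curve exponential rather than linear: the absolute increase keeps growing while the relative increase stays fixed.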

This is compounding, and it translates into exponential variation. It applies to many subjects because knowledge compounds. Now, why can't we have some sort of compounding in money, or in organizations? My answer is that everything there is highly subjective, and thus subject to complex consensus and politics. No compounding is possible, because compounding comes from specialization and from building on previous knowledge, and neither is available here: consensus is slow, and nothing specializes, since politics keeps moving around and changing direction.

But as much as organizations are subject to politics, money needn't be. Of course the two are deeply linked, and even more so today, as central banks are forced to bail out part of their governments' debts, with most Western countries running a debt-to-GDP ratio above 100%. But it shouldn't be the case, because it subjects monetary policy to precisely the slowness of politics.

And that's where crypto is compelling: not only does it separate politics from money itself, it also pairs the technological improvement of the underlying infrastructure with monetary improvement. Think about every improvement of money so far: each was a gain of efficiency in transportability. If we tie transportability to the Moore's law of blockchains, on TPS and the overall infrastructure, we can envision a future where money and its efficiency improve exponentially, because the infrastructure improves outside of politics and its inefficiencies.

The cost of trust

The opening lines of the Bitcoin whitepaper shape a prism through which we can perceive blockchains: a network atop which users (whether humans, bots, AIs, or any entity capable of utilizing a private/public key) conduct transactions without the need for trust. Initially, Satoshi Nakamoto wrote about the removal of trust for payments, but we are now able to apply this same idea to pretty much anything on the internet: exchange of information, games, data, and much more. This concept of removing the trust, typically required in traditional systems, gave birth to one of the most used words in the crypto space: "trustless," and it was also one of the main points of criticism from the TradFi world (for example, this paragraph from Matt Levine). It has now become mostly a meme, with most of the new teams building on blockchains choosing "growth" over trustlessness (here, for example). Not that I want to refute this, but we will try to provide a more nuanced vision, one that finally goes beyond the alignment meme and provides tangible value for trustlessness.

How trust affects productivity

There are numerous examples we could draw from history to understand the cost of trust and how it can reshape the way we organize society, but I think the most striking one is the creation of limited liability companies. Before the creation of this legal structure, the cost of trust between investors and entrepreneurs was immense: entrepreneurs had to bear all the risk personally, and investors had to trust entrepreneurs with their assets. LLCs provided a legal framework that enabled entrepreneurs to take risks and innovate while providing clarity and safety for investors. They removed much of the cost of mediation on both sides, including legal costs, and thus improved the overall productivity of society. 

Now let's consider blockchains: they remove the need for trust for any transaction on the internet. There's no cost of mediation when you transact on Ethereum or Bitcoin: once your transaction is included in a block, there's no going back (barring reorgs, which is why you wait a few confirmations). There are also no intermediary fees: you can send, lend, borrow, and do thousands of other things without intermediaries. Blockchains provide trustless trust, and just as LLCs improved human productivity by reducing the cost of trust in business interactions, blockchains will improve the world's productivity (because yes, even AIs can have access to this) by reducing the cost of trust in any transaction.

What needs to be very clear here is that the trustless trust provided by blockchains is possible because blockchains operate using deterministic data. This means that there's a state A that is accepted by everyone (consensus), and it is only possible to transition to state B if everyone agrees (more than 51%, generally) on the new state—and the agreement is only possible because state transitions are verifiable: a signature is either correct or not. Thus, social consensus is straightforward: cryptographic truth exists, and anyone can verify that state transitions are valid.
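This property can be illustrated with a toy model. The sketch below replaces real signatures and consensus with a simplified balance map (all names are illustrative), but it shows the core idea: because the transition function is deterministic, any two nodes that replay it reach byte-identical states, so agreement reduces to comparing hashes:

```python
import hashlib
import json

def apply_transfer(state: dict, tx: dict) -> dict:
    """Deterministic state transition: debit sender, credit receiver.
    Invalid transitions (insufficient balance) are rejected outright."""
    new_state = dict(state)
    if new_state.get(tx["from"], 0) < tx["amount"]:
        raise ValueError("insufficient balance")
    new_state[tx["from"]] -= tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def state_hash(state: dict) -> str:
    """Canonical hash of a state: same state -> same hash, on any node."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

state_a = {"alice": 10, "bob": 0}
tx = {"from": "alice", "to": "bob", "amount": 3}

# Two independent "nodes" replay the same transition from state A...
node1 = apply_transfer(state_a, tx)
node2 = apply_transfer(state_a, tx)
# ...and checking consensus on state B is a trivial hash comparison.
assert state_hash(node1) == state_hash(node2)
```

Nothing in this check involves opinion or negotiation; that is what "cryptographic truth" buys you, and what non-deterministic data lacks.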

Trust in oracles

As previously stated, because blockchains operate solely on their own state, anyone seeking to leverage the trustless trust provided by blockchains for something reliant on external data encounters the oracle problem. This problem can be reframed as "Is it possible to provide trustless trust for any data?" While there have been numerous attempts to address this, we must acknowledge the true bottleneck of this issue: non-deterministic data. Cryptographic truth does not apply to non-deterministic data, making it impossible to achieve the pure trustlessness attained by blockchains. 

So, how can we effectively reduce the trust associated with non-deterministic data? This is the problem we are working to solve at Pragma, and it extends far beyond crypto—it encompasses providing a general truth for questions where knowledge is not universally available, constructing trust systems for information that is not distributed or public, and overall improving the decision-making processes of organizations. 

Now, let's return to our oracle problem: providing trustless trust for non-deterministic data. Most protocols attempting to solve it have adopted a structure similar to blockchains, since that is the approach that succeeded with deterministic data. If you look at oracles today, despite different technical choices, they largely follow the same pattern: a network of nodes reporting data (often price feeds) and reaching consensus through an aggregation mechanism (such as taking the median). However, this design falls short on several fronts. Firstly, the trust assumptions do not match those of blockchains: oracles are permissioned (nodes must be approved by the developers), slashing mechanisms are absent, and everything relies on SLAs, introducing a cost of trust.
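The aggregation pattern described above can be sketched in a few lines; node names and prices here are hypothetical. The median tolerates a minority of bad reports, but note what it does not address: who gets to be in the node set in the first place.

```python
from statistics import median

# Hypothetical reports from a permissioned node set for one feed.
reports = {
    "node-a": 3012.4,
    "node-b": 3011.9,
    "node-c": 3013.1,
    "node-d": 2500.0,   # outlier or malicious report
    "node-e": 3012.7,
}

# With fewer than half the nodes misreporting, the median ignores them.
price = median(reports.values())
print(price)  # → 3012.4
```

The aggregation itself is robust; the trust assumption lives entirely in the membership and accountability of the reporting set.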

The cost of trust is the sum of expenses, risks, and inefficiencies that arise when relying on intermediaries or centralized entities to facilitate transactions or verify information. These costs include legal and mediation fees, intermediary charges, the risk of malicious actors or shutdowns, regulatory burdens, and security vulnerabilities. 

Furthermore, as non-deterministic data lacks a single "truth", relying on a centralized decision-making system is nonsensical. Consider a price feed as an example. You can obtain the ETHUSD feed from various oracles, but what does it even mean? If you visit the websites of Chainlink, Pyth, and Chronicle simultaneously, you will find differing prices for the same feed at the same time. This discrepancy arises because there is no singular "price" of ETH; rather, there are multiple markets with varying prices, and one can aggregate these markets, or a subset thereof, using different aggregation methods. Given the inherent variability in non-deterministic data, it is illogical to expect different protocols with distinct purposes to rely on the same pricing, parameters, and sources. Why would a lending protocol on Ethereum require the exact same pricing, parameters, and sources as an options protocol on L2? This example only pertains to ETH, but there are countless derivatives, stablecoins, staked tokens, and other assets that cannot conform to the same model—let alone entirely different types of data, such as election results or randomness requests. 

For these reasons, solving the oracle problem necessitates departing from the blockchain approach. We require a novel approach that considers all the intricacies of the problem, which is a worthy endeavour as it will enhance the efficiency of systems and organizations by reducing the cost of trust. 

What's next for oracles

The recent drama surrounding Renzo has underscored a broader limitation that we are slowly but surely running into with oracles: the lack of modularity. By "the Renzo drama," I am referring to the depeg part, not the tokenomics one, although they are connected. So let's delve into the pricing of assets, because that is what this is all about. But first, we should understand how oracles currently price assets and why. For this, we need to go back to EthLend, the first peer-to-peer lending protocol on Ethereum. It later rebranded to Aave and shifted to a peer-to-pool model, as the peer-to-peer model had proved too complex for end users, who had to manage positions alone, with many parameters and without the required knowledge. This model change drove adoption thanks to its simplicity for the end user, and the infrastructure enabling it was the oracle. The oracle's role was to report the price of short-tail assets in the safest manner, avoiding manipulation and guaranteeing liveness. This is when the current model was introduced: a set of nodes reports prices across different markets (on- and off-chain), and a price is constructed using either a median or a VWAP.

It worked pretty well, for short-tail assets at least, because a VWAP is arguably the best pricing solution for this kind of asset, even if new pricing methods are emerging. But crypto is maturing: more assets are available for trade, and new instruments are being created: LSTs, LRTs, stablecoins, options, perps, etc. Does that mean we should keep pricing them all with a VWAP? Definitely not. Let's consider a few examples, starting with the Renzo incident. Here are the prices reported by Chainlink and Redstone for ezETH during the depeg:

[Chart: ezETH prices reported by Chainlink and Redstone during the depeg. Source: https://twitter.com/inkymaze/status/1783075969940574496]

We can observe a very similar behaviour stemming from the same pricing method: a VWAP. During this crash, was Renzo insolvent or illiquid? The answer is the latter; the available liquidity was too thin to support people exiting, but it doesn't mean that the asset became undercollateralized. For this kind of asset, VWAP pricing just isn't the right tool. 
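To make the failure mode concrete, here is a minimal sketch with hypothetical numbers: when most of the volume consists of forced exits into a thin book, the VWAP prints far below the claim's backing even if the collateral behind the asset is fully intact.

```python
def vwap(trades):
    """Volume-weighted average price: sum(price * volume) / sum(volume)."""
    total_volume = sum(v for _, v in trades)
    return sum(p * v for p, v in trades) / total_volume

# Normal market: deep liquidity near the fair price of 1.00 (in ETH terms).
deep = [(1.00, 500.0), (0.999, 400.0), (1.001, 450.0)]

# Depeg scenario: same asset, but most volume is panic exits into a
# thin book at a discount. All numbers are illustrative.
thin = [(1.00, 20.0), (0.95, 80.0), (0.88, 150.0)]

print(round(vwap(deep), 3))  # → 1.0
print(round(vwap(thin), 3))  # → 0.912, despite unchanged collateral
```

A VWAP measures where volume traded, not what the asset is redeemable for, which is exactly the distinction between illiquidity and insolvency.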

Let's delve further into another example: the MakerDAO/Morpho/Ethena combination. Maker allocated part of their balance sheet to loans backed by Ethena USDe and sUSDe collateral, using Morpho with a fixed price oracle. This means that USDe is hardcoded to $1 on the Morpho pool. To understand the trade-offs, I encourage you to read this great blog post by Sebastien Derivaux. Is the fixed price better than a VWAP in this case? Probably. Is it the best solution? Probably not. 

There are dozens, if not more, cases like this where the VWAP just isn't the right tool to price the assets. I've listed multiple examples but haven't answered the question of what the best pricing model is for each of these assets. And while I may have a personal opinion, the real answer is that my opinion doesn't matter. The way an instrument or an asset is priced is a choice made by a protocol and will become a main differentiator for them. The reason why it's not the case now is that it's just not possible yet — every oracle usable on the market is a monolithic infrastructure, trying to fit the same model to every use case created. If we get the ability to change the model with the same (or fewer) trust assumptions, not only will we get a thousand use cases that weren't even possible before, but we will also have the greatest efficiency improvement in DeFi to date (and probably in other verticals too). 

This is what we're building at Pragma. If you're a financial engineer working on the pricing of assets in crypto, or anyone who wants to join our mission, please reach out at careers@pragma.build.

The Phases of Crypto Progress

It's been just over a few weeks since Blast, the much-anticipated "L2" (it really isn't) with native yield, announced its launch, aiming to attract liquidity (now exceeding $850 million; the team clearly knows how to market) to what currently operates as a multisig. While this event sparked valid criticism, I always tend to seek out the positive side of things. When I mention "positivity," it's less about the situation itself and more about extracting lessons from any circumstance. There's so much to unravel from this event, so let's dig in.

George Santayana, in The Life of Reason: The Phases of Human Progress, coined the renowned adage (often misattributed to Churchill, who later paraphrased it): "Those who cannot remember the past are condemned to repeat it." Although the vastness of the five-volume work proved daunting (I must confess, I faltered after the second volume), its first part, which contains this quote, highlights the significance of historical wisdom. The idea lingered, seemingly abstract, until now, when I finally grasped Santayana's point.


René Magritte, "Those who cannot remember the past are condemned to repeat it." (George Santayana, The Life of Reason, 1905), from the series Great Ideas of Western Man, ca. 1962.

A quick look back

Human organization's roots trace back to the shift from a nomadic, hunting lifestyle to settled agriculture. This change allowed surplus food production, paving the way for specialization beyond farming. These two aspects—division of labor and individual ownership of work outcomes—form the bedrock of structured exchange. I craft swords, needing sustenance; you cultivate cereals, needing defense—we've established direct exchange. Yet, sometimes, direct exchange isn't viable, leading to the introduction of indirect exchange. If I possess a sword but wish to purchase cereals at any time, I'd seek assurance that I can do so regardless of the season. Swords hold value mainly in times of war; hence, to ensure acquiring cereals consistently, I'd exchange them for something exchangeable later. But what could that be?  Food perishes, and rocks are easily replicated. Gold, silver, and copper emerge as viable options—ubiquitous, non-perishable, and challenging to fabricate. Voilà! Have we birthed the concept of money? Yes, when these materials become universally accepted for trade, they form commodity money.

Now, let's delve deeper. Collective consensus (albeit unconscious) on a particular commodity's suitability as a medium of exchange prompts standardization of its form and size, facilitating transactions. Standardization enables swift recognition of a commodity's weight, quality, and thus, its value—simplifying exchanges. Consequently, entities begin producing standardized pieces of gold or silver, adorned with stamps for easy valuation, and thus easy exchanges.

We now have commodity money standardized by a trusted entity (often the governing body), guaranteeing each piece's weight while pegging its value directly to the weight and quality of the underlying commodity it's made of. Any change in the piece's weight or quality by the trusted entity alters its value because the value of the money is derived from the commodity (despite attempts to impose consistent values on coins with lesser precious metal). However, transport issues arise as gold is challenging to move. A consensus emerges: I offer you a claim to a set amount of gold, a paper-based claim obviating transport issues, ensuring gold claimability anytime. Possessing this claim, acquiring actual gold entails a loss due to transport and security costs. Thus, holders begin using this claim to procure other goods. Gradually, this practice proliferates, as it's the most logical course of action. This marks the advent of credit money. And with credit money, banking surfaces—someone must physically safeguard the gold.

Credit money enhanced efficiency but blurred the distinction between money and its underlying commodity. Over the course of history, keeping money backed by gold (the gold standard) became a symbol of prestige for the issuer. However, realizing that abandoning the gold standard meant controlling the money supply, issuers assumed control over issuance: this is the final transition to fiat money, our modern currency. Worth noting: every transition between money types is prompted by a marginal efficiency improvement for users. From no money to commodity money, enabling indirect exchange; from commodity money to credit money, easing transactions through easier transport; from credit money to fiat money, inflation engineered by an entity. You might wonder how inflation is an improvement: it isn't. Governments created this semblance of improvement by taxing non-fiat money use, sealing the deal.

We've barely touched on banks, which emerged with credit money and whose history intertwines closely with the evolution of money, often under government ownership. A detailed history of banking could consume considerable time, so let's make it quick. Banks' initial purpose was safeguarding gold and ensuring a 1:1 backing for issued claims. As governments took control of money creation and discarded the gold standard, banks began backing claims with other collateral, often government bonds. At this juncture, holding a bill meant possessing a credit on a credit, ultimately collateralized by an unspecified amount of gold and some collective trust in the system. This paved the way for fractional reserve banking: holding a claim on bank money backed by varying assets put to work for profit (within a regulated framework, right?).

And that's pretty much it: we've arrived at the 21st century, with modern banking and money.

A quick look back ... into crypto

Transitioning to digital money and crypto, everything began with Bitcoin in 2009. In 2023, Bitcoin effectively functions as our digital commodity money. It embodies the properties of scarcity, durability, transportability, divisibility, and holds intrinsic value through the stored energy it represents. However, the essence of money lies not solely in its properties but in widespread societal acceptance. While it's early to categorically term Bitcoin as commodity money, it's accepted as such in the digital realm. One could even argue that it surpasses gold as commodity money, given its ease of transport and known scarcity. However, for our reflection, this argument holds little relevance. Let's progress to Ethereum. Ethereum functions as a programmable shared-state machine. Its significance for us here lies not predominantly in its asset ETH, but in its ability to enable transparent, programmable value flows through smart contracts. Over the years, developers have leveraged this technology to transparently rebuild existing real-world markets using our new commodity money(s).

The increase in usage of these new commodity assets and their underlying network made transactions expensive due to higher gas prices. Consequently, Layer 2 solutions were created. An L2 consists of two elements (in a very simplified model): a smart contract holding bridged assets and another preserving its state. While this simplification overlooks intricate mechanics like asset bridging methods or state transition mechanisms, it underscores the transformative nature of L2s: a marginal improvement on exchanges, as they vastly reduce costs. It also implies trading claims on assets rather than the assets themselves. Familiar territory, isn't it?
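The simplified two-element model can be made concrete with a toy escrow. This is a deliberately minimal sketch (all names hypothetical, and a real rollup gates withdrawals on verified state proofs); its point is only to show that what circulates on the L2 is a claim on the escrowed L1 asset, not the asset itself:

```python
class BridgeContract:
    """Toy L1 escrow: locks bridged assets and mints 1:1 claims on the L2."""

    def __init__(self):
        self.locked = 0.0      # assets held on L1
        self.l2_claims = {}    # claims circulating on the L2

    def deposit(self, user: str, amount: float):
        self.locked += amount
        self.l2_claims[user] = self.l2_claims.get(user, 0.0) + amount

    def withdraw(self, user: str, amount: float):
        # A real rollup would verify a state proof here; we only
        # check the claim balance to keep the sketch minimal.
        assert self.l2_claims.get(user, 0.0) >= amount
        self.l2_claims[user] -= amount
        self.locked -= amount

bridge = BridgeContract()
bridge.deposit("alice", 5.0)
bridge.withdraw("alice", 2.0)
# Invariant of today's L2s: claims are fully backed by the escrow.
assert bridge.locked == sum(bridge.l2_claims.values())
```

The essay's argument is precisely about what happens when that final invariant is relaxed, i.e. when claims stop being 1:1 backed.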

A quick look forward

Now, the pressing question: will we come full circle, and what lies ahead? First, we must recognize the catalyst that invariably propels markets from one phase to another: a marginal improvement in exchange efficiency. This covers many aspects: faster, cheaper, more efficient liquidity, and broader accessibility. With this understanding, it's only a matter of time before L2s become the primary exchange platforms. As we've seen, these new technical assumptions will bring an economic transformation: the assets exchanged in the crypto realm evolving from commodities to credit assets. Consequently, crypto will shift from using commodity money to credit money. This shift means L2s metamorphosing into banks, with your claim on these L2s essentially representing the asset on L1, and the gap between the ETH held on L1 and the L2 claim widening.

That being said, it's time for you to choose your preferred path, dear anon:

Path one 

One L2 starts luring liquidity with the promise that assets on the platform will have a yield-bearing backing, calling this mechanism "native yield" (when in reality it's level 0 of fractional reserve banking). Even though it's just a multisig, users stake nearly a billion dollars. This triggers a stream of ideas for other teams: you could pull in millions in TVL by creating an L2 (you don't even need to build it; Conduit will handle it for you), offering some APR, and taking a juicy cut of it. But a 4% APR won't cut it, so another team offers 15% with no risk. After all, what could possibly go wrong? (AL2meda might be a good name for this project.) Next thing you know, another team launches an L2 with a whopping 30% native yield. Seems fantastic, right? We're in the midst of a bull market, everything looks stellar, and returns are incredible. But remember, in this world, what goes up eventually comes down. Those 15% or 30% returns only exist by taking directional risk; a single downturn will wipe those L2s out, and the regulators will step in. Regulators don't need numerous failures to justify the regulations they want; a few signals pointing in their direction suffice. Check out their stance on wildcat banks. Plus, L2s aren't decentralized (yet), and probably won't ever be as decentralized as Bitcoin or Ethereum. Looks like we're back to square one, with regulators ensuring that L2s follow the rules they set, which could go as far as taxing non-CBDC transfers on the network, censoring users, and who knows what else. They already did it years ago.

Path two

Two fundamental truths emerge: placing money in a few hands tends to end poorly, and when money is involved, principles often take a back seat. Let's be clear: crypto has shown that financial rationality governs much of our behavior, principles included. To all the Lido critics and Ethereum aligners: the only road to behavioral alignment is cryptoeconomic alignment. So, to ensure alignment with a vision, it's crucial to offer incentives stronger than the alternative behaviors. Trusting a group based on its ethos doesn't guarantee consistent behavior; motivations, ideas, and individuals evolve, and morality is adaptable. Now, the question arises: how can we ensure most exchanges occur on L2s with reasonable risk management, given that offering yield becomes a competitive advantage?

Trying my luck

What I'm about to discuss here represents solely my viewpoint, which remains subject to change and evolution. Nevertheless, I'll endeavor to express strong opinions, recalling a saying attributed to a former French president: "le pire risque c'est celui de ne pas en prendre," translating to "the worst risk is not taking any" (though I'd advise against employing this in crypto debates).

In my perspective, the optimal framework involves enabling "native yield" (again, it's nothing more than transparent fractional reserve banking) for those willing to assume additional risks while safeguarding those happy with holding the claim of an asset (100% backed by the asset on L1). This implies the need for distinct smart contracts on L1 based on the user's chosen mode while bridging – one for original asset holding (safe mode) and another facilitating fund allocation elsewhere (yield mode), thereby introducing additional smart contract risks. To be perfectly clear, the assets we're discussing here are short-tail assets, ETH, BTC and USDC/T, as those are the ones used as money on blockchains. From here, determining how funds are allocated from the yield mode presents several design possibilities.

Considering these assets in yield mode are (again) short-tail assets, sufficiently liquid to support derivatives, the most efficient approach to balance APR/risk tradeoffs might be to create a market for fund management. Essentially, users, upon bridging, could select a fund manager responsible for allocating their funds, preserving the value of their credit asset on the L2 while generating yield. This is equivalent to free banking: you choose the fund manager you trust most, or the one with the risk parameters you prefer, and receive its credit money on the L2. For instance, you bridge ETH to an L2, choose Gauntlet as asset manager, and receive gETH on the L2, which is nothing more than ETH "issued" by Gauntlet, except that unlike free banking, you can see where the funds are allocated and have better risk management.
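The two bridging modes sketched in the last two paragraphs could look roughly like this. This is a toy model of the proposal, not an existing interface; the class names and the Gauntlet/gETH labels simply mirror the hypothetical example above:

```python
class SafeModeBridge:
    """Safe mode: the L2 claim is 100% backed by the asset escrowed on L1."""

    def __init__(self):
        self.escrow = 0.0

    def deposit(self, amount: float) -> float:
        self.escrow += amount
        return amount  # claim minted 1:1 against the escrow

class YieldModeBridge:
    """Yield mode: funds are delegated to a user-chosen manager, who must
    keep the L2 claim redeemable while putting the assets to work."""

    def __init__(self, manager: str):
        self.manager = manager
        self.delegated = 0.0

    def deposit(self, amount: float) -> float:
        self.delegated += amount  # allocated by the manager, not escrowed
        return amount             # a manager-branded credit asset, e.g. gETH

safe = SafeModeBridge()
safe.deposit(1.0)
yielding = YieldModeBridge("Gauntlet")
yielding.deposit(1.0)
```

The point of the split is that the two deposits carry different risk profiles even though both mint a claim: the first carries only technical risk, the second adds the manager's credit risk on top.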

What I'm essentially proposing is to split the risk related to assets from the technical risks associated with L2s. Picture this: presently, when you bridge onto an L2 network, you turn to platforms like L2Beat to gauge the technical risks involved in bridging your funds there. Since most L2s today offer credit assets backed 1:1 on L1, you're not exposed to any credit or counterparty risk, making your risk assessment primarily focused on technical aspects. 

Now, consider holding ETH on Blast and ETH on Starknet. You encounter varied technical risks because they operate on different technology stacks. But beyond that, there's also diversity in credit risks. Your ETH on Blast is collateralized with stETH. And this complexity is only going to increase over time. The concept I'm proposing involves establishing a standardized interface between asset managers and L2s. This approach allows each party to specialize in their domain, enabling users to have a clearer understanding of associated risks. Here's a visual representation of it:

Establishing this fund management market on L1 introduces complexities on L2, akin to the challenges of the free banking era, when banks issued their own currencies, creating complexity for market participants. Remember, our objective is to construct the most efficient market, advocating an unopinionated approach and open markets. Hence, accommodating multiple credit assets at this level is acceptable, letting the market select the most suitable one for each use. The complexity introduced by competition between "money issuers" is what lets us credibly create better money. I encourage you to read F.A. Hayek's paper on the denationalization of money if you want to go further.

This design achieves the separation of asset risk from the technical risk of the architecture. Cryptography engineers and financial engineers possess distinct skill sets. Additionally, shared liquidity on L1 becomes a reality, accessible across all L2s. Providing a common interface for L2 asset holdings allows both ecosystems (asset managers and L2 builders) to progress independently, optimizing their products without being constrained by the other ecosystem, akin to how TCP/IP decoupled application builders from network providers.

Now, let's address potential challenges with this model. The primary concern is asset fragmentation, wherein assets possess varying risk profiles, making it challenging for L2 applications to manage these diversified risks. However, with the rise of risk analyzers, oracles, and blockchain transparency, this issue might be effectively addressed. Another issue is the potential for an asset manager to attain a monopolistic position, contradicting the system's intent, which relies on competition to ensure efficient management. This remains an unanswered question, one that warrants a separate discussion – your thoughts on this matter would be appreciated in the comments.

In conclusion, the crypto ecosystem finds itself at the crossroads of "this time is different". Creating a system revolutionizing our monetary interactions has been a painstaking process, and while we're not there yet, external challenges loom large. This journey isn't just about looking back; it's about charting a course toward a future where money becomes fairer, more efficient, and way less complex (starting with inflation). So, let's stride forward, keeping our eyes fixed on a world where money isn't just a thing of the past, but a bright beacon guiding us toward a better way to trade and share value. We have to remain focused on crafting a superior alternative to fiat money for society.


Why Cairo

Today, someone asked me why I was building on Starknet. That's a question I've probably been asked a thousand times and answered the exact same way a thousand times. But today, inexplicably, I didn't answer that question. Instead, I asked: What if the right question were, 'Why are you building on Cairo,' not 'Why are you building on Starknet?'

Indeed, I tend to think that projects choose a language and a tech stack before choosing a particular chain. Of course, there's an intimate link between these two, and the first deployment will happen on a particular chain. However, if you're deploying on Polygon, chances are that deployment on other EVM chains will follow. Stickiness resides in the tech stack; switching costs are much lower in the choice of a particular blockchain.

And that's where things get trickier: Why choose a new language tied to an early ecosystem with limited network effects, over an established one with ever-growing ecosystems built around it? 

The answer is not quite simple, and a multitude of reasons fall outside the scope of this post, but I'll try to highlight the essence of it. First of all, we need to take a step back and think about what we're trying to build here, in crypto. It's easy to forget the why amid the endless stream of Ponzis and forks, but I think summarizing it can look like this: we're convinced that society and human organizations will benefit from more transparency and verifiability, so we're developing technologies that enable the transfer of value and information in a trustless and verifiable way. And while our common belief in the greater good led us to build blockchains, we must acknowledge that blockchains are just a means to an end.


Sylve's best tweet: (https://twitter.com/sylvechv/status/1658465170421870592?s=46&t=JenRgmO2gjpNvJTF06-Big)

More than that, we now know that today's chains cannot (yet?) fully handle even a small fraction of the world's value transfer onchain (cf. the last bull markets). So what's our path forward? We'll certainly have much more block space in the near future, and L2s offer a great trade-off between security and scalability in that regard. But remember, blockchain is a means, and developments in the zk space have given us a shiny new way of getting closer to our goal: validity proofs. Because blockchains and validity proofs offer very distinct properties, they are highly complementary. When you need a hard, shared, live state with censorship resistance, you use the former; when you just need verifiability, you can use the latter. This design presents a credible path toward enabling the entire world to transfer value and information in a verifiable way.

Let's consider a simple example: you want to establish a new business in journalism to address the issues around fake information, which are expected to escalate further with the integration of AI. To achieve this, you can utilize zk-proofs to establish a chain of truth for audio/video captures. Subsequently, posting these proofs on a blockchain will create a publicly shared state, enabling anyone to verify the authenticity of the materials. In essence, you've established a public ledger of authenticated content captured by journalists: a verifiable repository preserved indefinitely, akin to a time capsule.
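To make the flow concrete, here is a minimal Python sketch of the chain-of-truth idea, under stated assumptions: the device key, record format, and function names are all hypothetical; an HMAC stands in for the attested device's signature (a real system would use an asymmetric scheme and a zk-proof rather than revealing a key); and a plain Python list stands in for the on-chain public ledger.

```python
import hashlib
import hmac

# Hypothetical symmetric key standing in for an attested device's signing key
DEVICE_KEY = b"secret-key-embedded-in-attested-microphone"

def attest_capture(media_bytes: bytes, prev_digest: str) -> dict:
    """Hash the capture, chain it to the previous record, and 'sign' it.
    The hash chain is what makes tampering with history detectable."""
    digest = hashlib.sha256(prev_digest.encode() + media_bytes).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "prev": prev_digest, "sig": tag}

def verify_record(media_bytes: bytes, record: dict) -> bool:
    """Recompute the chained hash and check the device's tag."""
    digest = hashlib.sha256(record["prev"].encode() + media_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["digest"] and hmac.compare_digest(expected, record["sig"])

# An append-only list standing in for the blockchain's shared state
ledger = []
genesis = "0" * 64
rec = attest_capture(b"interview-audio-bytes", genesis)
ledger.append(rec)
```

Anyone holding the media and reading the ledger can now check authenticity: `verify_record(b"interview-audio-bytes", ledger[0])` succeeds, while any altered capture fails verification against the published record.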



But what is the link between what we just went through and Cairo? Cairo is the smart contract language of Starknet, but before that, it's a language optimized for validity proofs. It serves as a virtual CPU with a single Algebraic Intermediate Representation (AIR), capable of describing any computation using that same "generic" AIR. The paradigm Cairo introduces is significant: it allows you to build a single application, in one language, that harnesses a public shared state on Starknet and conducts verifiable off-chain computation.

What I'm essentially emphasizing is that the new world of the verifiable web comprises both blockchains and validity proofs. Cairo is the fuel that enables both airplanes (blockchains) and rockets (validity proofs) to take off and propel you into this realm. Referring back to our previous example, it means you can code in Cairo both the logic generating the proof from the attested microphone and the on-chain logic verifying the proofs and the data. You can then subscribe to a proving service to handle proof generation from the microphone, and keep all on- and off-chain interactions under the same codebase, ensuring higher security and auditability. Overall, the advantages of an application leveraging both blockchains and validity proofs, using a single language, with all the interactions between these elements within the same codebase, are immense in terms of developer experience, auditability, and explainability.

Of course, grasping this concept remains challenging today, as network effects heavily favor the EVM, and there's still limited exploration of proofs beyond blockchains. However, significant strides are being made toward that goal, notably by projects like Giza, utilizing Cairo to verify AI inferences, Herodotus for on-chain historical data verification, Dojo delving into off-chain verifiable games, and our efforts at Pragma, constructing verifiable financial data backends. Many developments are still required to fully grasp the potential of proofs beyond, and interconnected with, blockchains.

If I'm being honest, that's definitely not the answer I initially provided to the question, but it prompted me to contemplate this further. I might be mistaken, and Cairo might not be the language that will facilitate bridging part of the gap between where we currently stand with blockchains and a completely verifiable world. Another language might serve as the conduit to reach that destination. However, I firmly believe that something akin to Cairo is necessary, and even if it isn't Cairo itself, we'd still ultimately win.

Thanks a lot to Sylve, Edouard, Kaushik, and Matthias for the ideas and feedback.