Interface Domains
Oftentimes the most transformative technologies arise at the interface between two disciplines. The combined efforts of computer scientists and biologists gave rise to computational biology, which in turn enabled genomic sequencing and the precise targeting of genetic mutations through gene editing. The field we call AI itself began as the brainchild of statisticians and computer scientists, as statistical machine learning. AI today looks like a marriage between mathematics (namely, linear algebra and calculus) and computation (namely, hardware and software designed for high-throughput matrix operations). We see a burgeoning interface domain today with potentially profound consequences for technology: the intersection between AI and Web 3.0. It may be the most consequential domain of the 2020s for information technology.
Why AI needs Web 3.0
It is no secret that building and deploying foundation models requires significant resources for training and inference, making it an expensive endeavor. This favors large incumbents with R&D budgets in the billions, if not tens or hundreds of billions, of dollars. If Microsoft and Amazon are building cathedrals, where are the bazaars?
Bazaars, exemplified by the open source efforts that built open technologies such as Linux, Python, and Postgres, need economic incentives to thrive. The status quo relies on donations, corporate sponsorships, and volunteers. But we need incentives that are more predictable and scalable than the goodwill of the public. We are not open AI maximalists but AI realists: we believe the future of AI is a disparate collection of AI components, both closed and open, and that healthy competition is necessary to build technology that is maximally beneficial for everyone, corporations and users alike. The internet itself was once envisioned as a set of information superhighways controlled by large corporations like Microsoft. In the end, it became a mix of open and closed protocols and networks. We believe AI's trajectory will rhyme with that of the internet. For this, we need a new, open way to build alternatives to closed-source AI models and AI applications.
Fortunately, Web 3.0 primitives, which enable the exchange of value over an open, transparent protocol, allow AI primitives to be built in an open way. Foundation models, storage and compute infrastructure, training-time and inference-time data, agent-specific artifacts: all of these components can be tokenized, and the economic incentives can be shared among researchers, training-data providers, storage and compute providers, and end consumers of AI-native applications. Web 3.0 can unify the entire research-to-product value chain in the AI economy. This opens value creation and value capture in AI models and AI-native products to a number of participants many orders of magnitude larger than in today's closed AI systems.
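To make the incentive-sharing idea concrete, here is a toy sketch of a proportional revenue split across the participant classes named above. The participant names and stake weights are purely hypothetical illustrations, not a real protocol or token standard:

```python
# Toy sketch: proportional revenue split among AI value-chain participants.
# All names and weights below are hypothetical, for illustration only.

def split_revenue(revenue: float, stakes: dict[str, float]) -> dict[str, float]:
    """Distribute revenue to each participant in proportion to their stake."""
    total = sum(stakes.values())
    return {who: revenue * stake / total for who, stake in stakes.items()}

# Hypothetical stake weights for the participant classes in the value chain.
stakes = {
    "model_researchers": 40,
    "data_providers": 25,
    "compute_providers": 25,
    "app_developers": 10,
}
payouts = split_revenue(1000.0, stakes)
```

In an actual Web 3.0 deployment the stakes would be represented by token holdings and the split enforced on-chain by a smart contract rather than by off-chain code like this.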
Web 3.0 equally benefits from the mass consumer appeal of AI-native applications. Apart from speculation, Web 3.0 has lacked a real mechanism to drive mass adoption of its architecture. Closed networks like Facebook, X, and Reddit have no incentive to cannibalize their own value capture by letting Web 3.0 primitives take hold.
AI-based products have the power to create new mass-market behaviors, effectively acting as a "trojan horse" for the mass adoption of Web 3.0 technologies. Open ownership of applications and user-facing networks is a powerful economic motivator for Web 3.0 to emerge, and it needs to be coupled with the value chain of AI models and AI-native products.
Consumers and professionals alike talking to and interacting with AI characters is a trend likely to grow exponentially.