A Comprehensive View of the Creator Economy in the World of Web 3.0

HTX Ventures
Dec 15, 2021


By Jinbin Xie

Historical Transition of Creator Economy

Social Acknowledgement of Creators

In ancient China, information carriers, or means to store and preserve information, were scarce. Paper was the primary medium of expression for most people, though physical performance, such as opera, was also common.

Paper was inexpensive to produce and distribute, making it easily accessible. Among ancient creators, some were novelists and poets, building works through the accumulation of words; others were painters, building works by composing images.

However, ancient creators had rather limited revenue streams.

For instance, the typical novelist’s revenue stream would come from exchanging his or her work with publishing agents for a one-time fee; he or she would not see another dime from then on.

Even the great poet Li Bai, who enjoyed the support of rich and generous patrons, pursued a political career instead of the arts. With the lack of available revenue streams, coupled with the prevalence of "official-first" thinking in China at the time, pure content creation was not a realistic option for top-tier creators, let alone less well-known ones.

As a result, many creators stepped into the political arena, as they could gain mainstream recognition in that career.

Privileges of Creators

The invention of Gutenberg's printing press around 1440 marked a turning point: it lowered the cost of printed products while increasing the speed of printing. People craved knowledge amid the emergence of universities and the bourgeoisie.

Hand-written parchments were rare and expensive; very few could have the honor of reading these materials.

Creators began to see bigger demand in the market. Thanks to the spread of printing technology, creators could distribute their works faster and more cheaply while maintaining quality.

Books, periodicals and newspapers, and various content collections all appeared at that time. With the emergence of professional publishers, talented creators could quickly stand out with the support of the publishing industry.

Content creation was no longer only available to the nobility.

Revenue Model of Creators

In 1935, Sir Allen Lane founded Penguin Books, bringing high-quality paperback fiction and non-fiction to the market.

At that time, many publishing houses published hardcover books, but Sir Lane saw the demand for reading among the masses, and thus moved towards a “quantity over profit” strategy.

In 1960, Lane published over 2 million paperbacks of Lady Chatterley’s Lover in less than 6 weeks. Through a profit-sharing model, the author received 10% of the earnings from the publisher. This not only provided the author with a more lucrative revenue stream but also set a precedent in the publishing industry.

Speed of Information Distribution

With the eruption of the Industrial Revolution and the oil-backed economy, the transport and logistics industry saw unprecedented growth. Magazines were often published weekly or monthly, while newspapers were printed daily.

This dissemination of information was not restricted to select local areas. The original version of The Sorrows of Young Werther was also available in old Shanghai.

With the ongoing development of the logistics industry, films and vinyl records could now travel overseas and into the hands of people in Shiliyangchang in old Shanghai.

The rise of the modern education system elevated the general public’s demand for informational products, which is reflected in Maslow’s hierarchy of needs.

The progress of information distribution enabled faster distribution of creative works.

Level of Detail in Information Distribution

After World War II, the rapid growth of semiconductor technology in Japan and Silicon Valley enabled transistors to generate microwave signals for transmitting information; even images could travel via more advanced technology.

Building on the Fourier transform and Shannon's information theory, various signal compression technologies were developed to achieve the best possible data transmission over limited spectrum resources.

As a result, more revenue streams became available, including radio, billboards, premium TV channels (paid for by mailed check), and Bertelsmann's book club (a monthly subscription of recommended books). But creators were still largely constrained by these intermediaries.

Forms of Information Product by Creators

With the introduction of the internet and Web 1.0, the diversification of information carriers signaled a new era of multimedia.

With the development of HTML and JavaScript technology and the popularity of the Netscape browser, content began to appear as digital files with simple extensions, such as .html/.css/.js/.txt/.jpg/.mp4, replacing traditional books, periodicals, records, DVDs, etc.

Digital content is inherently interactive; consumers connect to the content naturally, providing unlimited potential for creators to express their creativity. As information began traveling via the internet around the globe in the blink of an eye, creative works started becoming well-known worldwide almost instantly, laying a solid foundation for “internet celebrities” and influencers to be born.

Professionally Generated Content (PGC), which was and still is primarily released free of charge, profited only from ads. In the Web 1.0 era, the top players in the space were mostly portal websites. They would publish content submitted by professional writers, and editors would put together their selections meticulously, hoping that at least one of the many articles would be a hit with audiences. When one was, it would bring a large amount of traffic to the corresponding website, thereby increasing the prices of advertising spots.

For personal website owners or bloggers, creating content in highly vertical tracks, accumulating a number of fans with specific interests, and cashing out on Google AdSense were more common trends. These creators didn’t necessarily possess special creative talents; many were normal people who used these platforms to make their voices heard.

E-commerce platforms began to generate profits from the sales of physical products, and producers collected their share afterward. E-commerce was treated as a substitute for the traditional publishing industry. Publishers and music companies began to shut down. As a consequence, copyright infringement surged, worsening the state of intellectual property protection.

Then came the emergence of complex interactive products such as games, which integrated images, text, and plotlines. If novels and movies were meant to immerse readers in the parallel world created by their creators, the emergence of information products such as games began to completely immerse users in them. These experiences were the result of contributions from various artists and creators across many disciplines.

The diversification of new information products meant creators had more choices than ever. Even today, the development of the entire industry is complex and ripe with opportunities for a variety of creators.

Inherent Popularity of the Creation Model

In the Web 2.0 era, online payments became instant and mobile, and recommendations could be powered by algorithms. The driving force behind the development of Web 2.0 was the social networking service (SNS). Thanks to SNS, everyone could be a creator with equal opportunity. In contrast to ancient society, young people today aspire to be internet celebrities and influencers and to gain the attention, fame, and money that come along with it.

Looking back at history, the advancement of science and technology has benefited human beings immensely. However, the downsides should not be overlooked. Behavioral data theft and the arbitrary deletion of user-generated content (UGC) have frequently made news headlines. Centralized platforms have endangered the ownership of creators in the following ways:

· Signing unilaterally beneficial contracts with creators. Platforms reserve the right to delete any creator's content; that is to say, as long as content is stored on their servers, they can delete anything as they please.

· Recommendation algorithm. Platforms are devoted to mining every piece of data available to recommend things a user may like. Every piece of user data is captured and extracted to establish consumption patterns that could lead to a successful purchase.

· The absence of a positive correlation between the popularity of a creator's content and its perceived value. For example, a certain network promoter produced an article with over 100 million impressions, but its revenue was $0. Judged by conversions alone, the content earned nothing, yet its value is clearly not zero. This raises the question: how can we use financial tools to measure the value of content influence?

Direction of Tech Stack for Creator Economy

We’ve thoroughly discussed how the creator economy is going, how it developed, and the current problems in the industry.

Now we will dissect Web 3.0 from the perspective of the tech stack and elaborate on the technical development framework, as well as some other specifics.

A typical internet technical stack is illustrated above; every fully functional application follows this framework.

For the development of a traditional user management system, a Linux system is used to host a Java-friendly environment. The back-end Java code is stored and run in the Linux file system. MySQL is needed for operations such as adding, deleting, modifying, and querying records, in order to maintain a database of relational data (usernames, phone numbers, etc.).

The back end exposes a REST Application Programming Interface (API) to the front-end application, and users interact with it once a TCP three-way handshake establishes the connection. Through a browser, they send GET requests to the website and download resource files such as HTML, CSS, and JS; interaction with user data is then completed on the site.
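To make this concrete, here is a minimal front-end sketch of that request/response pattern in TypeScript; the /api/users endpoint and the response fields are hypothetical, invented purely for illustration:

```typescript
// Minimal sketch of a browser front end calling a REST API over HTTP.
// The endpoint path and response shape are hypothetical.
interface User {
  id: number;
  username: string;
  phone: string;
}

async function loadUser(id: number): Promise<User> {
  // The browser issues a GET request; the TCP three-way handshake has
  // already established the underlying connection.
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return (await res.json()) as User;
}

loadUser(42).then((user) => console.log(user.username));
```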

The description above summarizes the basic design of traditional internet applications. The future development of Web 3.0 won’t deviate from the technical foundation established during Web 2.0. Innovation does not pop up out of nowhere; micro-innovations build up incrementally over time. Furthermore, the technical foundation to support these innovations must be established.

So under Web 3.0, what kind of changes are needed from a tech stack development perspective? Web 3.0 technology has several major features: decentralization, encrypted privacy, built-in assetization, more direct revenue streams, delegated power, etc.

There are also major differences in the creator economy in the Web 3.0 era. Creators exist in a variety of industries, producing different content in different ways and through many formats.

To demonstrate how technology and the creator economy correspond, let’s use a chart. The content types and formats will be on the x-axis and the different levels of tech stacks will be on the y-axis.

File System

A file is the most basic unit of information produced by creators.

The top operating systems in the space (Linux, Windows, and macOS) are sufficient for file management. There is no need to reinvent the wheel here, as doing so would only raise the barrier to entry.

So what is the biggest file system variable in Web 3.0? The environment of the file system changes from controllable to uncontrollable. Essentially, the aggregated file system transforms from centralized to decentralized.

Due to changes in the external environment, risk control measures and fraud-proof certifications are required for file security management. A typical case is Filecoin, whose unique duplicate proof (proof of replication) algorithm is considered the safest in the industry.

Why is it the safest? First, we must understand how a file is stored in a decentralized environment.

A simple duplicate proof

Toy solution

  • Each miner submits an encrypted copy to the verifier.
  • The verifier computes the hash of the copy and compares it to the blockchain record.
  • If the hash of the copy matches the on-chain hash, the proof passes.
  • This is repeated at every verification.

Drawbacks

  • The miner's proof is very large; its size is comparable to the encrypted copy itself
  • A long time is spent calculating the hash of the copy
  • The copy could be lost between two inspections
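For intuition, the naive scheme can be sketched in a few lines; Node's built-in crypto module stands in for whatever hashing the real system would use, and the data structures are invented for illustration:

```typescript
import { createHash } from "node:crypto";

// Naive duplicate proof: the miner ships the entire encrypted copy, and the
// verifier hashes it and compares the digest against the on-chain record.
function verifyNaively(encryptedCopy: Buffer, onChainHash: string): boolean {
  const copyHash = createHash("sha256").update(encryptedCopy).digest("hex");
  return copyHash === onChainHash; // must be repeated at every inspection
}
```

The drawbacks above follow directly: the "proof" is the whole copy, and hashing it end to end is slow.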

Probabilistically checkable proof

Implications of PCP:

  • The verifier does not need to verify the entire document. Only random spot checks need to take place over time to achieve a 99.999% confidence level

How:

  • The verifier randomly selects a part of the file, c
  • The certifier locates c
  • The certifier provides a statement that c is part of the copy
  • The certifier provides the probability associated with the statement
  • The process is repeated until the target confidence level is reached

Advantages:

  • The certifier’s statement needs to be checked only a few times.
  • The scale of the process is downsized. For example, 50 random checks of a 1000-fragment file are enough to achieve confidence.
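The arithmetic behind that claim can be sketched as follows: if a cheating certifier is missing a fraction f of the fragments, each independent random check catches it with probability f, so k checks catch it with probability 1 - (1 - f)^k (an idealized model, assuming uniform sampling):

```typescript
// Probability that k uniformly random fragment checks expose a prover
// that is missing a fraction `missing` of the file (idealized model).
function detectionProbability(missing: number, checks: number): number {
  return 1 - Math.pow(1 - missing, checks);
}

// Example: a prover missing 10% of a 1000-fragment file, probed 50 times,
// is caught with probability ~0.9948.
console.log(detectionProbability(0.1, 50).toFixed(4));
```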

Remaining questions:

  • How can you prove c is part of the copy?

Interactive Duplicate Proof

  • According to the PCP theorem, the verifier randomly checks file fragments multiple times.
  • Each certifier follows these steps:
  • Generate a full Merkle tree from the file copy
  • For the fragment c requested by the verifier, generate the Merkle path M_route[c]
  • Send M_route[c] to the verifier

The verifier executes the following steps:

  • Verify whether c is legitimate
  • Verify whether M_route[c] is valid
  • Verify that the Merkle root is correct (public knowledge)

This is still an improvised storage certification scheme: copies may be lost in the gap between two verifications.
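A minimal sketch of that Merkle-path check, assuming SHA-256 and a simple left/right sibling encoding (the real Filecoin circuits are far more involved):

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

interface MerkleStep {
  sibling: Buffer;        // hash of the sibling node at this level
  siblingOnLeft: boolean; // whether the sibling sits on the left
}

// Recompute the root from fragment c and its Merkle path, then compare it
// with the publicly known Merkle root.
function verifyMerklePath(
  fragment: Buffer,
  path: MerkleStep[],
  expectedRoot: Buffer
): boolean {
  let node = sha256(fragment);
  for (const step of path) {
    node = step.siblingOnLeft
      ? sha256(Buffer.concat([step.sibling, node]))
      : sha256(Buffer.concat([node, step.sibling]));
  }
  return node.equals(expectedRoot);
}
```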

Non-interactive Space-time Proof

  • How can we guarantee that miners do not discard files between two verifications, not even for a second?
  • Solution: repeatedly generate the duplicate proof
  • A random number r is published and verified periodically
  • r is mapped to a challenge c
  • Challenge c_1 is derived from c via a hash function
  • The certifier calculates the Merkle path and generates a copy proof Pi_1
  • Challenge c_2 is generated by hashing [Pi_1, c]

The above content includes excerpts from Applications of Zero Knowledge Proof in Filecoin by Yuming Huang of Huobi Ventures.
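As a rough, self-contained illustration of that challenge chain (not part of the excerpt cited above), the sketch below derives each new challenge by hashing the previous proof together with the previous challenge; the proof itself is stubbed out, whereas real Filecoin wraps Merkle paths in SNARK-based replication proofs:

```typescript
import { createHash } from "node:crypto";

const hashHex = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

// Stub: in reality this would build a Merkle path over the stored copy
// and wrap it in a zero-knowledge proof.
function generateCopyProof(challenge: string): string {
  return hashHex(`proof-over-${challenge}`);
}

// Derive a chain of challenges from the public random number r, so the
// prover must hold the data continuously between on-chain checkpoints.
function proveSpaceTime(r: string, rounds: number): string[] {
  const proofs: string[] = [];
  let challenge = hashHex(r); // c is derived from the public randomness r
  for (let i = 0; i < rounds; i++) {
    const pi = generateCopyProof(challenge); // Pi_i
    proofs.push(pi);
    challenge = hashHex(pi + challenge);     // c_{i+1} = H(Pi_i, c_i)
  }
  return proofs;
}
```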

Based on the non-interactive space-time proof reinforced by Zero Knowledge Proofs, a comparison of different storage proofs demonstrates that Filecoin's is the most reliable and secure. Filecoin has addressed the problem of efficiently proving that a file is securely stored in an untrusted environment.

Though the security problem has been resolved, there is still room for exploration and optimization, for instance:

  • Accelerating the Zero Knowledge algorithm with ASIC chips
  • Reducing the complexity of Zero Knowledge proofs by applying more advanced mathematics

In the Web 3.0 ecosystem, storage proof is not the only factor to consider at the file system level. Traditional files are stored under file paths, and internal IP addresses are recorded by the system; outside parties locate files by combining the file path and the host's IP address into a locator string.

By contrast, the new naming schemes, represented by the content identifier (a file-hash CID) and the IPLD data structure, allow a file to be located and retrieved directly by its content. These technologies are interoperable between IPFS and many other projects and are widely used.
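The difference between the two addressing styles can be sketched as follows; a plain SHA-256 digest stands in for a real CID, which IPFS actually builds with multihash and multibase encodings:

```typescript
import { createHash } from "node:crypto";

// Location addressing: "where the file lives" (host + path).
const locationAddress = "10.0.0.12:/var/data/article-final.md";

// Content addressing: "what the file is" - the name is derived from the
// bytes themselves, so identical content always gets the same name.
function contentAddress(content: Buffer): string {
  return createHash("sha256").update(content).digest("hex");
}

const cidLike = contentAddress(Buffer.from("hello web3"));
console.log(locationAddress); // breaks if the host or path changes
console.log(cidLike);         // any node holding these bytes can serve them
```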

To summarize, only by making fraud attacks verifiable at the mathematical level can the security of the file system truly be achieved. The use of semi-centralized solutions is worthless.

Database

GET and PUT are the operations creators use most frequently when processing files. However, more complicated circumstances bring more complexity: files may carry logical relationships among their data, or require extra operations such as add, delete, search, or update.

Let's say a key opinion leader (KOL) published an article. The article file, the KOL's user name, the creation time, and the corresponding CID of the file need to be recorded so that the KOL can easily locate the file using this information. Whenever the article is revised, all of this information must also be modified. The complexity of the data requires an Excel-like database to manage it all, similar to the way a librarian needs a management system to record book information.
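A minimal sketch of the kind of record such a database would hold; the field names are invented for illustration:

```typescript
// Hypothetical index row linking an article's metadata to its content CID.
interface ArticleRecord {
  author: string;     // the KOL's user name
  title: string;
  createdAt: string;  // ISO timestamp of creation
  updatedAt: string;  // must change whenever the article is revised
  contentCid: string; // CID of the article file in decentralized storage
}

// Revising the article yields a new CID, so the index row must be updated too.
function reviseArticle(record: ArticleRecord, newCid: string): ArticleRecord {
  return { ...record, contentCid: newCid, updatedAt: new Date().toISOString() };
}
```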

Compared to Web 2.0, there are huge challenges in implementing decentralized databases. Bitcoin and Ethereum are both decentralized database networks: every client node runs database components that together maintain an implicit table of data and account states across all blocks. This database requires every node to update instantly; as a result, node energy consumption is high, not to mention the delay caused by all nodes updating at the same time. Why is storage space on the chain so limited? Compromises must be made to ensure strong consistency in a Web 3.0 ecosystem.

Currently, some large decentralized databases use Layer 2, which doesn't require network-wide consistency yet keeps the data tamper-proof. ThreadDB, which is based on IPFS, provides a decentralized Layer 2 database, but security is still not guaranteed. Nonetheless, a decentralized database could also be built on Filecoin.

Relational and non-relational databases are powered by different programming languages and vary in indexing and operation speed. More application scenarios, such as database sharding and conjunctive queries across multiple nodes, are waiting to be explored. Opportunities will reveal themselves as the unique difficulties and challenges of Web 3.0 are encountered.

Unlike today's data lake and data warehouse industry, a decentralized database cannot trust its external environment, and the P2P network is completely decentralized. Databases across different network nodes must be classified and networked to execute transactions as efficiently as possible.

Meanwhile, simultaneously providing anti-fraud proof while running decentralized database nodes remains another challenge.

Computing Layer

The computing layer refers to the computing process in which a virtual machine executes operation codes on a hardware platform to handle business logic. In terms of functionality, virtual machines can be classified into two types: limited-capacity virtual machines and full-capacity virtual machines.

Virtual Machine with Limited Capacity

The EVM, with Solidity as its representative smart contract programming language, has become prevalent among Web 3.0 tech stacks, despite some minor defects. Its popularity is attributed to the open source movement.

The open source movement traveled bottom-up; EVM first became popular with developers. It’s impossible to promote a new standard from the top down, as there is always resistance from the developer community and there is a learning curve. Over time, Ethereum was developed and steadily became the go-to standard among blockchain developers.

However, the EVM does have its limitations. There is no timer function, so smart contract execution cannot be triggered automatically when certain conditions are met. The underlying code cannot communicate with other decentralized storage systems. Moreover, smart contract code cannot initiate HTTP requests to call services provided by other HTTP APIs, such as a text message API.

For complex code, gas fees and the pressure of state data explosion both increase. The blockchain data that every Ethereum node must keep in sync has already reached terabytes (TB). That's why some projects have adopted approaches to reduce gas costs and the volume of state data generated: they move the contract execution layer to Layer 2, where every execution produces a Zero Knowledge Proof, while Layer 1 only carries out verification by adding or erasing execution records. The network-wide update that follows stores only a few bytes.

Fully Functioned Virtual Machine

Many existing so-called Web 3.0 applications are in fact semi-centralized. For example, how can a decentralized blog platform enable its readers to subscribe to an email newsletter? Unless it uses another vendor’s SaaS subscription, the existing smart contract platform is incapable of providing such a service.

Some projects have adopted alternatives:

One project built a network made up of WASM virtual machine nodes. Each developer uses his or her favorite programming language to code the logic as a binary file executable by WASM and stores it on the IPFS network. Each developer must also describe the HTTP portal in a standardized descriptor document and submit it to the network. Anyone who wants to use services in the network composes services with different properties via DSL programming to accomplish his or her goals. For example, if a developer publishes a service that implements a mail protocol, he or she can benefit from the use of that service as soon as it is deployed, achieving functions that many DAPP developers only dream of.

WASM virtual machines are so powerful that some teams are even running mature industrial virtual machines such as the EVM, LLVM, and the JVM inside them. Within the same virtual machine sandbox environment, Solidity developers can indirectly gain capabilities, via the state machine, that the EVM alone could never provide. Furthermore, such a scenario could extend the unique benefits of each chain through a cross-chain interoperability protocol.

How a state machine can support a fully functional Web 3.0 application across interoperable virtual machines remains an open question. In Web 2.0, the whole ecosystem is inherently interoperable via calls to different REST APIs. Imagine a creator who wants to build relationships with fans via mail subscriptions; that is difficult to achieve with a Web 3.0 solution. In a typical social network, PUSH/PULL channels like mail subscriptions are crucial to helping creators build healthy, long-term communities.

Transmitting Layer

In Web 3.0 tech stacks, anonymous privacy is the biggest feature of communication technology. Imagine if all your transmitted data were sent in plain text. Your personal data would be easily exposed to danger via a simple man-in-the-middle attack.

So what are the privacy protection technologies currently on the market?

The Tor onion network is not secure: an attacker who deploys a considerable number of nodes at the entrances and exits of the network can identify users by analyzing traffic volume changes within a fixed time window.

A mix network is the only truly secure encrypted communication scheme, as an attacker never learns the sender's IP, the content, or the time it was sent. How is this achieved? A multi-layer encrypted data packet is not sent out immediately. Instead, it enters a waiting queue at each mix node, where it is batched with packets from a large number of other senders, randomly shuffled, and decrypted layer by layer before being forwarded.

This makes it difficult for attackers to trace backwards and conduct a bilateral correlation analysis.
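A toy sketch of the batching-and-shuffling step a single mix node performs; real mixnets add layered decryption, cover traffic, and randomized delays, none of which are modeled here:

```typescript
// Each packet waits in a queue; the node periodically flushes a shuffled
// batch, so output order and timing no longer reveal input order.
class ToyMixNode {
  private queue: Buffer[] = [];

  receive(packet: Buffer): void {
    // A real mix node would also strip one encryption layer here.
    this.queue.push(packet);
  }

  flushBatch(): Buffer[] {
    const batch = this.queue;
    this.queue = [];
    // Fisher-Yates shuffle to break input/output correlation.
    for (let i = batch.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [batch[i], batch[j]] = [batch[j], batch[i]];
    }
    return batch;
  }
}
```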

There is also the use of zero-knowledge proofs for concealing an IP on the market. In a traditional network transmission, the most basic link is the TCP three-way handshake: the requester's IP, Host, and other data accompany the handshake, and only when the three-way handshake succeeds can the link be established. Some teams try to modify TCP so that, instead of providing IP and Host information, the client submits a ZKP certificate to the target server to establish the communication link. Of course, the performance overhead must be addressed upon implementation.

What is the impact on creators of videos, games and 3D works?

Although the bandwidth of the global overall network transmission is increasing day by day, there are challenges transmitting the high volume of files created today. Many use cases require real-time performance, and traditional HTTP only supports half-duplex mode. For those who need to maintain a long connection status in a real-time environment, there is a technology: WebRTC.

In January 2021, WebRTC was announced as an official standard of the W3C and IETF. According to a report by Grand View Research, the WebRTC market could surpass 21 billion dollars by 2025, a compound annual growth rate of 43.6% over five years, compared to some 2.3 billion dollars in 2019. It functions as follows:

1. Both sides turn on their local cameras via the getUserMedia call;

2. Each peer sends a request to the signaling server to join the room;

3. Peer A creates an Offer SDP object via PeerConnection, saves it with SetLocalDescription, and sends it to Peer B through the signaling server; after receiving it, Peer B creates an Answer SDP object, saves it with SetLocalDescription via PeerConnection, and sends it back to Peer A through the signaling server;

4. During the offer/answer exchange of SDP messages, Peer A and Peer B collect Candidate data (i.e. the local IP address, the public IP address, and the address allocated by a relay server) for the audio and video channels set up by the SDP messages;

5. Peer A sends its Candidate data to Peer B via the signaling server as soon as it is gathered, and vice versa.

In this way, Peer A and Peer B have exchanged media information and network information with each other. If they can reach an agreement (find the intersection), they can start communication.
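A condensed sketch of this flow from the caller's side, using the standard browser WebRTC APIs; sendToSignalingServer and onSignalingMessage are hypothetical helpers standing in for whatever signaling channel the application uses:

```typescript
// Caller-side sketch of the WebRTC offer/answer and ICE candidate exchange.
declare function sendToSignalingServer(msg: unknown): void;
declare function onSignalingMessage(handler: (msg: any) => void): void;

async function startCall(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Candidates (local IP, public IP, relay address) are gathered and relayed.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToSignalingServer({ type: "candidate", candidate: event.candidate });
  };

  // Create the Offer SDP, store it locally, and send it to the peer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer({ type: "offer", sdp: offer });

  // Apply the Answer SDP and remote candidates as they arrive.
  onSignalingMessage(async (msg) => {
    if (msg.type === "answer") await pc.setRemoteDescription(msg.sdp);
    if (msg.type === "candidate") await pc.addIceCandidate(msg.candidate);
  });
}
```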

WebRTC itself is a P2P technology, but each user's local IP is not exposed on the public network. A NAT traversal service is required to discover a reachable address, so in practice there is a relay layer, which is centralized. Complete decentralization has its own challenges: matching across multiple NAT traversal services adds latency. Even so, some projects have successfully launched MMOGs by modifying LibP2P.

From another perspective, a file could be compressed with an optimized compression algorithm, which ensures a Hi-Fi outcome, in order to minimize the size of the file being transmitted, thereby relieving the pressure of bandwidth.

Addressing Layer

A node must be associated with an IP address bound to a machine, regardless of which chain it runs on. Nodes communicate peer-to-peer via IP addresses. However, a creator's content stored on a node cannot be retrieved by addressing the IP alone; the Domain Name System (DNS) is the answer. DNS is a distributed database mapping domain names to IP addresses, enabling a more convenient way of surfing the Internet. DNS primarily uses UDP port 53. Each label of a domain name is limited to 63 characters, and the full domain name to 253 characters.

People can better understand and memorize using semantic methods. Based on this basic understanding of DNS, what opportunities lie within Web 3.0 tech stacks?

Layout of Name Space: Domains are organized hierarchically; the domain at the top is called the Top Level Domain (TLD), followed by the Second Level Domain, and so forth.

Distribution and management of domain names: Domain names are managed by the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization in charge of domain name management, the allocation of IP addresses, protocol parameter configuration, and maintenance of the root server system. ICANN has allocated a default TLD for different countries and regions. For example, .uk for the UK, .fr for France, .jp for Japan, etc. China has the TLD .cn under the management of CNNIC.

Resource Files: The Domain Name system uses a hierarchical name space. The large mapping table is split into smaller tables distributed across the internet; these tables are the resource files.

Parsing: Resolution operates on the domain name; it takes the query and returns the corresponding IP address of the server.
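For intuition, the resolution step can be exercised directly with Node's built-in resolver; this queries ordinary ICANN-rooted DNS, not any decentralized naming system:

```typescript
import { promises as dns } from "node:dns";

// Resolve a human-readable name to the IPv4 addresses recorded in DNS.
async function lookupAddresses(name: string): Promise<string[]> {
  return dns.resolve4(name); // walks the DNS hierarchy for A records
}

lookupAddresses("example.com").then((ips) => console.log(ips));
```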

This is why a .eth domain name cannot be resolved directly. "ETH" is Ethiopia's ISO 3166-1 alpha-3 country code, which ICANN keeps reserved, and that cannot be changed. Furthermore, it remains to be seen whether the resolution layer will ever be able to handle .eth.

The most common solution for implementing decentralized DNS is to apply for a TLD, inheriting the current DNS database, and thus acquiring the ability to communicate with other TLDs.

Due to decades of development, the current DNS structure will stay for years to come. If a new standard is established, it must be reformed from the bottom-up.

Presentation layer

The definition of the so-called presentation layer is nothing more than providing users with an interactive interface, whether it is 2D or 3D. The presentation of the content is different depending on the file type, and the interactive interface will also differentiate accordingly.

Texts:

Texts can be categorized by length and arrangement. HTML websites are more flexible than apps, which makes following hyperlinks in HTML nearly instantaneous.

The most frequently used Web 3.0 applications interact with wallet plug-ins installed in browsers. iOS Safari also supports extensions, which can only be installed via the Apple App Store. Apple maintains rigorous control over in-app purchases, whereas independent wallets with open APIs face less strict restrictions.

One thing worth mentioning is that some web apps are cached and installed locally, so there is no need to reload network requests, and the startup experience is even better. The best option in web technology for this is the PWA: website content is cached locally, and the next time it starts, it loads faster.

How are HTML page files loaded and rendered?

Let’s use Firefox as an example. Firefox is able to support access to websites via file hashes. With the emergence of new Web 3.0 websites, these mainstream browsers will need to be made compatible.

It is foreseeable that native API use cases will be needed as Web 3.0 applications expand. As we have seen in the past, the open source revolution influences market share by first capturing the minds of the general public.

With the support of HTML and browsers, will traditional text products, such as IM, blogs, microblogs, and article websites, or even collaborative text products, migrate to the Web 3.0 environment? It all comes down to how adoption spreads across the social network. For example, in the email era, users saw people around them using IM software, so they began to use it too. It is meaningless to resist new things merely because the current options work "just fine." Human beings are inherently social animals forged by their surroundings.

What remains vital for creators of text content in the Web 3.0 era is to build a core group audience and utilize the power of social networks.

Images:

Compared to text-based creative content, images not only carry more elements on a visual level, they also carry more dimensions: time, place, environment, and background. As the saying goes, a picture is worth a thousand words. Light reflected from an image is converted into nerve signals, processed by the nervous system, and received by the cerebral cortex; this feedback loop is much shorter than the one involved in reading text.

Existing technology, specifically HTML, has been sufficient for image processing, whether an image is adapted for different channels or altered. In other words, it would be superfluous for Web 3.0 tech stacks to reinvent the wheel in this area.

NFT images and artwork have entered the mainstream in recent years. However, people can still right-click to save an image to their local drive. This is only a small problem, which can be mitigated by disabling image downloads or right-click saving in the page's HTML and JavaScript. For screenshots, interpolation (obfuscation) processing can be triggered by capturing the screenshot key combination with a JavaScript listener.
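A hedged sketch of such client-side deterrents is below; note that these only add friction for casual copying and cannot truly prevent OS-level screenshots, and the key combinations and blurring reaction are illustrative assumptions:

```typescript
// Deter casual saving: block the context menu on images and react to common
// screenshot key combinations. This is friction, not real protection.
document.addEventListener("contextmenu", (event) => {
  if ((event.target as HTMLElement).tagName === "IMG") {
    event.preventDefault(); // disables "Save image as..."
  }
});

document.addEventListener("keydown", (event) => {
  const printScreen = event.key === "PrintScreen";
  const macCapture = event.metaKey && event.shiftKey && ["3", "4"].includes(event.key);
  if (printScreen || macCapture) {
    // Illustrative reaction: temporarily blur protected images.
    document
      .querySelectorAll<HTMLImageElement>("img.protected")
      .forEach((img) => (img.style.filter = "blur(12px)"));
  }
});
```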

What is the potential for digital images in the Web 3.0 era?

When it comes to the assetization of images, uniqueness and scarcity are the foundation. The most sought-after NFT images, such as algorithmically generated art and PFP collections, were born from these characteristics.

The algorithm logic being implemented is as follows:

1. A basic implementation of the fractal algorithm. Fractal geometry is a well-known branch of modern mathematics; it essentially lays out a new way of seeing the world. It converges with and supports the chaos theory of dynamical systems. It recognizes that, under certain circumstances, parts of the world resemble the whole in particular ways (i.e. formation, structure, information, function, time, energy, etc.), and it points out that changes in spatial dimension can be either discrete or continuous. A broader world view was thus introduced.

2. Configure image elements and insert them into an array. When a user mints, a random function extracts parameters from the array to generate a random picture.

3. Reproductive algorithm. It takes two sets of pictures, or two element arrays, as the input source, and a genetic algorithm is used to cross the two sets. The output space is like Borges' infinite library; it is refined by selection across millions of generations of the genetic algorithm. Just like pigeon racing, where ace pigeons worth millions of dollars are bred through continuous breeding and selection.

4. Community collaboration. Different members in the community handle different parts of an image, so the group effort is maximized.

5. Layer synthesis algorithm. Images are composed from layers, as in Photoshop; HTML can achieve the same effect with CSS by randomly combining layers (see the sketch after this list).

6. Use NLP to recognize natural language and generate images from it using a GAN network. This may shed light on the beauty of mathematics.
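A minimal sketch of the layer synthesis idea from step 5: each trait category holds a set of candidate layers, one layer per category is chosen at random, and the chosen layers are stacked with CSS or a canvas. The trait names and file names are invented for illustration:

```typescript
// Hypothetical trait layers; each entry points to a transparent PNG.
const layers: Record<string, string[]> = {
  background: ["bg-blue.png", "bg-gold.png", "bg-grid.png"],
  body:       ["body-robot.png", "body-ape.png"],
  eyes:       ["eyes-laser.png", "eyes-sleepy.png", "eyes-3d.png"],
  hat:        ["hat-none.png", "hat-crown.png"],
};

const pick = <T>(items: T[]): T =>
  items[Math.floor(Math.random() * items.length)];

// Randomly combine one layer per category; the result can be stacked with
// absolutely positioned <img> tags in CSS or drawn onto a canvas in order.
function synthesize(): string[] {
  return Object.values(layers).map(pick);
}

console.log(synthesize()); // e.g. ["bg-gold.png", "body-ape.png", ...]
```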

Audios/Videos

Many new products were created during the evolution of the internet. In music, there are music players/MP3s, music websites, music blogs, music media, RMix, audio streaming, etc. In video, there are video players, video resource websites, long- and short-video websites, video media, interactive video websites, special effects plug-ins, video streaming, etc.

What are unique use cases for Web 3.0 stacks?

As decentralized file technology matures, more decentralized audio and video applications will arise. Meanwhile, NFT technology provides ownership for various forms of audios and videos. Owner and collection attributes will be extended to online display and auction platforms, satisfying the creators’ ego. The more the content gets exposure, the more valuable it will become.

Most platforms have their own creation tools to simplify the user experience. For example, a tone-deaf person can sing on pitch with an audio card. An online video editor can make it easy to achieve blockbuster effects in just a few steps. With the variety of simple and useful tools, everyone could potentially be a creator as long as they have an idea. Create+DeFi thus could eventually be a reality.

Games

Games are categorized by how players participate. Let’s categorize games based on the following:

From the chart, we can see that games are categorized by the roles players take. This is the Bartle taxonomy (Richard Bartle, 1996), which categorizes gamers based on their preferred style of action in a game.

Achievers: They prefer to increase the number of tokens, levels, equipment, and other parameters they deem measures of success.

Explorers: They enjoy discovery and immerse themselves in the game world.

Social gamers: They find amusement by interacting with other players and NPCs.

Killers: Also known as ACE, they are the ones going for absolute victory.

Most players are defined by more than one category and are a combination of two or more dimensions. For example, World of Warcraft is an MMORPG game. War of Legions satisfies social gamers as well as achievers, as they want to level up and obtain in-game items.

The descriptions above illustrate how the traditional internet categorizes games in terms of content. So how do we redefine those categories in Web 3.0? Another axis must be laid out: DeFi.

The goal of the play-to-earn concept is to unify DeFi with games, capitalizing on every asset possible within the game. The life cycle of a thoughtfully designed online MMORPG could exceed 20 years. However, with the introduction of DeFi, assets face 24/7 turnover, which largely shortens the life cycle of the game.

With the DeFi axis added, we can explore more users across more dimensions. New use cases don't suddenly appear; they build on the existing foundation.

The barriers to entry in gaming are much higher than in video and audio. However, online editors, like the Warcraft map editor, Super Mario Maker, and Minecraft, lower the barrier to becoming a game creator through the UGC model. A steady stream of game content would follow, laying the foundation for future platforms where everyone could publish P2E games.

The dual acceleration model of GameFi + CreateFi: CreateFi shortens the time it takes to list new assets, whereas GameFi speeds up asset turnover and the discovery of new assets. One addresses the supply end, the other the consumer end. The cycle is so short that funds on the platform keep circulating.

Vox (3 Dimensional Model)

VOX is a platform where creators can produce virtual skins and props with 3D model files from Cryptovoxels.

Structured Business Data

Structured business data is processed data with a logical structure. For example, on Dune, a data analytics platform, developers can share data analysis scripts publicly. Of course, there are strict privacy protection laws to contend with, so the riskiest factor is the legal compliance of the data source.

Right now, multi-person online data platforms such as Airtable and Tencent Docs have a variety of use cases, especially for multi-person collaboration, such as serving as data bulletin boards, organizing social mutual assistance, etc. From this point of view, combined with Web 3.0's DAO-style collaboration, data products can produce even more use cases.

Conversion of Equity to NFT

In this section, more efficient revenue streams for creators will be discussed.

In traditional revenue streams, creators receive revenue from ad clicks, sales driven by articles, and reader subscriptions.

Clicks from Ads: The focus is on monetizing content consumers' attention. Advertisers can make smart investments by collecting users' cookies and analyzing their browsing history so they can target those who might be interested in a specific topic or category. However, this option becomes unavailable in Web 3.0 because of 1) decentralized settings, 2) privacy protection, and 3) the enforcement of privacy protection laws. Previously, viewers would click ads specifically targeted toward them and make a purchase without knowing who the creators were, thus finishing the life cycle of the product.

Sales Driven by Articles: It all comes from the notion that platforms use various forms of content to attract traffic that could potentially be converted to sales. This exploration started with microblogs, long articles, etc. and is peaking with short videos. These types of platforms require a strong business development team, so it would be difficult for Web 3.0 teams to operate like these platforms, as they are often small and limited to only their expertise.

From this perspective, there are many challenges in copying the old model. An overall cross comparison with all products in the market indicates that OpenSea seems to be a better option for creators. By converting creative works to NFTs, things that are not quantifiable can be standardized.

Based on the table above, you can see a strong correlation between consumers and the rights and interests attached to each information product. In traditional internet applications, each corresponding relationship is independent. NFTs turn all of these rights and interests into exchangeable value, which leads to financial value, assuming there is some value in the underlying assets.

For example, classes from an educational institute could bring some value to students, i.e. landing a better job. The value created by the classes is the underlying value, assuming that the more classes taken, the better the job one could get. Imagine that the institute issues a limited quantity of membership cards that vary in the number of classes available: the more expensive the card, the more classes are available. If the membership card can be transferred to another person, which creates room for exchange, then the card would likely be sold at a premium, as there will always be people dissatisfied with their current jobs. (A fictional story, not a real-life example.)

An unlimited version can be released on a creator's site with the purchase of a subscription. When the subscription is converted to an NFT, it can be exchanged, mortgaged, and used in various financial derivatives.

With this type of revenue stream, exchange fees could be applied in order to complete the business cycle. The recruitment of creators could be operated from the bottom up through the MCN DAO and community.

The fragmentation of NFTs is attributed to the lack of liquidity caused by the overvaluation of assets, which blocks the general public from participating.

This financial splitting method increases fund turnover indirectly. As discussed in the previous section, the reality of NFT fragmentation cannot be neglected.

The solution is only possible when the best parts are extracted from the entire content piece.

Fundraising by DAO

For open source contributors in certain programming languages, there are many dilemmas. For instance, the Rust development team is sponsored by Amazon; through such sponsorship, Amazon increases its dominance in the space and takes control away from the core developers, ten of whom left the project after ten years due to severe financial losses.

Traditional open source, non-profit creators have to deal with the oversight of their sponsors or employers in order to maintain the functioning of the project.

It is valuable to explore the autonomy of project management as well as the sustainability of operation funding under the DAO approach.

Github is a typical online collaborative product. During the COVID-19 pandemic, more online collaborative scenarios emerged. Online books can be edited by multiple authors, online workboards are created to support group collaboration, and all kinds of online products are exploiting group intelligence in the form of a DAO.

The slime mold maze experiment is often used to illustrate this idea. Because it has no human intelligence, a single slime mold does not have the ability to locate the exit with cheese. However, a group of slime molds is able to do so. This is because the slime mold keeps dividing into multiple units, leaving traceable chemicals wherever they go. These units keep repeating all the routes until they find the best possible outcome, which is the exit with cheese in this case.

Many man-made miracles in history were also created by group intelligence in the form of a DAO, which improves the efficiency of real-time collaboration among human beings.

Perpetual Autonomous Dividends

Regarding the automatic distribution of royalty dividends, Ethereum smart contracts cannot trigger code execution on their own when certain conditions are met. Off-chain middleware is required: the execution strategy code is submitted to the middleware, and the dividend logic is executed automatically once the conditions are met.
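A rough sketch of such middleware using ethers.js follows; the contract address, ABI, and the revenueAccrued/distribute functions are hypothetical placeholders for whatever dividend contract a project actually deploys:

```typescript
import { ethers } from "ethers";

// Hypothetical dividend contract interface.
const ABI = [
  "function revenueAccrued() view returns (uint256)",
  "function distribute()",
];

const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.org");
const wallet = new ethers.Wallet(process.env.KEEPER_KEY as string, provider);
// Placeholder address for the hypothetical royalty contract.
const royalty = new ethers.Contract(
  "0x0000000000000000000000000000000000000001",
  ABI,
  wallet
);

// Off-chain keeper: poll the condition and trigger the on-chain payout,
// since the contract cannot wake itself up when the threshold is crossed.
const THRESHOLD = ethers.utils.parseEther("1.0");

setInterval(async () => {
  const accrued: ethers.BigNumber = await royalty.revenueAccrued();
  if (accrued.gte(THRESHOLD)) {
    const tx = await royalty.distribute();
    await tx.wait();
  }
}, 60_000);
```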

Traffic by Dividing

Traffic by dividing, or fission-style growth, is the practice of using social relationships to attract traffic without the power of a recommendation algorithm.

So far, Web 3.0 tech stacks suffer from a confined infrastructure for SNS. People communicate in software like Telegram and Discord, which are still isolated from each other as they operate using their own databases.

But with a publicly constructed social network database, enhanced by the right incentive mechanisms, the fission effect could be astonishing.


Written by HTX Ventures

Focus on HTX's venture investment portfolio and supporting innovative blockchain projects through long-term strategies. Twitter: @Ventures_HTX