Web3-AI Sector Overview: Technical Logic, Application Scenarios, and In-Depth Analysis of Top Projects

As AI narratives continue to heat up, this track is drawing growing attention. This article provides an in-depth analysis of the technical logic, application scenarios, and representative projects in the Web3-AI track, presenting a comprehensive view of the field and its development trends.

1. Web3-AI: Analysis of Technical Logic and Emerging Market Opportunities

1.1 The Integration Logic of Web3 and AI: How to Define the Web3-AI Track

Over the past year, AI narratives have been exceptionally popular in the Web3 industry, with AI projects springing up in large numbers. Although many projects involve AI technology, some use AI only in certain parts of their products, and their token economics have no substantive connection to the AI product itself. Such projects are therefore excluded from this article's discussion of Web3-AI.

This article focuses on projects that use blockchain to solve production-relationship issues and AI to solve productivity problems: projects that provide AI products while also using Web3 economic models as the tool for production relationships, so the two complement each other. We categorize such projects as the Web3-AI track. To help readers understand this track, the article elaborates on the AI development process and its challenges, and on how the combination of Web3 and AI addresses those problems and creates new application scenarios.

1.2 The Development Process and Challenges of AI: From Data Collection to Model Inference

AI is the technology that enables computers to simulate, extend, and enhance human intelligence, allowing them to perform complex tasks ranging from language translation and image classification to facial recognition and autonomous driving. AI is changing the way we live and work.

Developing an AI model typically involves the following key steps: data collection and preprocessing, model selection and tuning, and model training and inference. As a simple example, to develop a model that classifies images of cats and dogs, you would:

  1. Data collection and preprocessing: Collect an image dataset containing cats and dogs, either from public datasets or by gathering the data yourself. Label each image with its category (cat or dog), ensuring the labels are accurate. Convert the images into a format the model can read, and split the dataset into training, validation, and test sets.

  2. Model selection and tuning: Choose an appropriate model, such as a Convolutional Neural Network (CNN), which is well suited to image classification. Tune the model's parameters or architecture according to requirements; generally, network depth can be adjusted to match the complexity of the task. For this simple classification example, a shallow network may be sufficient.

  3. Model training: Train the model on GPUs, TPUs, or high-performance computing clusters; training time depends on the complexity of the model and the available computing power.

  4. Model inference: The trained model is saved as a weights file. Inference is the process of using the trained model to predict or classify new data. At this stage, a test set or fresh data is used to evaluate the model's performance, typically with metrics such as accuracy, recall, and F1-score.
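To make the evaluation step concrete, the sketch below computes the metrics mentioned above (accuracy, recall, F1-score) for the cat/dog example. This is an illustrative, dependency-free implementation; the label arrays are invented for demonstration, and in practice a library such as scikit-learn would typically be used.

```python
# Illustrative metric computation for the cat/dog classification example.
# Convention assumed here: 1 = cat (positive class), 0 = dog.

def evaluate(y_true, y_pred):
    """Return accuracy, recall, and F1-score for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, recall, f1

# Hypothetical test-set labels vs. model predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, rec, f1 = evaluate(y_true, y_pred)
```

A model that looks strong on accuracy alone can still have poor recall on an imbalanced dataset, which is why several metrics are reported together.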

However, the centralized AI development process has some issues in the following scenarios:

User privacy: In centralized scenarios, the AI development process is often opaque; user data may be collected and used for AI training without users' knowledge.

Data source acquisition: Small teams or individuals may face restrictions on non-open-source data when acquiring data in specific fields (such as medical data).

Model selection and tuning: It is difficult for small teams to obtain domain-specific model resources, and model tuning can be very costly.

Acquiring Computing Power: For individual developers and small teams, the high costs of purchasing GPUs and renting cloud computing power can pose a significant economic burden.

AI Asset Income: Data annotation workers often struggle to earn an income that matches their contributions, and the research results of AI developers are also difficult to match with buyers in need.

The challenges existing in centralized AI scenarios can be addressed by combining with Web3. As a new type of production relationship, Web3 is naturally suited to represent the new productive forces of AI, thereby promoting the simultaneous advancement of technology and production capabilities.

1.3 The Synergistic Effects of Web3 and AI: Role Transformation and Innovative Applications

The combination of Web3 and AI can enhance user sovereignty, providing users with an open AI collaboration platform that allows them to transition from being AI users in the Web2 era to participants, creating AI that can be owned by everyone. At the same time, the integration of the Web3 world and AI technology can spark more innovative application scenarios and gameplay.

Based on Web3 technology, the development and application of AI will usher in a brand new collaborative economic system. People's data privacy can be guaranteed, and the data crowdfunding model promotes the advancement of AI models. Numerous open-source AI resources are available for users, and shared computing power can be acquired at a lower cost. With the help of a decentralized collaborative crowdfunding mechanism and an open AI market, a fair income distribution system can be realized, thereby encouraging more people to drive the progress of AI technology.

In the Web3 scenario, AI can have a positive impact across multiple tracks. For example, AI models can be integrated into smart contracts to enhance work efficiency in various application scenarios, such as market analysis, security detection, social clustering, and many other functions. Generative AI not only allows users to experience the role of an "artist," such as using AI technology to create their own NFTs, but also creates rich and diverse game scenarios and interesting interactive experiences in GameFi. The rich infrastructure provides a smooth development experience, allowing both AI experts and newcomers wanting to enter the AI field to find suitable entry points in this world.

2. Interpretation of the Web3-AI Ecological Project Landscape and Architecture

We studied 41 projects in the Web3-AI track and categorized them into different tiers. The classification logic for each tier is shown in the figure below: the infrastructure layer, the middle layer, and the application layer, each further divided into segments. In the next chapter, we conduct an in-depth analysis of representative projects.

The infrastructure layer encompasses the computing resources and technological architecture that support the entire AI lifecycle, the middle layer includes data management, model development, and validation reasoning services that connect the infrastructure to applications, while the application layer focuses on various applications and solutions directly aimed at users.

![Web3-AI Track Panorama Report: Technical Logic, Scene Applications and Top Projects In-Depth Analysis](https://img-cdn.gateio.im/webp-social/moments-ad1811191c5ea0fa48c4b3287f37eaf6.webp)

Infrastructure Layer:

The infrastructure layer is the foundation of the AI lifecycle. This article classifies computing power, AI Chain, and development platforms as part of the infrastructure layer. It is the support of these infrastructures that enables the training and inference of AI models, presenting powerful and practical AI applications to users.

  • Decentralized computing networks: These provide distributed computing power for AI model training, ensuring efficient and economical use of computing resources. Some projects offer decentralized computing-power markets where users can rent computing power at low cost or share their own for profit, with representative projects like IO.NET and Hyperbolic. Other projects have derived new gameplay, such as Compute Labs, which proposes a tokenization protocol in which users purchase NFTs representing physical GPUs and thereby participate in computing-power leasing in different ways.

  • AI Chain: Utilizing blockchain as the foundation for the AI lifecycle, achieving seamless interaction between on-chain and off-chain AI resources, and promoting the development of the industry ecosystem. The decentralized AI market on the chain can trade AI assets such as data, models, agents, etc., and provide AI development frameworks and supporting development tools, represented by projects like Sahara AI. AI Chain can also facilitate technological advancements in AI across different fields, such as Bittensor promoting competition among different types of AI subnets through an innovative subnet incentive mechanism.

  • Development Platforms: Some projects offer AI agent development platforms that also facilitate the trading of AI agents, such as Fetch.ai and ChainML. One-stop tools help developers more conveniently create, train, and deploy AI models, with representative projects like Nimble. These infrastructures promote the widespread application of AI technology in the Web3 ecosystem.

Middle Layer:

This layer involves AI data, models, and inference and verification. Web3 technology can make each of these areas more efficient.

  • Data: The quality and quantity of data are key factors in a model's training effectiveness. In the Web3 world, crowdsourced data and collaborative data processing can optimize resource utilization and reduce data costs. Users retain autonomy over their data and can sell it under privacy protection, avoiding having it harvested and monetized by unscrupulous businesses. For data buyers, these platforms offer a wide range of choices at very low cost. Representative projects include Grass, which uses user bandwidth to scrape web data, and xData, which collects media information through user-friendly plugins and supports users uploading tweet data.

In addition, some platforms allow domain experts or ordinary users to perform data preprocessing tasks, such as image labeling and data classification. These tasks may require specialized knowledge for financial and legal data processing, and users can tokenize their skills to achieve collaborative crowdsourcing of data preprocessing. Examples include AI markets like Sahara AI, which feature data tasks across different domains and can cover multi-domain data scenarios; while AIT Protocol labels data through human-machine collaboration.

  • Models: As outlined in the AI development process above, different requirements need to be matched with suitable models. Common models for image tasks include CNNs and GANs, while the YOLO series can be chosen for object detection. Common models for text tasks include RNNs and Transformers, as well as specialized or general-purpose large models. Tasks of varying complexity require models of different depth, and model tuning is sometimes necessary.

Some projects support users in providing different types of models or collaboratively training models through crowdsourcing. For example, Sentient allows users to place trusted model data in the storage layer and distribution layer for model optimization through modular design. The development tools provided by Sahara AI are equipped with advanced AI algorithms and computing frameworks, and they have the capability for collaborative training.
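The task-to-model matching described above can be sketched as a simple lookup. This is a toy illustration only: real model selection also weighs data size, latency, and tuning budget, and the entries below are examples taken from the text, not an exhaustive or authoritative mapping.

```python
# Toy illustration of matching task types to candidate model families.
# The mapping reflects the examples in the text above; it is not exhaustive.

TASK_TO_MODELS = {
    "image_classification": ["CNN"],
    "image_generation": ["GAN"],
    "object_detection": ["YOLO"],
    "text_sequence": ["RNN", "Transformer"],
}

def suggest_models(task: str) -> list:
    """Return candidate model families for a task, or an empty list if unknown."""
    return TASK_TO_MODELS.get(task, [])

# Example: the cat/dog classifier from section 1.2 is an image-classification task.
candidates = suggest_models("image_classification")
```

In a real pipeline this lookup would be the starting point, followed by architecture tuning (e.g., adjusting network depth) against the validation set.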

  • Inference and Verification: After training, the model produces a weights file that can be used directly for classification, prediction, or other specific tasks; this process is called inference. Inference is usually accompanied by a verification mechanism to validate that the inference model comes from the correct source and that no malicious activity has occurred. In Web3, inference can typically be integrated into smart contracts, which invoke the model on demand. Common verification approaches include ZKML, OPML, and TEE. For example, ORA's on-chain AI oracle (OAO) introduced OPML as a verifiable layer for AI oracles, and the ORA website also describes research on opp/ai (ZKML combined with OPML).
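One small part of the verification problem above, checking that the weights used at inference time match a previously committed model, can be illustrated with a plain hash commitment. This is a deliberately simplified sketch and is not ZKML, OPML, or TEE (those verify the computation itself, not just the artifact); the byte strings and the idea of storing the digest on-chain are illustrative assumptions.

```python
# Minimal sketch: committing to a model's weights with a hash, then verifying
# at inference time that the served weights match the commitment.
# NOT a substitute for ZKML/OPML/TEE, which verify the inference computation.

import hashlib

def weights_fingerprint(weights_bytes: bytes) -> str:
    """Hash the serialized model weights; this digest could be stored on-chain."""
    return hashlib.sha256(weights_bytes).hexdigest()

def verify_before_inference(weights_bytes: bytes, committed_digest: str) -> bool:
    """Re-hash the weights at inference time and compare with the commitment."""
    return weights_fingerprint(weights_bytes) == committed_digest

# Publication time: the model owner commits the digest (hypothetical weights).
weights = b"\x00\x01fake-serialized-weights\x02"
committed = weights_fingerprint(weights)

# Inference time: anyone can check that the served model is the committed one.
ok = verify_before_inference(weights, committed)
tampered = verify_before_inference(weights + b"tampered", committed)
```

The hash check only proves artifact integrity; proving that an inference result was actually produced by those weights is what ZKML and OPML address.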

Application Layer:

This layer consists mainly of applications aimed directly at users, combining AI with Web3 to create more interesting and innovative gameplay. This article covers projects in several areas: AIGC (AI-generated content), AI agents, and data analysis.

  • AIGC: AIGC extends into NFTs, games, and other Web3 tracks. Users can generate text, images, and audio directly from prompts, and even create customized gameplay in games according to their preferences. NFT projects like NFPrompt let users generate NFTs with AI and trade them on the market; games like Sleepless let users shape a virtual companion's personality through dialogue to match their preferences.

  • AI Agent: Refers to an artificial intelligence system that can autonomously perform tasks and make decisions. AI agents typically possess capabilities for perception, reasoning, learning, and action, allowing them to execute complex tasks in various environments. Common AI agents include language translation, language learning, image-to-text conversion, etc. In Web3 scenarios, they can generate trading bots, create meme templates, conduct on-chain security checks, and more. MyShell, as an AI agent platform, offers various types of agents, including educational learning, virtual companions, trading agents, etc., and provides user-friendly agent development tools, enabling users to build their own agents without coding.

  • Data Analysis: By combining AI technology with databases from relevant fields, data analysis, judgment, and prediction can be achieved. In Web3, analysis of market data and smart-money movements can help users make investment decisions. Token price prediction is also a distinctive Web3 application scenario, with representative projects like Ocean.
