UtilityNet White Paper

UtilityNet: High-Performance Distributed Intelligent Computing Network

UtilityNet aggregates scattered physical AI computing resources into a single large computing pool and executes computing tasks according to different development requirements. For developers, operating a distributed AI cluster becomes as simple as using a single computer.
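
To make that developer experience concrete, the sketch below shows one way a client could hand a job to such a pool. The ComputeClient and Task names, the gateway URL, and the submit method are illustrative assumptions, not a published UtilityNet SDK.

```python
# Minimal sketch of how a developer might hand a job to the pooled network.
# ComputeClient, Task, and the gateway URL are illustrative assumptions,
# not a published UtilityNet SDK.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    image: str                      # container image holding the workload
    command: list                   # entrypoint to run inside the image
    env: dict = field(default_factory=dict)


class ComputeClient:
    """Hypothetical front end to the aggregated computing pool."""

    def __init__(self, gateway_url: str):
        self.gateway_url = gateway_url

    def submit(self, task: Task) -> str:
        # A real client would POST the task to a scheduler gateway and
        # return a task id; here we only show the calling shape.
        print(f"submitting {task.name} to {self.gateway_url}")
        return "task-0001"


# Usage: the developer treats the whole cluster like one machine.
client = ComputeClient("https://pool.example.org")
task_id = client.submit(Task(
    name="resnet-training",
    image="pytorch/pytorch:latest",
    command=["python", "train.py"],
))
```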

Supported tasks include graphics processing, machine learning, data analysis and mining, and the latest deep learning techniques. The deep learning module provides performance optimization schemes at the data, algorithm, and model levels, offers fine-grained scheduling and deployment, and can allocate specific AI computing resources such as TPU, GPU, CPU, and memory to a single node on an exclusive or shared basis, so that developers can focus on their own domains and develop efficiently.
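
The fine-grained allocation described above can be pictured as a per-node resource request. The sketch below assumes a JSON-like request schema; the field names (node, exclusive, tpu, cpu_cores, memory_gb) are invented for illustration and do not come from UtilityNet documentation.

```python
# Illustrative per-node resource request for the deep learning module.
# Field names are assumed for this example; they are not taken from a
# published UtilityNet specification.
resource_request = {
    "node": "worker-17",           # pin the job to a single node
    "exclusive": True,             # reserve the listed devices for this job only
    "resources": {
        "tpu": 1,                  # one dedicated TPU
        "gpu": 0,
        "cpu_cores": 8,
        "memory_gb": 32,
    },
    # data-, algorithm-, and model-level optimization knobs
    "optimizations": ["mixed_precision", "operator_fusion"],
}


def validate(request: dict) -> None:
    """Basic sanity checks a scheduler might run before placement."""
    res = request["resources"]
    if res["cpu_cores"] <= 0 or res["memory_gb"] <= 0:
        raise ValueError("CPU and memory must be positive")
    if res["tpu"] < 0 or res["gpu"] < 0:
        raise ValueError("device counts cannot be negative")


validate(resource_request)
```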

Objectives for the first stage of development tests:

• Support for the x86 cluster architecture, multi-node computing, and networking

• Allocation, recovery, and task scheduling of computing resources at the 100,000-client scale (a toy allocation sketch follows this list)

• Common deep learning training and inference frameworks usable on the client side

• Fine-grained allocation of a single TPU's resources

• NVMe + RDMA high-bandwidth, low-latency hybrid flash storage

• AI computing costs 2-4 times lower than commercial prices
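
The allocation and recovery objective can be illustrated with a toy in-memory allocator that grants capacity to clients and reclaims it when they leave. This is only a simplified sketch of the idea, not the CFN distributed computing scheduling protocol described in the technical architecture chapter.

```python
# Toy allocator showing allocation and recovery of pooled TPU capacity.
# A simplified illustration only, not the CFN scheduling protocol.
from __future__ import annotations

from collections import defaultdict


class PoolAllocator:
    def __init__(self, nodes: dict[str, int]):
        self.free = dict(nodes)            # node id -> free TPU count
        self.leases = defaultdict(list)    # client id -> [(node, tpus), ...]

    def allocate(self, client: str, tpus: int) -> str | None:
        """Grant `tpus` on the first node with enough free capacity."""
        for node, free in self.free.items():
            if free >= tpus:
                self.free[node] -= tpus
                self.leases[client].append((node, tpus))
                return node
        return None                        # a real scheduler would queue the request

    def release(self, client: str) -> None:
        """Recover everything the client holds when it disconnects."""
        for node, tpus in self.leases.pop(client, []):
            self.free[node] += tpus


pool = PoolAllocator({"node-a": 8, "node-b": 8})
assert pool.allocate("client-1", tpus=4) == "node-a"
assert pool.allocate("client-2", tpus=6) == "node-b"
pool.release("client-1")
assert pool.free["node-a"] == 8
```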

At the software level, UtilityNet integrates common deep learning frameworks such as Caffe, Darknet, MXNet, ONNX, PyTorch, PaddlePaddle, and TensorFlow, together with common data sets such as MNIST, MS-COCO, and ImageNet, to provide developers with complete software and hardware solutions.
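
As an illustration of the kind of workload a developer would bring to the network, the short script below trains a small classifier with PyTorch on MNIST, two of the frameworks and data sets listed above. It uses only standard PyTorch and torchvision APIs and assumes nothing about UtilityNet-specific interfaces.

```python
# A typical developer workload: a small MNIST classifier in PyTorch.
# Standard PyTorch/torchvision only; no UtilityNet-specific APIs assumed.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

train_data = datasets.MNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
loader = DataLoader(train_data, batch_size=128, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):                       # short run for illustration
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        loss = loss_fn(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```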
