Product Advantages
Billing Model

Pay-by-compute-consumption model: costs are incurred only while compute tasks are running, with zero spend on idle computing resources

Flexibility

Flexible GPU resource allocation with on-demand scaling up or down

Toolset

Out-of-the-box toolkit, providing a complete development toolchain for LLMs and Agents

Pain Points
  • Limited Budgets

    Skyrocketing costs for intelligent computing servers, equipment, and maintenance have created a pressing need for pay-by-usage models.

  • Severe Resource Waste

    Throughout the lifecycle of LLMs, computing resource demands and usage are often intermittent. However, traditional bare-metal leasing models that charge by the month or year can result in significant waste of computing resources.

  • Inflexible Resources

    Computing resource demands fluctuate greatly across the LLM lifecycle. Users urgently need a flexible way to scale computing resources up or down in response to changing stage-specific needs.

  • High Technical Barriers

    Users require an elastic, easy-to-use cluster environment with a ready-to-use LLM training and fine-tuning toolchain on top of it, lowering the technical threshold for configuring, managing, and maintaining intelligent computing software and hardware infrastructure.
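The intermittent-usage argument above can be made concrete with a back-of-envelope cost comparison. All rates and the utilization figure below are illustrative assumptions, not DataCanvas pricing:

```python
# Back-of-envelope comparison of pay-per-use billing vs. fixed monthly leasing.
# The rates ($2,000/GPU-month, $3/GPU-hour) and 20% utilization are
# illustrative assumptions only, not actual vendor pricing.

HOURS_PER_MONTH = 730

def monthly_lease_cost(rate_per_gpu_month: float, gpus: int) -> float:
    """Fixed cost: paid whether or not the GPUs run any jobs."""
    return rate_per_gpu_month * gpus

def pay_per_use_cost(rate_per_gpu_hour: float, gpus: int, busy_hours: float) -> float:
    """Usage-based cost: billed only for hours with running compute tasks."""
    return rate_per_gpu_hour * gpus * busy_hours

# Example: 8 GPUs busy 20% of the month (an intermittent LLM workload).
busy = 0.20 * HOURS_PER_MONTH  # 146 busy hours
lease = monthly_lease_cost(2000.0, 8)
usage = pay_per_use_cost(3.0, 8, busy)
print(f"lease: ${lease:,.0f}, pay-per-use: ${usage:,.0f}")
# → lease: $16,000, pay-per-use: $3,504
```

Under these assumed numbers, idle capacity makes the fixed lease several times more expensive; the gap narrows as utilization approaches 100%.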

Application Scenarios

LoRA Model Fine-tuning

Fine-tuning a 13B-parameter open-source pre-trained model on 3B tokens was completed in under 17 hours using a 2xH-series GPU setup


Industry Model Fine-tuning

Fine-tuning an industry-scale 33B-parameter open-source pre-trained model on 4B tokens was completed in under 2 days using an 8xH-series GPU setup


Industry Model Fine-tuning

Fine-tuning an industry-scale 175B-parameter open-source pre-trained model on 7B tokens was completed in under 2 days using a 64xH-series GPU setup


Model Training

Training a 7B-parameter model from scratch on 1.8 trillion tokens was completed in approximately 10 days using a 128xH-series GPU setup
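The aggregate training throughput implied by the four scenarios above can be derived directly from the stated token counts and durations. Since each duration is an upper bound ("under 17 hours", "under 2 days"), the rates below are lower-bound estimates:

```python
# Lower-bound aggregate throughput (tokens/second) implied by the scenario
# figures above: 2 days = 48 h, 10 days = 240 h.

def tokens_per_second(tokens: float, hours: float) -> float:
    """Aggregate throughput across the whole GPU setup."""
    return tokens / (hours * 3600)

scenarios = {
    "13B LoRA fine-tune, 3B tokens, 2 GPUs, <17 h":        (3e9, 17),
    "33B fine-tune, 4B tokens, 8 GPUs, <48 h":             (4e9, 48),
    "175B fine-tune, 7B tokens, 64 GPUs, <48 h":           (7e9, 48),
    "7B pretraining, 1.8T tokens, 128 GPUs, ~240 h":       (1.8e12, 240),
}
for name, (tokens, hours) in scenarios.items():
    print(f"{name}: >~{tokens_per_second(tokens, hours):,.0f} tokens/s")
# e.g. the 13B LoRA scenario implies roughly 49,020 tokens/s in aggregate
```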


Product Introduction
[Brochure pages 5–8: Alaya NeW + Computing Power Package (English edition)]

Computing Power Package

Experience the power of DataCanvas Alaya NeW, which expertly manages and segments massive GPU computing resources into smaller, bite-sized units: "Computing Power Packages".

Get White Paper
Start Free Trial

The DataCanvas Computing Power Package precision-splits massive computing power into bite-sized chunks, perfectly suited to each user's unique requirements.

Our comprehensive large model toolchain empowers you to harness the full potential of computing, ushering in a new era of inclusive and accessible computing power for all.


Start Free Trial