Colossal-AI: Making large AI models cheaper, faster, and more accessible.
GPT-NeoX: An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
BMTrain: Efficient Training for Big Models.
torchtune: A Native-PyTorch Library for LLM Fine-tuning.
DeepSpeed: A deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
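For concreteness, here is a minimal sketch of a DeepSpeed training step. The toy model, random data, and config values are illustrative assumptions, not recommendations; in practice such a script is launched on one or more GPUs with the `deepspeed` launcher.

```python
# Minimal DeepSpeed sketch: toy model and random data stand in for a real
# transformer and dataloader; config values are illustrative assumptions.
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # stand-in for a real model

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}

# deepspeed.initialize wraps the model for distributed training and returns
# an engine that owns the optimizer, loss scaling, and gradient sync.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

inputs = torch.randn(8, 512, device=engine.device, dtype=torch.half)
targets = torch.randn(8, 512, device=engine.device, dtype=torch.half)

loss = torch.nn.functional.mse_loss(engine(inputs), targets)
engine.backward(loss)  # handles fp16 loss scaling and gradient averaging
engine.step()
```

Because the engine owns the optimizer and loss scaler, `engine.backward` and `engine.step` replace the usual `loss.backward()` / `optimizer.step()` calls of a plain PyTorch loop.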
Ongoing research training transformer models at scale.
Megatron-DeepSpeed: The DeepSpeed version of NVIDIA's Megatron-LM, adding support for features such as MoE model training, Curriculum Learning, 3D Parallelism, and others.
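As a concrete example of one of these features, the sketch below shows an illustrative DeepSpeed config fragment that enables sequence-length curriculum learning alongside ZeRO. The key names follow DeepSpeed's documented config schema, but every value here is an assumption rather than a tuned setting; 3D parallelism is configured separately through Megatron-LM's tensor- and pipeline-parallel command-line arguments.

```python
# Illustrative DeepSpeed config fragment (values are assumptions, not tuned
# recommendations). Curriculum learning here ramps the training sequence
# length from 64 up to 1024 tokens on a fixed linear schedule.
ds_config = {
    "train_batch_size": 256,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 1},
    "curriculum_learning": {
        "enabled": True,
        "curriculum_type": "seqlen",  # difficulty measured as sequence length
        "min_difficulty": 64,
        "max_difficulty": 1024,
        "schedule_type": "fixed_linear",
        "schedule_config": {
            "total_curriculum_step": 10000,
            "difficulty_step": 8,
        },
    },
}
```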