torchtune: a native-PyTorch library for LLM fine-tuning.
Transformer Engine: a library for accelerating Transformer model training on NVIDIA GPUs (see the FP8 sketch after this list).
Colossal-AI: making large AI models cheaper, faster, and more accessible.
DeepSpeed: a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (a minimal usage sketch also follows the list).
Megatron-LM: ongoing research on training transformer models at scale.
NeMo: a generative AI framework built for researchers and PyTorch developers working on large language models (LLMs), multimodal models (MMs), automatic speech recognition (ASR), text-to-speech (TTS), and computer vision (CV).
Megatron-DeepSpeed: the DeepSpeed version of NVIDIA's Megatron-LM, which adds support for features such as MoE model training, curriculum learning, and 3D parallelism.
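Transformer Engine's headline feature is FP8 execution on recent NVIDIA GPUs (Hopper and newer). The following is a minimal sketch adapted from the usage pattern in NVIDIA's documentation; the exact recipe arguments vary between library versions, so treat the `DelayedScaling` call as illustrative rather than authoritative:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# te.Linear is a drop-in replacement for torch.nn.Linear with FP8 support;
# TE modules allocate their parameters on the GPU.
model = te.Linear(768, 3072, bias=True)
inp = torch.randn(2048, 768, device="cuda")

# Delayed-scaling FP8 recipe; requires an FP8-capable GPU (Hopper or newer).
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

# Run the forward pass under FP8 autocasting.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

# Backward runs outside the autocast context, as in the reference example.
out.sum().backward()
```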
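DeepSpeed wraps an ordinary PyTorch model in an engine that manages the optimizer, mixed precision, and ZeRO partitioning, all driven by a declarative config. A minimal single-model sketch, assuming the standard `deepspeed.initialize` entry point; the toy MLP and the hyperparameters in `ds_config` are placeholders:

```python
import torch
import deepspeed

# Any ordinary PyTorch model works; this toy MLP is a placeholder.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# DeepSpeed is driven by a config dict (or JSON file): batch size,
# optimizer, precision, and ZeRO stage are all declared here.
ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state + gradients
}

# initialize() returns an engine that subsumes the optimizer step,
# gradient scaling, and data parallelism.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# With fp16 enabled, inputs are fed in half precision.
x = torch.randn(32, 784, device=model_engine.device, dtype=torch.half)
y = torch.randint(0, 10, (32,), device=model_engine.device)

loss = torch.nn.functional.cross_entropy(model_engine(x), y)
model_engine.backward(loss)  # engine-managed backward (loss scaling, ZeRO)
model_engine.step()          # engine-managed optimizer step
```

Launched with the `deepspeed` CLI (e.g. `deepspeed train.py`), the same script scales across multiple GPUs without code changes.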