Efficient Training for Big Models.
A Native-PyTorch Library for LLM Fine-tuning.
A generative AI framework built for researchers and PyTorch developers working in the Large Language Model (LLM), Multimodal Model (MM), Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and Computer Vision (CV) domains.
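As a minimal sketch of how such a framework is typically used from Python, the snippet below loads a pretrained NeMo ASR checkpoint and transcribes an audio file; the checkpoint name and audio path are illustrative assumptions, not part of the description above.

```python
# Minimal sketch: load a pretrained NeMo ASR model and transcribe one file.
# "stt_en_conformer_ctc_small" and "sample.wav" are placeholder choices.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_ctc_small")
transcripts = asr_model.transcribe(["sample.wav"])  # list of audio paths in, text out
print(transcripts[0])
```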
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
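A minimal sketch of what training with DeepSpeed's engine looks like: the toy model, batch size, and JSON config values below are illustrative assumptions, and the script is meant to be started with the `deepspeed` launcher on a CUDA machine.

```python
# Minimal sketch: wrap a small PyTorch model in the DeepSpeed engine.
# Launch with:  deepspeed this_script.py   (requires a CUDA GPU)
import torch
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 512),
)

# Illustrative config: mixed precision plus ZeRO stage-2 sharding of
# optimizer state and gradients.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}

# The returned engine owns the optimizer and handles mixed precision,
# gradient accumulation, and ZeRO partitioning.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for step in range(10):
    x = torch.randn(8, 512, device=model_engine.device, dtype=torch.half)
    loss = model_engine(x).float().pow(2).mean()
    model_engine.backward(loss)  # engine-managed backward (loss scaling, etc.)
    model_engine.step()          # optimizer step + gradient zeroing
```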
An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
A native PyTorch library for large model training.
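As an illustration of what native PyTorch large model training involves, the sketch below shards an assumed toy Transformer with PyTorch's built-in FSDP wrapper; it is a generic example of one building block such a library composes, not the library's own API, and is meant to be launched with `torchrun` so the rank environment variables are set.

```python
# Minimal sketch: shard a toy model with native PyTorch FSDP.
# Launch with:  torchrun --nproc_per_node=<num_gpus> this_script.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=4,
).cuda()

# FSDP shards parameters, gradients, and optimizer state across ranks.
model = FSDP(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

x = torch.randn(4, 128, 512, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()

dist.destroy_process_group()
```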
A library for accelerating Transformer model training on NVIDIA GPUs.
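A minimal sketch of accelerating a single layer with Transformer Engine's FP8 autocast on an NVIDIA GPU (Hopper or newer for FP8); the layer sizes and the scaling recipe below are assumptions chosen for illustration.

```python
# Minimal sketch: run a Transformer Engine layer under FP8 autocast.
# Layer sizes and the DelayedScaling recipe settings are illustrative.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

layer = te.Linear(1024, 1024, bias=True).cuda()
x = torch.randn(32, 1024, device="cuda")

fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# Inside fp8_autocast, supported TE modules run their GEMMs in FP8 while
# keeping weights in higher precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(x)

out.sum().backward()
```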