MaxText: A simple, performant, and scalable JAX LLM!
torchtune: A Native-PyTorch Library for LLM Fine-tuning.
Megatron-DeepSpeed: The DeepSpeed version of NVIDIA's Megatron-LM, adding support for features such as MoE model training, Curriculum Learning, and 3D Parallelism (a minimal sketch of the parallelism ideas follows this list).
Megatron-LM: Ongoing research on training transformer models at scale.
Colossal-AI: Making large AI models cheaper, faster, and more accessible.
BMTrain: Efficient Training for Big Models.
Mesh TensorFlow: Model Parallelism Made Easier.
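Several of the projects above revolve around the same two ideas: 3D parallelism, which factors the device count into data-, tensor-, and pipeline-parallel degrees, and tensor (model) parallelism, which shards a weight matrix across devices. The sketch below illustrates both with plain NumPy; the function and variable names are illustrative assumptions, not the API of any library listed here.

```python
# Minimal, framework-agnostic sketch of 3D parallelism arithmetic and
# tensor (model) parallelism. NumPy only; names are illustrative.
import numpy as np

def data_parallel_degree(world_size: int, tp: int, pp: int) -> int:
    """In 3D parallelism the device count factors as dp * tp * pp."""
    assert world_size % (tp * pp) == 0, "tp * pp must divide world_size"
    return world_size // (tp * pp)

# e.g. 64 GPUs with tensor-parallel 4 and pipeline-parallel 2
# leave a data-parallel degree of 8.
assert data_parallel_degree(64, tp=4, pp=2) == 8

# Tensor parallelism: split a weight matrix column-wise across "devices",
# let each compute its partial output, then concatenate (an all-gather).
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))           # activations: batch x d_in
w = rng.standard_normal((8, 16))          # weights: d_in x d_out
tp = 4
w_shards = np.split(w, tp, axis=1)        # one column block per device
y_shards = [x @ w_k for w_k in w_shards]  # each device's partial output
y = np.concatenate(y_shards, axis=1)      # gather along d_out

# The sharded result matches the unsharded matmul exactly.
assert np.allclose(y, x @ w)
```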