Making large AI models cheaper, faster, and more accessible.
Ongoing research on training transformer models at scale.
Efficient Training for Big Models.
A simple, performant, and scalable JAX LLM!
A native PyTorch library for large-model training.
A library for accelerating Transformer model training on NVIDIA GPUs.
A generative AI framework built for researchers and PyTorch developers working in the Large Language Model (LLM), Multimodal Model (MM), Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and Computer Vision (CV) domains.
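A common core shared by the frameworks above is data-parallel training: each worker computes gradients on its own data shard, then the gradients are averaged across workers (an all-reduce) before every worker applies the same update. The sketch below illustrates just that averaging step in pure Python; the function name and data are illustrative and not taken from any of these libraries, which implement this with hardware collectives (e.g. NCCL) rather than Python loops.

```python
# Minimal sketch of the all-reduce (gradient averaging) step at the
# heart of data-parallel large-model training. Pure Python for clarity;
# real frameworks run this as a fused collective on the accelerator.

def all_reduce_mean(per_worker_grads):
    """Element-wise average of gradient vectors across workers."""
    n_workers = len(per_worker_grads)
    n_params = len(per_worker_grads[0])
    return [
        sum(grads[i] for grads in per_worker_grads) / n_workers
        for i in range(n_params)
    ]

# Four workers, each with gradients from its own data shard.
grads = [
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
    [7.0, 8.0],
]
avg = all_reduce_mean(grads)  # -> [4.0, 5.0]
# Every worker applies this identical averaged gradient, so model
# replicas stay in sync without ever exchanging parameters directly.
```

Tensor-, pipeline-, and expert-parallel schemes offered by these frameworks layer further partitioning on top of this primitive, but the synchronize-then-update pattern stays the same.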