llm.c
https://github.com/karpathy/llm.c
LLM training in simple, raw C/CUDA
- Issues
- Major FP32 llm.c improvements/refactoring/etc.
- Larger Tokenizers
- Add batch limit to 124M script to prevent infinite loop
- Add KV cache for inference
- Different batch_size results in different evaluation loss.
- MPI run error
- Add external KV to LLaMA 3
- Suggestion: Test more Activation Functions
- check libnccl instead of nccl to be more reliable
- Re: Fixed modal script for updated cudnn version, and read errors