llm.c
https://github.com/karpathy/llm.c
CUDA
LLM training in simple, raw C/CUDA
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really a pro, receive undocumented methods or classes and supercharge your commit history.
CUDA not yet supported
2 Subscribers
Help out
- Issues
  - bt-invariant inference
  - layernorm_backward.cu: atomicAdd (see the atomicAdd sketch after this list)
  - fixed a typo
  - Fix build errors by adding compute capability flags to the makefile
  - gelu_backward CUDA dev file and float4 dtype for parallel memory reads (see the float4 sketch after this list)
  - Splitting CUDA dev files to use smaller sizes for CPU validation compared to profiling
  - CUDA code that approaches cuBLAS performance
  - float4 with better vectorization for adamw.cu
  - Rewrite the encoder_forward float4 kernel with pack128
  - convert all float to floatX for layernorm_forward
- Docs
  - CUDA not yet supported
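
The layernorm_backward.cu atomicAdd item refers to accumulating parameter gradients across rows. Below is a minimal, hypothetical sketch of that pattern, not the repository's actual kernel: the names, the precomputed `norm` buffer, and the one-thread-per-element layout are assumptions. Every row of the batch contributes to the same per-channel `dbias`/`dweight` entries, so the cross-row reduction is done with `atomicAdd` into global memory.

```cuda
// Hypothetical sketch of atomicAdd accumulation for LayerNorm's weight/bias
// gradients. Not llm.c's kernel: names and the simplified one-thread-per-
// element layout are illustrative assumptions.
#include <cuda_runtime.h>

__global__ void layernorm_backward_params(float* dbias, float* dweight,
                                          const float* dout, const float* norm,
                                          int rows, int C) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per (row, channel) element
    if (idx >= rows * C) return;
    int c = idx % C;                                   // channel index shared by all rows
    float d = dout[idx];
    // Many threads (one per row) write to the same dbias[c] / dweight[c],
    // so the cross-row sum must be atomic.
    atomicAdd(&dbias[c], d);
    atomicAdd(&dweight[c], d * norm[idx]);             // norm[idx] = (x - mean) * rstd, assumed precomputed
}
```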
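
Several items above (the gelu_backward dev file, adamw.cu, and the encoder_forward pack128 rewrite) revolve around float4-style vectorized memory access: each thread issues one 128-bit load and store covering four floats instead of four scalar transactions. The sketch below is illustrative only; it assumes the tanh-approximation GELU, 16-byte-aligned buffers, and N divisible by 4, and is not the repository's kernel.

```cuda
// Hypothetical float4 sketch: one 128-bit load/store per thread covers four
// elements, cutting memory transactions roughly 4x. Assumes N is a multiple
// of 4 and 16-byte-aligned pointers; not llm.c's actual gelu_backward kernel.
#include <cuda_runtime.h>
#include <math.h>

#define GELU_SCALE 0.7978845608f  // sqrt(2/pi), used by the tanh approximation

__global__ void gelu_backward_float4(float* dinp, const float* inp,
                                     const float* dout, int N) {
    int i = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
    if (i + 3 >= N) return;
    float4 x4 = *reinterpret_cast<const float4*>(inp + i);   // one 128-bit read
    float4 g4 = *reinterpret_cast<const float4*>(dout + i);  // one 128-bit read
    float xv[4] = {x4.x, x4.y, x4.z, x4.w};
    float gv[4] = {g4.x, g4.y, g4.z, g4.w};
    float ov[4];
    for (int k = 0; k < 4; k++) {
        float x = xv[k];
        float cube = 0.044715f * x * x * x;
        float t = tanhf(GELU_SCALE * (x + cube));
        float sech2 = 1.0f - t * t;
        // derivative of 0.5 * x * (1 + tanh(GELU_SCALE * (x + 0.044715 x^3)))
        float local = 0.5f * (1.0f + t)
                    + 0.5f * x * sech2 * GELU_SCALE * (1.0f + 3.0f * 0.044715f * x * x);
        ov[k] = local * gv[k];
    }
    *reinterpret_cast<float4*>(dinp + i) = make_float4(ov[0], ov[1], ov[2], ov[3]);  // one 128-bit write
}
```

A launch such as `gelu_backward_float4<<<(N/4 + 255)/256, 256>>>(dinp, inp, dout, N)` gives each thread one float4; the pack128 mentioned in the encoder_forward item applies the same 128-bit-per-access idea to other element types.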