llama_index
https://github.com/run-llama/llama_index
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
not yet supported
1 Subscriber
Add a CodeTriage badge to llama_index
Help out
- Issues
- llama-index-llms-ipex-llm: Silent fallback to trust_remote_code=True in tokenizer loading
- llama-index-embeddings-adapter: torch.load() without weights_only=True allows pickle deserialization
- Showcase: LLM-powered Chinese Novel Writing at Zero Cost
- fix: exclude volatile metadata from Node/TextNode hashing and IngestionCache keys to prevent unnecessary re-embeds
- docs: fix query engine link typo
- [Bug]: Node.hash uses MetadataMode.ALL, causing unnecessary re-embeds when volatile file-stat metadata changes
- fix(azureaisearch): store falsy metadata values instead of silently dropping them
- fix(core): stop `LLM*Event.model_dump()` from mutating `response.raw`
- fix(ingestion): merge worker cache entries back into parent IngestionPipeline on multi-worker run
- fix(imdb-reader): assign re.sub and str.replace results back to variable
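Two of the issues above concern volatile metadata (file-stat fields such as modification dates) leaking into `Node` hashes and forcing unnecessary re-embeds. A minimal stdlib-only sketch of the idea behind that class of fix, excluding a caller-supplied set of volatile keys before hashing (the function and key names here are illustrative assumptions, not the actual llama_index API):

```python
import hashlib
import json

# Assumed volatile file-stat keys; the real set is project-specific.
VOLATILE_KEYS = {"creation_date", "last_modified_date", "last_accessed_date"}

def stable_node_hash(text: str, metadata: dict, volatile: set = frozenset(VOLATILE_KEYS)) -> str:
    # Drop volatile keys so file-stat churn does not change the hash.
    stable_meta = {k: v for k, v in metadata.items() if k not in volatile}
    # Serialize deterministically (sorted keys) before hashing.
    payload = json.dumps({"text": text, "metadata": stable_meta}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

h1 = stable_node_hash("hello", {"path": "a.txt", "last_modified_date": "2024-01-01"})
h2 = stable_node_hash("hello", {"path": "a.txt", "last_modified_date": "2025-06-30"})
assert h1 == h2  # a volatile-only change leaves the hash (and cache key) stable
```

Because the ingestion cache keys on this hash, a stable hash means unchanged documents are not re-embedded when only their file timestamps move.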
- Docs
- not yet supported
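The imdb-reader fix in the list above targets a classic Python pitfall: `str.replace` and `re.sub` return new strings (strings are immutable), so discarding the result leaves the original untouched. A small sketch of the bug class and its fix (function names are hypothetical, not the reader's actual code):

```python
import re

def clean_title_buggy(title: str) -> str:
    # Bug: the results are discarded, so these calls are no-ops.
    title.replace("\n", " ")
    re.sub(r"\s+", " ", title)
    return title

def clean_title_fixed(title: str) -> str:
    # Fix: assign each result back to the variable.
    title = title.replace("\n", " ")
    title = re.sub(r"\s+", " ", title)
    return title.strip()

raw = "The  Shawshank\nRedemption "
assert clean_title_buggy(raw) == raw  # unchanged: results were thrown away
assert clean_title_fixed(raw) == "The Shawshank Redemption"
```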