llama_index
https://github.com/run-llama/llama_index
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Docs triage: not yet supported
1 Subscriber
Add a CodeTriage badge to llama_index
Help out
- Issues
- [Documentation][Question]: Provide documentation on passing an explicit parameter value to the function tools
- [Question]: How to view the vector store index for a single node.
- [Question]: How to pass memory to the router query engine for follow-up questions, when the router query engine contains a retriever query engine and a sub-question query engine
- [Question]: Token count not working after streaming (a hedged usage sketch follows this list)
- [Feature Request]: LlamaParse: Add credits_used field to JobResult.job_metadata in Python SDK
- Add PraisonAI tools integration
- [Bug]: `llama_index` retries `openai.AuthenticationError`
- [Bug]: OpensearchVectorClient does not close the underlying OpenSearch client on destruction, nor does it expose a close() method.
- [Bug]: Cannot use TextToCypherRetriever in Property Graph
- [Feature Request]: Add an optional id parameter to the ChatMessage object.
- Docs: not yet supported
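
For context on the token-counting question above, here is a minimal sketch, assuming a default OpenAI setup, of how a `TokenCountingHandler` is typically attached in llama_index and why counts can appear to be zero with streaming: the completion is produced lazily, so the counter only updates once the response generator has been consumed. The document text, model name, and query are placeholder assumptions, and nothing here is offered as the confirmed cause of the reported issue.

```python
import tiktoken
from llama_index.core import Settings, VectorStoreIndex, Document
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

# Attach a token counter globally via Settings (placeholder tokenizer choice).
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
Settings.callback_manager = CallbackManager([token_counter])

# Placeholder document and query, just to exercise the pipeline.
# Assumes OPENAI_API_KEY is set for the default embedding model and LLM.
index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex is a data framework for LLM applications.")]
)
query_engine = index.as_query_engine(streaming=True)

response = query_engine.query("What is LlamaIndex?")

# With streaming=True the LLM response is generated lazily, so the counter
# may still read 0 at this point; it is updated as the stream completes.
for chunk in response.response_gen:
    print(chunk, end="")

# Read the count only after the generator has been fully drained.
print("\nLLM tokens:", token_counter.total_llm_token_count)
```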