llama_index
https://github.com/run-llama/llama_index
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
1 Subscriber
Add a CodeTriage badge to llama_index
Help out
- Issues
- [Feature Request]: Opensearch efficient filtering
- [Bug]: JSONalyze Query Engine - DEFAULT_TABLE_NAME
- [Bug]: Unable to use ChromaDB for vector memory
- [Feature Request]: VertexAI AChat_Complete to use with new ReActAgent Workflow
- [Feature Request]: Leave embedding creation to vector stores
- [Bug]: No Input/Output Token count for Gemini 2.5 models
- [Feature Request]: Memory should accept AsyncDBChatStore instead of SQLAlchemyChatStore
- [Bug]: SharePointReader ignores sharepoint_folder_id when sharepoint_folder_path is None, crawls drive root instead
- [Bug]: TypeError when parsing MCP tool schemas with `additionalProperties: false`
- Suggestion: Consider optional HMP protocol support for LlamaIndex
- Docs: not yet supported