llama_index
https://github.com/run-llama/llama_index
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
- Issues
- [Feature Request]: (Truly) Multimodal Embeddings
- [Feature Request]: Availability of Opus-4-1 in Bedrock Converse
- [Feature Request]: Token-based CodeSplitter instead of character based
- [Feature Request]: add (detailed) usage info to raw when using StructuredLLM
- [Bug]: Handoff Issue: System Replies with Function Agent Message Instead of Response
- [Bug]: WorkflowRuntimeError: Got empty message when running ReActAgent
- [Question]: Inconsistent thinking streaming pattern between Ollama and Anthropic integrations
- [Feature Request]: Support multiple QueryBundles in RetrieverQueryEngine
- [Feature Request]: return ThinkingBlock or similar when using llm.response or other API calls to llm models
- [Question]: why is the second response empty?
- Docs: not yet supported