llama_index
https://github.com/run-llama/llama_index
- Issues
- [Feature Request]: Token-based CodeSplitter instead of character based
- [Feature Request]: add (detailed) usage info to raw when using StructuredLLM
- [Bug]: Handoff Issue: System Replies with Function Agent Message Instead of Response
- [Question]: Inconsistent thinking streaming pattern between Ollama and Anthropic integrations
- [Feature Request]: Support multiple QueryBundles in RetrieverQueryEngine
- [Feature Request]: return ThinkingBlock or similar when using llm.response or other API calls to llm models
- [Bug]: QdrantVectorStore crashes with latest qdrant-client – search_batch has been removed
- [Bug]: RetryGuidelineQueryEngine/GuidelineEvaluator causes ~400k token prompts when used with NLSQLTableQueryEngine(synthesize_response=False)
- [Bug]: Pymilvus 2.6.4 breaks `AsyncMilvusClient` in `MilvusVectorStore`
- [Feature Request]: Deterministic tool I/O pre/post-processing (middleware/hooks) for agents (MCP motivating case)
- Docs: not yet supported