sentence-transformers
https://github.com/ukplab/sentence-transformers
Python
Sentence Embeddings with BERT & XLNet
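To illustrate the project's core use case — sentence similarity via embeddings — here is a minimal, self-contained sketch. It uses small hand-written vectors rather than real model output (in practice the vectors would come from a model published by this project), so the only assumption is the standard cosine-similarity comparison between embedding vectors:

```python
# Sketch: comparing sentence embeddings with cosine similarity.
# The vectors below are toy values standing in for real model output;
# they are NOT produced by sentence-transformers.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

emb_cat    = [0.9, 0.1, 0.3]    # toy embedding for "a cat sits on the mat"
emb_kitten = [0.8, 0.2, 0.35]   # toy embedding for "a kitten is on the rug"
emb_stock  = [0.1, 0.9, 0.05]   # toy embedding for "the stock market fell"

print(cosine_similarity(emb_cat, emb_kitten))  # high: similar meaning
print(cosine_similarity(emb_cat, emb_stock))   # lower: different meaning
```

Sentences with similar meaning map to nearby vectors, so their cosine similarity is high; unrelated sentences score lower. This is the property the pretrained models in this repository are trained to provide.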
Issues
- Confusion about your model published on https://huggingface.co/nreimers/MiniLM-L6-H384-uncased
- Loading a `.save()`'d model with `revision` and `trust_remote_code` re-downloads code at runtime?
- Fall back to CPU device in case there are no PyTorch parameters
- Enable Sentence Transformer Inference with Intel Gaudi2 GPU Supported ( 'hpu' ) - Follow up for #2557
- Models without PyTorch parameters don't work since v2.3.0
- Text2Topic : a new loss function ?
- Why memory increases during training
- State-of-the-art pretrained model for sentence similarity/clustering?
- Allow extraction of revision id from model
- RuntimeError: Unable to find data type for weight_name='/encoder/layer.0/attention/output/dense/MatMul_output_0'. shape_inference failed to return a type probably this node is from a different domain or using an input produced by such an operator. This may happen if you quantize a model already quantized. You may use extra_options `DefaultTensorType` to indicate the default weight type, usually `onnx.TensorProto.FLOAT`.