sentence-transformers
https://github.com/ukplab/sentence-transformers
Python
Sentence Embeddings with BERT & XLNet
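For context, a minimal sketch of what the library does, computing dense sentence embeddings and comparing them. The model name "all-MiniLM-L6-v2" is an assumption (any pretrained model from sbert.net works); the API shown (SentenceTransformer.encode, util.cos_sim) is the library's standard usage:

```python
# Minimal sketch: embed two sentences and compare them.
# The model name below is a placeholder; swap in any pretrained model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "Sentence embeddings map text to dense vectors.",
    "Dense vectors make semantic search and clustering easy.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```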
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported
14 Subscribers
Add a CodeTriage badge to sentence-transformers
Help out
- Issues
- In the cross-encoder training script, do we train it from scratch or just fine-tune it?
- Ask for help: ImportError: Module "sentence_transformers.models" does not define a "BERT" attribute/class
- How to fine-tune a bi-encoder (see the sketch after this list)
- When training a cross-encoder, the model easily falls into a local optimum
- Make_multilingual msmarco
- Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 1 column 317718
- What is the difference between training (https://www.sbert.net/docs/training/overview.html#training-data) and unsupervised learning?
- Is there any way to print the loss during SimCSE training (unsupervised)?
- The results of distilling multilingual models are poor
- Train a bi-encoder with MS MARCO
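Several of the issues above concern fine-tuning a bi-encoder. A minimal sketch using the library's v2-style training API (InputExample, DataLoader, model.fit); the base model name and the toy training pairs are placeholders, not taken from any of the linked issues:

```python
# Minimal bi-encoder fine-tuning sketch (sentence-transformers v2-style API).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base model; any Hugging Face encoder can be wrapped this way.
model = SentenceTransformer("distilbert-base-uncased")

# Positive (query, passage) pairs; MultipleNegativesRankingLoss treats the
# other in-batch passages as negatives, the usual setup for MS MARCO-style data.
train_examples = [
    InputExample(texts=["what is a bi-encoder",
                        "A bi-encoder embeds query and passage separately."]),
    InputExample(texts=["capital of france",
                        "Paris is the capital of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
```

For a cross-encoder, the analogous pattern uses the CrossEncoder class with labeled pairs instead of the bi-encoder's in-batch negatives.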
- Docs: Python not yet supported