# Sentence Embedding with Transformers
*author: [Jael Gu](https://github.com/jaelgu)*
<br />
## Description
A sentence embedding operator generates one embedding vector, as a numpy.ndarray, for each input text.
The embedding represents the semantic information of the whole input text as one vector.
This operator is implemented with pre-trained models from [Huggingface Transformers](https://huggingface.co/docs/transformers).
<br />
## Code Example
Use the pre-trained model 'sentence-transformers/paraphrase-albert-small-v2'
to generate an embedding for the sentence "Hello, world.".
*Write a pipeline with explicit input/output name specifications:*
```python
from towhee.dc2 import pipe, ops, DataCollection

p = (
    pipe.input('text')
        .map('text', 'vec',
             ops.sentence_embedding.transformers(
                 model_name='sentence-transformers/paraphrase-albert-small-v2'))
        .output('text', 'vec')
)

DataCollection(p('Hello, world.')).show()
```
<img src="./result.png" width="800px"/>
<br />
## Factory Constructor
Create the operator via the following factory method:
***sentence_embedding.transformers(model_name=None, checkpoint_path=None, tokenizer=None)***
**Parameters:**
***model_name***: *str*
The model name in string, defaults to None.
If None, the operator will be initialized without a specified model.
Supported model names: NLP transformer models listed on [Huggingface Models](https://huggingface.co/models).
Please note that only models listed in `supported_model_names` are tested.
You can refer to [Towhee Pipeline]() for model performance.
***checkpoint_path***: *str*
The path to a local checkpoint, defaults to None.
If None, the operator will download and load the pre-trained model specified by `model_name` from Huggingface Transformers.
***tokenizer***: *object*
The method used to tokenize input text, defaults to None.
If None, the operator will use the default tokenizer for `model_name` from Huggingface Transformers.
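For example, you can pass your own tokenizer object when constructing the operator (a minimal sketch; building the tokenizer with Huggingface `AutoTokenizer` is an assumption for illustration, not part of this README):
```python
from towhee import ops
from transformers import AutoTokenizer

# Assumption for illustration: a Huggingface tokenizer object is passed
# via the `tokenizer` argument in place of the default one.
my_tokenizer = AutoTokenizer.from_pretrained(
    'sentence-transformers/paraphrase-albert-small-v2')

op = ops.sentence_embedding.transformers(
    model_name='sentence-transformers/paraphrase-albert-small-v2',
    tokenizer=my_tokenizer,
).get_op()
```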
<br />
## Interface
The operator takes a piece of text in string as input.
It loads the tokenizer and pre-trained model by model name,
and then returns a text embedding in numpy.ndarray.
***\_\_call\_\_(data)***
**Parameters:**
***data***: *Union[str, list]*
The text in string or a list of texts.
**Returns**:
*numpy.ndarray or list*
The text embedding (or token embeddings) extracted by the model.
If `data` is a string, the operator returns an embedding in numpy.ndarray with shape (dim,).
If `data` is a list, the operator returns a list of embeddings with the same length as the input list.
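A minimal sketch of the call interface (the embedding dimension depends on the chosen model):
```python
from towhee import ops

op = ops.sentence_embedding.transformers(
    model_name='sentence-transformers/paraphrase-albert-small-v2').get_op()

# A single string returns one embedding of shape (dim,).
vec = op('Hello, world.')
print(vec.shape)

# A list of strings returns a list with one embedding per input text.
vecs = op(['Hello, world.', 'Towhee embeds sentences.'])
print(len(vecs))
```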
<br />
***save_model(format='pytorch', path='default')***
Save the model locally in the specified format.
**Parameters:**
***format***: *str*
The format to export the model as, such as 'pytorch', 'torchscript', or 'onnx';
defaults to 'pytorch'.
***path***: *str*
The path where the exported model is saved.
By default, the model is saved to the `saved` directory under the operator cache.
```python
from towhee import ops
op = ops.sentence_embedding.transformers(model_name='sentence-transformers/paraphrase-albert-small-v2').get_op()
op.save_model('onnx', 'test.onnx')
```
This returns the path of the saved model, e.g. `PosixPath('/Home/.towhee/operators/sentence-embedding/transformers/main/test.onnx')`.
<br />
***supported_model_names(format=None)***
Get a list of all supported model names, or the model names supported for a specified model format.
**Parameters:**
***format***: *str*
The model format, such as 'pytorch', 'torchscript', or 'onnx'; defaults to None, which returns model names for all formats.
```python
from towhee import ops
op = ops.sentence_embedding.transformers().get_op()
full_list = op.supported_model_names()
onnx_list = op.supported_model_names(format='onnx')
```
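As a quick follow-up, you can use the retrieved lists to check whether a model supports a given export format before calling `save_model` (a sketch continuing the example above):
```python
# Check membership before attempting an ONNX export.
model = 'sentence-transformers/paraphrase-albert-small-v2'
print(model in onnx_list)
```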
## Fine-tune
### Requirement
If you want to train this operator, you need to install the following dependencies in addition to those in requirements.txt.
```bash
python -m pip install datasets evaluate scikit-learn
```
### Get started
Simply put, you only need to construct an operator instance and pass in some configurations to train the specified task.
```python
import towhee

bert_op = towhee.ops.sentence_embedding.transformers(model_name='bert-base-uncased').get_op()

data_args = {
    'dataset_name': 'wikitext',
    'dataset_config_name': 'wikitext-2-raw-v1',
}
training_args = {
    'num_train_epochs': 3,  # increase the number of epochs for a better metric
    'per_device_train_batch_size': 8,
    'per_device_eval_batch_size': 8,
    'do_train': True,
    'do_eval': True,
    'output_dir': './tmp/test-mlm',
    'overwrite_output_dir': True,
}
bert_op.train(task='mlm', data_args=data_args, training_args=training_args)
```
For more information, refer to the [examples](https://github.com/towhee-io/examples/tree/main/fine_tune/6_train_language_modeling_tasks).
### Dive deep and customize your training
You can modify the [training script](https://towhee.io/text-embedding/transformers/src/branch/main/train_clm_with_hf_trainer.py) in your own way,
or refer to the original [Huggingface Transformers training examples](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling).