
Sentence Embedding with Transformers

author: Jael Gu


Description

A sentence embedding operator generates one embedding vector, as a numpy.ndarray, for each input text. The embedding represents the semantic information of the whole input text as one vector. This operator is implemented with pre-trained models from Hugging Face Transformers.


Code Example

Use the pre-trained model 'sentence-transformers/paraphrase-albert-small-v2' to generate an embedding for the sentence "Hello, world.".

Write a pipeline with explicit input and output names:

from towhee import pipe, ops, DataCollection

p = (
    pipe.input('text')
        .map('text', 'vec', 
             ops.sentence_embedding.transformers(model_name='sentence-transformers/paraphrase-albert-small-v2'))
        .output('text', 'vec')
)

DataCollection(p('Hello, world.')).show()


Factory Constructor

Create the operator via the following factory method:

sentence_embedding.transformers(model_name=None, checkpoint_path=None, tokenizer=None, return_usage=False)

Parameters:

model_name: str

The model name as a string, defaults to None. If None, the operator is initialized without a specified model.

Supported model names: NLP transformer models listed in Hugging Face Models. Please note that only models listed in supported_model_names have been tested by us. You can refer to Towhee Pipeline for model performance.

checkpoint_path: str

The path to a local checkpoint, defaults to None.

  • If None, the operator will download and load the pretrained model specified by model_name from Hugging Face Transformers.
  • The checkpoint path can be a directory containing model weights saved with save_pretrained() by Hugging Face Transformers.
  • Or you can pass a path to a PyTorch state_dict save file (see the sketch below).
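
For example, a minimal sketch of loading weights from a local checkpoint; the directory path './my_checkpoint' is hypothetical:

from towhee import ops

# Hypothetical local path; replace with your own directory saved via
# save_pretrained(), or with a path to a PyTorch state_dict file.
op = ops.sentence_embedding.transformers(
    model_name='sentence-transformers/paraphrase-albert-small-v2',
    checkpoint_path='./my_checkpoint',
).get_op()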

tokenizer: object

The method to tokenize input text, defaults to None. If None, the operator will use the default tokenizer for model_name from Hugging Face Transformers.
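
A minimal sketch of passing a tokenizer explicitly, assuming the standard AutoTokenizer from Hugging Face Transformers:

from transformers import AutoTokenizer
from towhee import ops

# Load a tokenizer yourself instead of relying on the operator's default.
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
op = ops.sentence_embedding.transformers(
    model_name='sentence-transformers/paraphrase-albert-small-v2',
    tokenizer=tokenizer,
).get_op()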

return_usage: bool

The flag to return token usage from the call method, defaults to False. If True, the call method returns a dictionary containing the embedding under data, along with token usage.
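
A minimal sketch, assuming the returned dictionary exposes the embedding under the data key as described above (the token-usage key name is not shown here and may differ):

from towhee import ops

op = ops.sentence_embedding.transformers(
    model_name='sentence-transformers/paraphrase-albert-small-v2',
    return_usage=True,
).get_op()

# With return_usage=True, the call returns a dict rather than a bare ndarray.
result = op('Hello, world.')
print(result['data'].shape)  # embedding under 'data', per the description above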


Interface

The operator takes a piece of text as a string input. It loads the tokenizer and pre-trained model using the model name, and then returns a text embedding as a numpy.ndarray.

__call__(data)

Parameters:

data: Union[str, list]

The text as a string, or a list of texts.

Returns:

numpy.ndarray or list

The text embedding (or token embeddings) extracted by the model. If data is a string, the operator returns an embedding as a numpy.ndarray with shape (dim,). If data is a list, the operator returns a list of embeddings with the same length as the input list.
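
A minimal sketch of both call styles, assuming the operator instance is obtained with get_op() as in the later examples:

from towhee import ops

op = ops.sentence_embedding.transformers(
    model_name='sentence-transformers/paraphrase-albert-small-v2').get_op()

# Single string -> one numpy.ndarray with shape (dim,)
vec = op('Hello, world.')
print(vec.shape)

# List of strings -> a list of ndarrays, one per input text
vecs = op(['Hello, world.', 'How are you?'])
print(len(vecs))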


save_model(format='pytorch', path='default')

Save the model locally in the specified format.

Parameters:

format: str

The format in which to export the model, such as 'pytorch', 'torchscript' or 'onnx'; defaults to 'pytorch'.

path: str

The path where the exported model is saved. By default, the model is saved to the saved directory under the operator cache.

from towhee import ops

op = ops.sentence_embedding.transformers(model_name='sentence-transformers/paraphrase-albert-small-v2').get_op()
op.save_model('onnx', 'test.onnx')

PosixPath('/Home/.towhee/operators/sentence-embedding/transformers/main/test.onnx')


supported_model_names(format=None)

Get a list of all supported model names or supported model names for specified model format.

Parameters:

format: str

The model format, such as 'pytorch', 'torchscript' or 'onnx'. If None, all supported model names are returned.

from towhee import ops


op = ops.sentence_embedding.transformers().get_op()
full_list = op.supported_model_names()
onnx_list = op.supported_model_names(format='onnx')

Fine-tune

Requirement

If you want to train this operator, you need to install the following dependencies in addition to those in requirements.txt.

! python -m pip install datasets evaluate scikit-learn

Get started

Simply put, you only need to construct an operator instance and pass in some configurations to train on the specified task.

import towhee

bert_op = towhee.ops.sentence_embedding.transformers(model_name='bert-base-uncased').get_op()
data_args = {
    'dataset_name': 'wikitext',
    'dataset_config_name': 'wikitext-2-raw-v1',
}
training_args = {
    'num_train_epochs': 3,  # increase the number of epochs for better metrics
    'per_device_train_batch_size': 8,
    'per_device_eval_batch_size': 8,
    'do_train': True,
    'do_eval': True,
    'output_dir': './tmp/test-mlm',
    'overwrite_output_dir': True
}
bert_op.train(task='mlm', data_args=data_args, training_args=training_args)

For more information, refer to the examples.

Dive deep and customize your training

You can customize the training script in your own way, or refer to the original Hugging Face Transformers training examples.

More Resources

- [All-Mpnet-Base-V2: Enhancing Sentence Embedding with AI - Zilliz blog](https://zilliz.com/learn/all-mpnet-base-v2-enhancing-sentence-embedding-with-ai): Delve into one of the deep learning models that has played a significant role in the development of sentence embedding: MPNet.