# Text Embedding with Transformers
*author: Jael Gu*
<br />
## Description
A text embedding operator takes a sentence, paragraph, or document as a string input
and outputs an embedding vector as a numpy.ndarray that captures the input's core semantic elements.
This operator is implemented with pretrained models from [Huggingface Transformers](https://huggingface.co/docs/transformers).
<br />
## Code Example
Use the pretrained model 'distilbert-base-cased'
to generate a text embedding for the sentence "Hello, world.".
*Write the pipeline*:
```python
import towhee

# Compute an embedding for each input sentence in the DataCollection
towhee.dc(["Hello, world."]) \
      .text_embedding.transformers(model_name="distilbert-base-cased")
```
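To inspect the computed embeddings, the DataCollection can be materialized into a plain Python list. This is a minimal sketch assuming the `to_list` method of towhee's DataCollection API; each element is expected to be a `numpy.ndarray`.
```python
import towhee

# Build the pipeline and collect results (assumes DataCollection.to_list is available)
results = (
    towhee.dc(["Hello, world."])
          .text_embedding.transformers(model_name="distilbert-base-cased")
          .to_list()
)

# Each element should be a numpy.ndarray embedding of the corresponding sentence
for emb in results:
    print(type(emb), getattr(emb, "shape", None))
```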
<br />
## Factory Constructor
Create the operator via the following factory method:
***text_embedding.transformers(model_name="bert-base-uncased")***
**Parameters:**
***model_name***: *str*
The model name as a string.
You can get the list of supported model names by calling `get_model_list` from [auto_transformers.py](https://towhee.io/text-embedding/transformers/src/branch/main/auto_transformers.py).
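As a hedged sketch of listing the supported models: `get_model_list` is defined in the operator's `auto_transformers.py`, so the import below assumes that file is available on the Python path (for example, after cloning the operator repository).
```python
# Hypothetical import: assumes auto_transformers.py from the operator repository
# (https://towhee.io/text-embedding/transformers) is on the Python path
from auto_transformers import get_model_list

# Print every model name supported by the operator
for model_name in get_model_list():
    print(model_name)
```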
<br />
## Interface
The operator takes a text string as input.
It loads the tokenizer and pretrained model by model name,
and then returns the text embedding as a numpy.ndarray.
**Parameters:**
***text***: *str*
The input text as a string.
**Returns**:
*numpy.ndarray*
The text embedding extracted by the model.
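The sketch below illustrates this interface by resolving the operator through `towhee.ops` and calling it on a single string; whether the resolved factory object can be invoked directly like this depends on the towhee version, so treat it as an assumption.
```python
import numpy
import towhee

# Resolve the operator from the Towhee hub (assumed towhee.ops factory pattern)
op = towhee.ops.text_embedding.transformers(model_name="distilbert-base-cased")

# Input: a plain string; output: a numpy.ndarray embedding
embedding = op("Hello, world.")
assert isinstance(embedding, numpy.ndarray)
print(embedding.shape)
```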