# Text Embedding with Transformers
*author: Jael Gu*
## Description
A text embedding operator implemented with pretrained models from [Huggingface Transformers](https://huggingface.co/docs/transformers).
```python
from towhee import ops
text_encoder = ops.text_embedding.transformers(model_name="bert-base-cased")
text_embedding = text_encoder("Hello, world.")
```
## Factory Constructor
Create the operator via the following factory method:
***ops.text_embedding.transformers(model_name)***
## Interface
A text embedding operator takes a sentence, paragraph, or document as a string input
and outputs an embedding vector as an ndarray that captures the input's core semantic elements.
**Parameters:**
***text***: *str*
​ The input text.
**Returns**: *numpy.ndarray*
​ The text embedding extracted by the model.
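Embedding vectors produced this way are typically compared with cosine similarity. A minimal sketch in plain NumPy follows; the short 3-dimensional vectors are hypothetical stand-ins for real model outputs (e.g. 768 dimensions for 'bert-base-cased'):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors:
    # dot product divided by the product of their norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for operator outputs.
vec_a = np.array([0.1, 0.3, 0.5])
vec_b = np.array([0.1, 0.3, 0.5])
print(cosine_similarity(vec_a, vec_b))  # ~1.0 for identical vectors
```

Values close to 1.0 indicate semantically similar texts; values near 0.0 indicate unrelated ones.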
## Code Example
Use the pretrained BERT model ('bert-base-cased')
to generate a text embedding for the sentence "Hello, world.".
*Write the pipeline in simplified style*:
```python
import towhee

towhee.dc(['Hello, world.']) \
      .text_embedding.transformers('bert-base-cased') \
      .show()
```
*Write the same pipeline with explicit input/output name specifications:*
```python
import towhee

towhee.dc['text'](['Hello, world.']) \
      .text_embedding.transformers['text', 'vec']('bert-base-cased') \
      .select['vec']() \
      .show()
```