Text Embedding with Transformers
author: Jael Gu
Description
A text embedding operator implemented with pretrained models from Huggingface Transformers.
```python
from towhee import ops

text_encoder = ops.text_embedding.transformers(model_name="bert-base-cased")
text_embedding = text_encoder("Hello, world.")
```
Factory Constructor
Create the operator via the following factory method
```python
ops.text_embedding.transformers(model_name)
```
Interface
A text embedding operator takes a sentence, paragraph, or document as a string input and outputs an embedding vector as a numpy.ndarray that captures the input's core semantic elements.
Parameters:
text: str

The input text as a string.

Returns: numpy.ndarray

The text embedding extracted by the model.
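Embedding vectors like the one returned here are typically compared with cosine similarity for search or deduplication. A minimal, self-contained sketch using stand-in numpy vectors (the vectors below are illustrative only, not real model output; BERT-base embeddings are 768-dimensional):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cosine of the angle between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# stand-in 3-d vectors standing in for real text embeddings
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([1.0, 0.2, 0.9])
print(round(cosine_similarity(v1, v2), 3))  # close to 1.0 for similar vectors
```

Nearby vectors score close to 1.0, unrelated ones close to 0.0, which is what makes the ndarray output directly usable in a vector database.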
Code Example
Use the pretrained BERT model ('bert-base-cased') to generate a text embedding for the sentence "Hello, world.".
Write the pipeline in simplified style:

```python
import towhee

(
    towhee.dc(["Hello, world."])
          .text_embedding.transformers(model_name="bert-base-cased")
          .show()
)
```
Write the same pipeline with explicit input and output names:

```python
import towhee

(
    towhee.dc['text'](["Hello, world."])
          .text_embedding.transformers['text', 'vec'](model_name="bert-base-cased")
          .select('vec')
          .show()
)
```
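Under the hood, operators built on Huggingface Transformers typically pool the model's per-token hidden states into a single sentence vector. A minimal numpy sketch of masked mean pooling (illustrative shapes and values; this is not the operator's actual implementation in auto_transformers.py):

```python
import numpy as np

def mean_pool(hidden: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # hidden: (seq_len, dim) token-level embeddings from the model
    # mask:   (seq_len,) attention mask, 1 for real tokens, 0 for padding
    m = mask[:, None].astype(float)
    return (hidden * m).sum(axis=0) / m.sum()

# toy hidden states for a 3-token sequence with 2-d embeddings
hidden = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [0.0, 0.0]])
mask = np.array([1, 1, 0])  # last position is padding
print(mean_pool(hidden, mask))  # -> [2. 3.]
```

Averaging only over non-padded tokens keeps the sentence vector independent of batch padding length.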
Jael Gu · e3ca09f145 · 3 Commits

| File | Size | Last updated |
|---|---|---|
| .gitattributes | 1.1 KiB | 3 years ago |
| README.md | 1.4 KiB | 3 years ago |
| __init__.py | 718 B | 3 years ago |
| auto_transformers.py | 2.3 KiB | 3 years ago |
| requirements.txt | 42 B | 3 years ago |