
[DOC] Refine Readme

Signed-off-by: LocoRichard <lichen.wang@zilliz.com>
main · LocoRichard, 3 years ago
parent commit 0deb52b1d0

1 changed file: README.md (11 changed lines)

@@ -12,10 +12,7 @@ The Longformer model was presented in Longformer: The Long-Document Transformer
 **Longformer** models were proposed in "[Longformer: The Long-Document Transformer][2]".
-Transformer-based models are unable to process long sequences due to their self-attention
-operation, which scales quadratically with the sequence length. To address this limitation,
-we introduce the Longformer with an attention mechanism that scales linearly with sequence
-length, making it easy to process documents of thousands of tokens or longer[2].
+> Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer[2].
 ### References
@@ -27,7 +24,7 @@ length, making it easy to process documents of thousands of tokens or longer[2].
 ## Code Example
-Use the pretrained model "facebook/dpr-ctx_encoder-single-nq-base"
+Use the pre-trained model "facebook/dpr-ctx_encoder-single-nq-base"
 to generate a text embedding for the sentence "Hello, world.".
 *Write the pipeline*:
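
The pipeline this hunk documents shows up in the next hunk's header context (`towhee.dc(["Hello, world."]) \`). A minimal runnable sketch of what the README's code example plausibly looks like, assuming Towhee's DataCollection API of that era; the trailing `.show()` is an assumption added here only to print the result:

```python
import towhee

# Feed one sentence through the DPR text-embedding operator
# and display the resulting embedding.
towhee.dc(["Hello, world."]) \
      .text_embedding.dpr(model_name="facebook/dpr-ctx_encoder-single-nq-base") \
      .show()
```
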
@@ -43,7 +40,7 @@ towhee.dc(["Hello, world."]) \
 ## Factory Constructor
-Create the operator via the following factory method
+Create the operator via the following factory method:
 ***text_embedding.dpr(model_name="allenai/longformer-base-4096")***
@@ -66,7 +63,7 @@ Supported model names:
 ## Interface
 The operator takes a text in string as input.
-It loads tokenizer and pre-trained model using model name.
+It loads tokenizer and pre-trained model using model name
 and then return text embedding in ndarray.
 **Parameters:**
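
Putting the factory constructor and the interface together: a hedged sketch of calling the operator directly, assuming Towhee's `ops` accessor exposes the factory method named in the diff above; the output shape in the comment is illustrative, not taken from the README:

```python
from towhee import ops

# Create the operator via the factory constructor from the diff above,
# then pass it a single string; per the README it loads the tokenizer and
# pre-trained model by model name and returns the embedding as an ndarray.
op = ops.text_embedding.dpr(model_name="allenai/longformer-base-4096")
embedding = op("Hello, world.")
print(embedding.shape)  # e.g. (768,) -- assumed dimension, not stated in the README
```
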
