Update
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
main
1 changed file with 2 additions and 2 deletions
README.md
@@ -16,7 +16,7 @@ operation, which scales quadratically with the sequence length. To address this
 we introduce the Longformer with an attention mechanism that scales linearly with sequence
 length, making it easy to process documents of thousands of tokens or longer[2].
-## Reference
+### References
 [1]. https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/longformer#transformers.LongformerConfig
@@ -34,7 +34,7 @@ from towhee import dc
 dc.stream(["Hello, world."])
-    .text_embedding.longformer(model_name=c"allenai/longformer-base-4096")
+    .text_embedding.longformer(model_name="allenai/longformer-base-4096")
     .show()
 ```
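The README paragraph touched by this diff claims Longformer's attention scales linearly with sequence length. As a rough illustration of that claim, here is a minimal sliding-window attention sketch in plain Python. This is a hypothetical helper for intuition only, not Longformer's actual implementation (which also adds task-specific global attention): each position attends to a fixed window of neighbors, so the cost is O(n · window · d) rather than O(n² · d).

```python
import math

def sliding_window_attention(q, k, v, window=1):
    """q, k, v: lists of d-dimensional vectors (lists of floats).
    Each position i attends only to positions within +/- window of i,
    so total work grows linearly in the sequence length n."""
    n, d = len(q), len(q[0])
    out = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        # Scaled dot-product scores against the local window only.
        scores = [sum(qc * kc for qc, kc in zip(q[i], k[j])) / math.sqrt(d)
                  for j in range(lo, hi)]
        # Numerically stable softmax over the window.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of the value vectors in the window.
        out.append([sum(w * v[j][c] for w, j in zip(weights, range(lo, hi)))
                    for c in range(d)])
    return out

q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = sliding_window_attention(q, q, q, window=1)
print(len(out), len(out[0]))  # prints: 3 2
```

Each output row is a convex combination of the value vectors inside its window; widening `window` trades cost for a larger receptive field, which is the knob the real model exposes as `attention_window` in `LongformerConfig` (see reference [1]).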