NLP embedding: Longformer Operator
Authors: Kyle He, Jael Gu
Overview
This operator uses Longformer to convert long text to embeddings.
The Longformer model was presented in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan[1].
As the paper's abstract explains: "Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer."[1]
Interface
__init__(self, model_name: str, framework: str = 'pytorch')
Args:
- model_name:
  - the model name for embedding
  - supported types: str, for example 'allenai/longformer-base-4096' or 'allenai/longformer-large-4096'
- framework:
  - the framework of the model
  - supported types: str, default is 'pytorch'
__call__(self, txt: str)
Args:
- txt:
  - the input text content
  - supported types: str
Returns:
The Operator returns a Tuple[('feature_vector', numpy.ndarray)] containing the following fields:
- feature_vector:
  - the embedding of the text
  - data type: numpy.ndarray
  - shape: (dim,)
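For illustration, the sketch below shows how the returned feature_vector field is typically consumed. Here op stands in for an already-constructed instance of this operator; it is a hypothetical handle, since how the operator is instantiated or loaded depends on your Towhee setup.

```python
# Illustrative only: `op` stands in for a loaded towhee/nlp-longformer operator
# instance, e.g. constructed with model_name='allenai/longformer-base-4096'.
import numpy as np

result = op('A very long document that would overflow a standard 512-token input ...')

vec = result.feature_vector                     # numpy.ndarray with shape (dim,)
assert isinstance(vec, np.ndarray) and vec.ndim == 1

# The embedding can be L2-normalised and used for cosine-similarity search.
vec = vec / np.linalg.norm(vec)
```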
Requirements
The required Python packages are listed in requirements.txt and can be installed with pip install -r requirements.txt.
How it works
The towhee/nlp-longformer Operator implements the conversion from text to embedding and can be added to a Towhee pipeline.
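For intuition, here is a minimal sketch of that conversion written directly against Hugging Face transformers. It illustrates the general flow (tokenize, encode with Longformer, pool the token embeddings into a single vector) rather than the operator's actual implementation; the mean-pooling step in particular is an assumption.

```python
# A sketch of converting long text to a single embedding with Longformer.
import numpy as np
import torch
from transformers import LongformerModel, LongformerTokenizer

model_name = 'allenai/longformer-base-4096'
tokenizer = LongformerTokenizer.from_pretrained(model_name)
model = LongformerModel.from_pretrained(model_name)
model.eval()

text = 'A very long document ...'

# These checkpoints accept sequences of up to 4096 tokens.
inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=4096)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single (dim,) feature vector.
last_hidden = outputs.last_hidden_state                       # shape: (1, seq_len, dim)
feature_vector = last_hidden.mean(dim=1).squeeze(0).numpy()   # shape: (dim,)

print(feature_vector.shape)  # (768,) for longformer-base-4096
```

For longformer-base-4096 the resulting vector has 768 dimensions; longformer-large-4096 produces 1024.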
Reference
[1] Iz Beltagy, Matthew E. Peters, Arman Cohan. Longformer: The Long-Document Transformer. https://arxiv.org/pdf/2004.05150.pdf
More Resources
- What is a Transformer Model? An Engineer's Guide: A transformer model is a neural network architecture. It's proficient in converting a particular type of input into a distinct output. Its core strength lies in its ability to handle inputs and outputs of different sequence length. It does this through encoding the input into a matrix with predefined dimensions and then combining that with another attention matrix to decode. This transformation unfolds through a sequence of collaborative layers, which deconstruct words into their corresponding numerical representations. At its heart, a transformer model is a bridge between disparate linguistic structures, employing sophisticated neural network configurations to decode and manipulate human language input. An example of a transformer model is GPT-3, which ingests human language and generates text output.
- Sentence Transformers for Long-Form Text - Zilliz blog: Deep diving into modern transformer-based embeddings for long-form text.
- OpenAI text-embedding-3-large | Zilliz: Building GenAI applications with text-embedding-3-large model and Zilliz Cloud / Milvus
- The guide to jina-embeddings-v2-base-en | Jina AI: jina-embeddings-v2-base-en: specialized embedding model for English text and long documents; supports sequences of up to 8192 tokens
- Neural Networks and Embeddings for Language Models - Zilliz blog: Exploring neural network language models, specifically recurrent neural networks, and taking a sneak peek at how embeddings are generated.
- The guide to jina-embeddings-v2-small-en | Jina AI: jina-embeddings-v2-small-en: specialized text embedding model for long English documents; up to 8192 tokens.