# Video-Text Retrieval Embedding with BridgeFormer
*author: Jinling Xu*
<br/>
## Description
This operator extracts features for video or text with [BridgeFormer](https://arxiv.org/pdf/2201.04850.pdf), which generates embeddings for video and text by jointly training a video encoder and a text encoder to maximize the cosine similarity of paired videos and texts.
<br/>
## Code Example
Load a video from path './demo_video.mp4' to generate a video embedding.
Read the text 'kids feeding and playing with the horse' to generate a text embedding.
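Below is a minimal sketch of this example in Towhee's `pipe` style. The operator name `video_text_embedding.bridge_former` and the `video_decode.ffmpeg` sampling arguments are assumptions drawn from Towhee's operator-hub conventions, so adjust them if your installed operator differs.

```python
from towhee import pipe, ops, DataCollection

# Video branch: decode the video, uniformly subsample frames, then embed them.
video_pipe = (
    pipe.input('video_path')
        .map('video_path', 'frames',
             ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample',
                                     args={'num_samples': 4}))
        .map('frames', 'vec',
             ops.video_text_embedding.bridge_former(model_name='frozen_model',
                                                    modality='video'))
        .output('video_path', 'vec')
)
DataCollection(video_pipe('./demo_video.mp4')).show()

# Text branch: embed the query text with the same operator in text modality.
text_pipe = (
    pipe.input('text')
        .map('text', 'vec',
             ops.video_text_embedding.bridge_former(model_name='frozen_model',
                                                    modality='text'))
        .output('text', 'vec')
)
DataCollection(text_pipe('kids feeding and playing with the horse')).show()
```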
<br/>
## Factory Constructor
Create the operator via the following factory method:
***video_text_embedding.bridge_former(model_name='frozen_model', modality='video')***
**Parameters:**
***model_name:*** *str*
The model name of Frozen in Time. Supported model names:
- frozen_model
- clip_initialized_model
***modality:*** *str*
Which modality (*video* or *text*) is used to generate the embedding.
***weight_path:*** *str*
Path to the pretrained model weights.
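As a hedged sketch, the factory parameters above map onto operator construction like this; the checkpoint path is a hypothetical placeholder, and omitting `weight_path` is assumed to fall back to the default pretrained weights.

```python
from towhee import ops

# Assumed factory call: model_name selects the backbone, modality selects
# the input type; weight_path below is a hypothetical local path.
text_encoder = ops.video_text_embedding.bridge_former(
    model_name='clip_initialized_model',
    modality='text',
    weight_path='/path/to/bridge_former_weights.pth',  # hypothetical
)
```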
<br/>
## Interface
A video-text embedding operator takes a list of [Towhee VideoFrame](link/to/towhee/image/api/doc) or a string as input and generates an embedding in ndarray.
**Parameters:**
***data:*** *List[towhee.types.VideoFrame]* or *str*
The data to generate an embedding for: either a list of Towhee VideoFrame (uniformly subsampled from a video) or a text string, depending on the specified modality.
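To make the input/output contract concrete, here is a hedged end-to-end check of the text branch; it assumes the operator name used above and that calling a pipeline returns a result whose `get()` yields the output columns as a list.

```python
import numpy as np
from towhee import pipe, ops

p = (
    pipe.input('text')
        .map('text', 'vec',
             ops.video_text_embedding.bridge_former(model_name='frozen_model',
                                                    modality='text'))
        .output('vec')
)

# The operator should return a single embedding vector as a numpy.ndarray.
vec = p('kids feeding and playing with the horse').get()[0]
assert isinstance(vec, np.ndarray)
print(vec.shape)
```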