# Video-Text Retrieval Embedding with BridgeFormer

*author: Jinling Xu*
## Description

This operator extracts features for video or text with [BridgeFormer](https://arxiv.org/pdf/2201.04850.pdf), which generates embeddings for video and text by jointly training a video encoder and a text encoder to maximize the cosine similarity between matching video-text pairs.
## Code Example

Load a video from path './demo_video.mp4' to generate a video embedding.

Read the text 'kids feeding and playing with the horse' to generate a text embedding.

*Write the pipeline in simplified style*:

- Encode video (default):

```python
import towhee

towhee.dc(['./demo_video.mp4']) \
      .video_decode.ffmpeg() \
      .video_text_embedding.bridge_former(model_name='frozen_model', modality='video') \
      .show()
```

- Encode text:

```python
import towhee

towhee.dc(['kids feeding and playing with the horse']) \
      .video_text_embedding.bridge_former(model_name='frozen_model', modality='text') \
      .show()
```

*Write the same pipeline with explicit inputs/outputs name specifications:*

```python
import towhee

towhee.dc['path'](['./demo_video.mp4']) \
      .video_decode.ffmpeg['path', 'frames'](sample_type='uniform_temporal_subsample', args={'num_samples': 4}) \
      .runas_op['frames', 'frames'](func=lambda x: [y for y in x]) \
      .video_text_embedding.bridge_former['frames', 'vec'](model_name='frozen_model', modality='video') \
      .select['path', 'vec']() \
      .show(formatter={'path': 'video_path'})

towhee.dc['text'](["kids feeding and playing with the horse"]) \
      .video_text_embedding.bridge_former['text','vec'](model_name='frozen_model', modality='text') \
      .select['text', 'vec']() \
      .show()
```
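Since the video and text encoders are trained to maximize cosine similarity for matching pairs, retrieval is typically scored by comparing the two embeddings. The snippet below is a minimal sketch of that comparison; `video_vec` and `text_vec` are placeholders standing in for the `vec` outputs of the pipelines above, and the 512-dimensional shape is only an assumption for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for the embeddings produced by the
# pipelines above; replace them with the real 'vec' values. The dimension
# here is illustrative, not the model's actual output size.
video_vec = np.random.rand(512)
text_vec = np.random.rand(512)

print(cosine_similarity(video_vec, text_vec))
```

A higher score indicates a closer video-text match, so ranking candidate videos by this score against a query text gives a simple retrieval result.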
## Factory Constructor

Create the operator via the following factory method:

***bridge_former(model_name, modality, weight_path)***

**Parameters:**

***model_name:*** *str*

The name of the frozen-in-time model. Supported model names:
- frozen_model
- clip_initialized_model

***modality:*** *str*

Which modality (*video* or *text*) is used to generate the embedding.

***weight_path:*** *str*

Path to the pretrained model weights.
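If you prefer to instantiate the operator outside a pipeline, it can also be constructed through Towhee's `ops` namespace, as with other hub operators. The sketch below assumes that pattern; the `weight_path` value is a hypothetical local path and may be omitted to use the default weights.

```python
import towhee

# Construct a text-side operator; the keyword arguments mirror the factory
# signature bridge_former(model_name, modality, weight_path).
text_op = towhee.ops.video_text_embedding.bridge_former(
    model_name='frozen_model',
    modality='text',
)

# Construct a video-side operator, here with a hypothetical weights file.
video_op = towhee.ops.video_text_embedding.bridge_former(
    model_name='clip_initialized_model',
    modality='video',
    weight_path='./weights/bridge_former.pth',  # hypothetical path
)
```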
## Interface

A video-text embedding operator takes a list of [Towhee VideoFrame](link/to/towhee/image/api/doc) or a string as input and generates an embedding in ndarray.

**Parameters:**

***data:*** *List[towhee.types.VideoFrame]* or *str*

The data to generate an embedding for: a list of Towhee VideoFrames (uniformly subsampled from a video) or a text string, depending on the specified modality.

**Returns:** *numpy.ndarray*

The data embedding extracted by the model.
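As a rough sketch of the call signature (assuming, as above, that the operator resolved from `towhee.ops` is directly callable), the text modality takes a plain string and returns a NumPy array:

```python
import numpy as np
import towhee

text_op = towhee.ops.video_text_embedding.bridge_former(
    model_name='frozen_model', modality='text')

# The text modality takes a plain string; the video modality instead takes
# a list of towhee.types.VideoFrame subsampled from a decoded video.
vec = text_op('kids feeding and playing with the horse')

assert isinstance(vec, np.ndarray)
print(vec.shape)
```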