
Video-Text Retrieval Embedding with DRL

author: Chen Zhang


Description

This operator extracts features for video or text with DRL (Disentangled Representation Learning for Text-Video Retrieval); the similarity between a text embedding and a video embedding can then be computed with the Weighted Token-wise Interaction (WTI) module.
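Concretely, WTI computes token-level cosine similarities between the text tokens and the sampled video frames, takes the max over the other modality for each token, and averages the two directions. The NumPy sketch below illustrates the idea; note that the trained WTI module applies learned per-token weights, while this sketch assumes uniform weighting, so it is an approximation rather than the operator's exact scoring code.

import numpy as np

def wti_score(text_vec, video_vec):
    # text_vec: (text_token_num, model_dim), video_vec: (video_token_num, model_dim)
    # L2-normalize each token so dot products become cosine similarities.
    t = text_vec / np.linalg.norm(text_vec, axis=1, keepdims=True)
    v = video_vec / np.linalg.norm(video_vec, axis=1, keepdims=True)
    sim = t @ v.T  # (text_token_num, video_token_num) cosine similarities
    # Max over the other modality's tokens, then a mean; this uniform mean
    # stands in for the learned token weights of the real WTI module.
    t2v = sim.max(axis=1).mean()
    v2t = sim.max(axis=0).mean()
    return float((t2v + v2t) / 2)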


Code Example

Read the text 'kids feeding and playing with the horse' to generate a text embedding.

from towhee import pipe, ops, DataCollection

p = (
    pipe.input('text') \
        .map('text', 'vec', ops.video_text_embedding.drl(base_encoder='clip_vit_b32', modality='text', device='cuda:0')) \
        .output('text', 'vec')
)

DataCollection(p('kids feeding and playing with the horse')).show()

Load a video from path './demo_video.mp4' to generate a video embedding.

from towhee import pipe, ops, DataCollection

p = (
    pipe.input('video_path') \
        .map('video_path', 'frame_gen', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 12})) \
        .map('frame_gen', 'frame_list', lambda x: [y for y in x]) \
        .map('frame_list', 'vec', ops.video_text_embedding.drl(base_encoder='clip_vit_b32', modality='video', device='cuda:0')) \
        .output('video_path', 'frame_list', 'vec')
)

DataCollection(p('./demo_video.mp4')).show()
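DataCollection(...).show() only renders the results for inspection. To pass the embedding to downstream code, read the raw outputs from the pipeline result instead; the .get() accessor below reflects towhee's usual result API, but treat it as an assumption and check it against your towhee version.

res = p('./demo_video.mp4')
video_path, frame_list, video_vec = res.get()  # columns follow the output() order
print(video_vec.shape)  # (video_token_num, model_dim)

The text pipeline's result can be read the same way to obtain a text_vec.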



Note: This model does not support running on CPU; you must specify a CUDA device, e.g. device='cuda:0'.

Factory Constructor

Create the operator via the following factory method:

drl(base_encoder, modality)

Parameters:

base_encoder: str

The name of the base CLIP encoder used in the DRL model. Supported model names:

  • clip_vit_b32

modality: str

Which modality (video or text) is used to generate the embedding.
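If you want to reuse one operator instance outside a pipeline, it may also be possible to construct and call it directly. The snippet below is a sketch assuming towhee's standalone operator invocation; the pipeline form shown above is the documented usage.

from towhee import ops

# Assumed standalone call; verify against your towhee version.
text_op = ops.video_text_embedding.drl(base_encoder='clip_vit_b32', modality='text', device='cuda:0')
text_vec = text_op('kids feeding and playing with the horse')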


Interface

A video-text embedding operator takes a list of towhee VideoFrame or a string as input and generates an embedding as a numpy ndarray.

Parameters:

data: List[towhee.types.VideoFrame] or str

The data to embed, depending on the specified modality: a list of VideoFrame (uniformly subsampled from a video) or a text string.

Returns: numpy.ndarray

The embedding extracted by the model. For text, the shape is (text_token_num, model_dim); for video, it is (video_token_num, model_dim).
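Putting the pieces together: with text_vec and video_vec read from the two pipelines' results (see the snippet after the video example), candidates can be scored with the wti_score sketch from the Description section. Both names come from those sketches and are not part of the operator's interface.

# text_vec: (text_token_num, model_dim); video_vec: (video_token_num, model_dim)
score = wti_score(text_vec, video_vec)
print(f'text-video WTI similarity: {score:.4f}')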
