Image-Text Retrieval Embedding with SLIP

author: David Wang


Description

This operator extracts features for images or text with SLIP, a multi-task learning framework that combines self-supervised learning with CLIP pre-training. It is adapted from facebookresearch/SLIP.


Code Example

Load an image from the path './moon.jpeg' to generate an image embedding.

Read the text 'moon in the night.' to generate a text embedding.

Write the pipeline in the simplified style:

import towhee

towhee.glob('./moon.jpeg') \
      .image_decode() \
      .image_text_embedding.slip(model_name='slip_vit_small', modality='image') \
      .show()

towhee.dc(['moon in the night.']) \
      .image_text_embedding.slip(model_name='slip_vit_small', modality='text') \
      .show()

Write the same pipeline with explicit input/output names:

import towhee

towhee.glob['path']('./moon.jpeg') \
      .image_decode['path', 'img']() \
      .image_text_embedding.slip['img', 'vec'](model_name='slip_vit_small', modality='image') \
      .select['img', 'vec']() \
      .show()

towhee.dc['text'](['moon in the night.']) \
      .image_text_embedding.slip['text', 'vec'](model_name='slip_vit_small', modality='text') \
      .select['text', 'vec']() \
      .show()


Factory Constructor

Create the operator via the following factory method:

slip(model_name, modality)

Parameters:

model_name: str

The model name of SLIP. Supported model names:

  • slip_vit_small
  • slip_vit_base
  • slip_vit_large

modality: str

Which modality (image or text) is used to generate the embedding.
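
For instance, switching to a different backbone only requires changing model_name. The sketch below reuses the simplified text pipeline from the Code Example with another model from the list above (the embedding dimension may differ between backbones):

import towhee

# Same text pipeline as in the Code Example, but with the ViT-Base SLIP backbone.
towhee.dc(['moon in the night.']) \
      .image_text_embedding.slip(model_name='slip_vit_base', modality='text') \
      .show()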


Interface

An image-text embedding operator takes a Towhee image or a string as input and generates an embedding as a numpy.ndarray.

Parameters:

data: towhee.types.Image (a sub-class of numpy.ndarray) or str

The data (image or text, depending on the specified modality) used to generate the embedding.

Returns: numpy.ndarray

The data embedding extracted by the model.
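
A rough sketch of consuming the returned ndarray programmatically (this assumes the DataCollection to_list() helper of this towhee API; the printed shape is illustrative and depends on the chosen model):

import towhee
import numpy as np

# Collect the text embedding into a Python list instead of displaying it,
# so the numpy.ndarray can be inspected or stored downstream.
vecs = towhee.dc(['moon in the night.']) \
             .image_text_embedding.slip(model_name='slip_vit_small', modality='text') \
             .to_list()

vec = vecs[0]
print(type(vec), np.asarray(vec).shape)  # e.g. <class 'numpy.ndarray'>, (512,)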
