# Japanese Image-Text Retrieval Embedding with CLIP
*author: David Wang*
<br />
## Description
This operator extracts features from images or text with [Japanese-CLIP](https://github.com/rinnakk/japanese-clip), developed by [rinna Co., Ltd.](https://rinna.co.jp/). The model generates embeddings for Japanese text and images by jointly training an image encoder and a text encoder to maximize the cosine similarity of matching pairs.
<br />
## Code Example
Load an image from the path './teddy.jpg' to generate an image embedding.
Read the text 'スケートボードに乗っているテディベア。' ('a teddy bear riding a skateboard') to generate a text embedding.
*Write the pipeline in the simplified style:*
```python
import towhee

towhee.glob('./teddy.jpg') \
    .image_decode() \
    .image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='image') \
    .show()

towhee.dc(["スケートボードに乗っているテディベア。"]) \
    .image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='text') \
    .show()
```
<img src="./vec1.png" alt="result1" style="height:20px;"/>
<img src="./vec2.png" alt="result2" style="height:20px;"/>
*Write the same pipeline with explicit input/output name specifications:*
```python
import towhee

towhee.glob['path']('./teddy.jpg') \
    .image_decode['path', 'img']() \
    .image_text_embedding.japanese_clip['img', 'vec'](model_name='japanese-clip-vit-b-16', modality='image') \
    .select['img', 'vec']() \
    .show()

towhee.dc['text'](["スケートボードに乗っているテディベア。"]) \
    .image_text_embedding.japanese_clip['text', 'vec'](model_name='japanese-clip-vit-b-16', modality='text') \
    .select['text', 'vec']() \
    .show()
```
<img src="./tabular1.png" alt="result1" style="height:60px;"/>
<img src="./tabular2.png" alt="result2" style="height:60px;"/>
<br />
## Factory Constructor
Create the operator via the following factory method:

***japanese_clip(model_name, modality)***

**Parameters:**

***model_name:*** *str*

The model name of CLIP. Supported model names:
- japanese-clip-vit-b-16
- japanese-cloob-vit-b-16

***modality:*** *str*

Which modality (*image* or *text*) is used to generate the embedding.
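For one-off calls outside a pipeline, the operator can also be instantiated directly through Towhee's `ops` namespace; a minimal sketch, with the caveat that the exact call style may vary across Towhee versions:
```python
from towhee import ops

# Instantiate the text-modality operator once, then call it like a function.
text_op = ops.image_text_embedding.japanese_clip(
    model_name='japanese-clip-vit-b-16', modality='text')

vec = text_op('スケートボードに乗っているテディベア。')  # returns a numpy.ndarray
print(vec.shape)
```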
<br />
## Interface
An image-text embedding operator takes a [towhee image](link/to/towhee/image/api/doc) or a string as input and generates an embedding as a numpy.ndarray.

**Parameters:**

***data:*** *towhee.types.Image (a subclass of numpy.ndarray)* or *str*

The data (image or text, depending on the specified modality) from which to generate the embedding.

**Returns:** *numpy.ndarray*

The embedding extracted by the model.
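Since both encoders map into the same embedding space, Japanese image-text retrieval reduces to comparing the two vectors; a minimal sketch of cosine-similarity scoring with numpy, where the stand-in vectors below are placeholders for real operator outputs:
```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two 1-D embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in practice these come from the image and text pipelines above.
img_vec = np.random.rand(512).astype(np.float32)
text_vec = np.random.rand(512).astype(np.float32)
print(cosine_similarity(img_vec, text_vec))  # higher score = closer image-text match
```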