# Japanese Image-Text Retrieval Embedding with CLIP

*author: David Wang*
## Description

This operator extracts features for images or text with [Japanese-CLIP](https://github.com/rinnakk/japanese-clip), developed by [rinna Co., Ltd.](https://rinna.co.jp/). It generates embeddings for Japanese text and images using a model trained by jointly optimizing an image encoder and a text encoder to maximize the cosine similarity of matching image-text pairs.
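Because both encoders map into one shared space, relevance between an image and a Japanese sentence reduces to the cosine similarity of their embeddings. Below is a minimal illustrative sketch in plain NumPy; the random vectors are placeholders standing in for real operator outputs, and the 512-dim size is an assumption:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # L2-normalize each vector, then take the dot product.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Placeholder vectors; in practice these come from the operator below.
image_vec = np.random.rand(512).astype(np.float32)
text_vec = np.random.rand(512).astype(np.float32)

print(cosine_similarity(image_vec, text_vec))  # higher = more related
```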
## Code Example

Load an image from path './teddy.jpg' to generate an image embedding.

Read the text 'スケートボードに乗っているテディベア。' to generate a text embedding.

*Write the same pipeline with explicit input/output name specifications:*

```python
from towhee.dc2 import pipe, ops, DataCollection

img_pipe = (
    pipe.input('url')
    .map('url', 'img', ops.image_decode.cv2_rgb())
    .map('img', 'vec', ops.image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='image'))
    .output('img', 'vec')
)

text_pipe = (
    pipe.input('text')
    .map('text', 'vec', ops.image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='text'))
    .output('text', 'vec')
)

DataCollection(img_pipe('./teddy.jpg')).show()
DataCollection(text_pipe('スケートボードに乗っているテディベア。')).show()
```
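For a small retrieval check, you can score several candidate captions against one image using the two pipelines above. This is a hedged sketch: it assumes the pipeline result's `.get()` call returns the output row as a list (`[img, vec]` / `[text, vec]`), and the second caption is a hypothetical distractor:

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Assumption: .get() returns the output columns of the result row.
_, img_vec = img_pipe('./teddy.jpg').get()
img_vec = l2_normalize(img_vec)

# Hypothetical candidate captions; the matching one should score highest.
captions = ['スケートボードに乗っているテディベア。', '草原を走る馬。']
for caption in captions:
    _, text_vec = text_pipe(caption).get()
    score = float(np.dot(img_vec, l2_normalize(text_vec)))
    print(f'{score:.3f}  {caption}')
```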
## Factory Constructor

Create the operator via the following factory method:

***japanese_clip(model_name, modality)***

**Parameters:**

***model_name:*** *str*

The model name of Japanese CLIP. Supported model names:
- japanese-clip-vit-b-16
- japanese-cloob-vit-b-16

***modality:*** *str*

Which modality (*image* or *text*) is used to generate the embedding.
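For instance, constructing operators for both supported models might look like the sketch below; the parameter values come from the supported list above. Note that image and text embeddings are only comparable when both sides use the same `model_name`:

```python
from towhee.dc2 import ops

# CLIP variant, one operator per modality.
clip_img_op = ops.image_text_embedding.japanese_clip(
    model_name='japanese-clip-vit-b-16', modality='image')
clip_text_op = ops.image_text_embedding.japanese_clip(
    model_name='japanese-clip-vit-b-16', modality='text')

# CLOOB variant for text.
cloob_text_op = ops.image_text_embedding.japanese_clip(
    model_name='japanese-cloob-vit-b-16', modality='text')
```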
## Interface

An image-text embedding operator takes a [towhee image](link/to/towhee/image/api/doc) or a string as input and generates an embedding in ndarray.

**Parameters:**

***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)* or *str*

The data (image or text, depending on the specified modality) to generate an embedding for.

**Returns:** *numpy.ndarray*

The data embedding extracted by the model.
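As a quick sanity check of the return type, the embedding behaves like any NumPy array. A sketch reusing `text_pipe` from the code example; the `.get()` row format and the exact shape/dtype are assumptions here, not verified behavior:

```python
import numpy as np

# Assumption: .get() returns [text, vec] for the output row.
_, vec = text_pipe('スケートボードに乗っているテディベア。').get()

assert isinstance(vec, np.ndarray)
print(vec.shape, vec.dtype)  # shape/dtype depend on the chosen model
```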