# Chinese Image-Text Retrieval Embedding with Taiyi

*author: David Wang*
## Description

This operator extracts features for an image or text (in Chinese) with [Taiyi (太乙)](https://arxiv.org/abs/2209.02970), which generates embeddings for text and images by jointly training an image encoder and a text encoder to maximize the cosine similarity between matching pairs. This method is developed by [IDEA-CCNL](https://github.com/IDEA-CCNL/Fengshenbang-LM/).
## Code Example

Load an image from path './dog.jpg' to generate an image embedding. Read the text '一只小狗' (a puppy) to generate a text embedding.

*Write a pipeline with explicit input/output name specifications:*

```python
from towhee import pipe, ops, DataCollection

img_pipe = (
    pipe.input('url')
    .map('url', 'img', ops.image_decode.cv2_rgb())
    .map('img', 'vec', ops.image_text_embedding.taiyi(model_name='taiyi-clip-roberta-102m-chinese', modality='image'))
    .output('img', 'vec')
)

text_pipe = (
    pipe.input('text')
    .map('text', 'vec', ops.image_text_embedding.taiyi(model_name='taiyi-clip-roberta-102m-chinese', modality='text'))
    .output('text', 'vec')
)

DataCollection(img_pipe('./dog.jpg')).show()
DataCollection(text_pipe('一只小狗')).show()
```
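Since Taiyi is trained to maximize the cosine similarity between matching image-text pairs, the two embeddings above can be compared directly. The snippet below is a minimal sketch, assuming the two pipelines defined above and that calling a pipeline returns a result whose `get()` method yields the output columns in order (`img`/`text` first, `vec` second):

```python
import numpy as np

# Run the pipelines defined above (assumption: .get() returns the output columns in order).
img_vec = img_pipe('./dog.jpg').get()[1]   # index 1 -> 'vec'
text_vec = text_pipe('一只小狗').get()[1]   # index 1 -> 'vec'

# Cosine similarity between the L2-normalized embeddings.
img_vec = img_vec / np.linalg.norm(img_vec)
text_vec = text_vec / np.linalg.norm(text_vec)
print(float(np.dot(img_vec, text_vec)))
```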
## Factory Constructor

Create the operator via the following factory method:

***taiyi(model_name, modality)***

**Parameters:**

***model_name:*** *str*

The model name of Taiyi. Supported model names:
- taiyi-clip-roberta-102m-chinese
- taiyi-clip-roberta-large-326m-chinese

***modality:*** *str*

Which modality (*image* or *text*) is used to generate the embedding.

***clip_checkpoint_path:*** *str*

The weight path to load for the CLIP branch.

***text_checkpoint_path:*** *str*

The weight path to load for the text branch.

***device:*** *str*

The device as a string, defaults to None. If None, it enables "cuda" automatically when CUDA is available.
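As a hedged illustration of these parameters, the sketch below constructs an image-side and a text-side operator. The checkpoint paths are hypothetical placeholders, and `device='cpu'` is only one possible value:

```python
from towhee import ops

# Image-modality operator with the default pretrained weights.
img_op = ops.image_text_embedding.taiyi(
    model_name='taiyi-clip-roberta-102m-chinese',
    modality='image',
    device='cpu',  # assumption: force CPU; None would auto-select CUDA when available
)

# Text-modality operator; the commented-out checkpoint paths are hypothetical examples.
text_op = ops.image_text_embedding.taiyi(
    model_name='taiyi-clip-roberta-large-326m-chinese',
    modality='text',
    # clip_checkpoint_path='/path/to/clip_finetuned.pt',   # hypothetical path
    # text_checkpoint_path='/path/to/text_finetuned.pt',   # hypothetical path
)
```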
## Interface

An image-text embedding operator takes a [towhee image](link/to/towhee/image/api/doc) or a string as input and generates an embedding in ndarray.

**Parameters:**

***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)* or *str*

The data (image or text, based on the specified modality) to generate an embedding for.

**Returns:** *numpy.ndarray*

The data embedding extracted by the model.
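A minimal sketch of consuming this interface, assuming the `text_pipe` from the Code Example above and that `get()` returns the output columns in order; the embedding dimensionality depends on the chosen model, so the printed shape is not guaranteed:

```python
import numpy as np

res = text_pipe('一只小狗')         # run the text pipeline from the Code Example
text, vec = res.get()               # assumption: get() returns the output columns in order

assert isinstance(vec, np.ndarray)  # the operator returns a numpy.ndarray embedding
print(vec.dtype, vec.shape)         # dimensionality depends on the selected model
```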