# Image-Text Retrieval Embedding with BLIP

*author: David Wang*


<br />



## Description

This operator extracts features from images or text with [BLIP](https://arxiv.org/abs/2201.12086), which generates embeddings for text and images by jointly training an image encoder and a text encoder to maximize the cosine similarity of matching pairs. This is an adaptation from [salesforce/BLIP](https://github.com/salesforce/BLIP).
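Retrieval with such embeddings typically scores image-text pairs by cosine similarity. As a minimal sketch (plain NumPy with toy vectors standing in for BLIP outputs, not the operator itself):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for an image embedding and a text embedding.
img_vec = np.array([0.2, 0.7, 0.1])
txt_vec = np.array([0.25, 0.65, 0.05])
print(cosine_similarity(img_vec, txt_vec))  # close to 1.0 for a matching pair
```

A higher score means the image and the text are more likely to describe each other.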


<br />


## Code Example

Load an image from path './teddy.jpg' to generate an image embedding. 

Read the text 'A teddybear on a skateboard in Times Square.' to generate a text embedding. 

*Write a pipeline with explicit input/output name specifications:*

```python
from towhee.dc2 import pipe, ops, DataCollection

img_pipe = (
    pipe.input('url')
    .map('url', 'img', ops.image_decode.cv2_rgb())
    .map('img', 'vec', ops.image_text_embedding.blip(model_name='blip_itm_base_coco', modality='image'))
    .output('img', 'vec')
)

text_pipe = (
    pipe.input('text')
    .map('text', 'vec', ops.image_text_embedding.blip(model_name='blip_itm_base_coco', modality='text'))
    .output('text', 'vec')
)

DataCollection(img_pipe('./teddy.jpg')).show()
DataCollection(text_pipe('A teddybear on a skateboard in Times Square.')).show()

```
<img src="https://towhee.io/image-text-embedding/blip/raw/branch/main/tabular1.png" alt="result1" style="height:60px;"/>
<img src="https://towhee.io/image-text-embedding/blip/raw/branch/main/tabular2.png" alt="result2" style="height:60px;"/>


<br />



## Factory Constructor

Create the operator via the following factory method:

***blip(model_name, modality)***

**Parameters:**

​   ***model_name:*** *str*

​   The model name of BLIP. Supported model names: 
- blip_itm_base_coco
- blip_itm_large_coco
- blip_itm_base_flickr
- blip_itm_large_flickr


​   ***modality:*** *str*

​   Which modality (*image* or *text*) is used to generate the embedding. 

<br />



## Interface

An image-text embedding operator takes a [towhee image](link/to/towhee/image/api/doc) or string as input and generates an embedding as a numpy ndarray.

***save_model(format='pytorch', path='default')***

Save the model locally in the specified format.

**Parameters:**

***format***: *str*

​	The format of saved model, defaults to 'pytorch'.

***path***: *str*

​	The path the model is saved to. By default, the model is saved in the operator directory.


```python
from towhee import ops

op = ops.image_text_embedding.blip(model_name='blip_itm_base_coco', modality='image').get_op()
op.save_model('onnx', 'test.onnx')
```
<br />



***__call__(data)***

**Parameters:**

​	***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)* or *str*

​  The data (image or text, depending on the specified modality) to generate an embedding for.






**Returns:** *numpy.ndarray*

​   The data embedding extracted by the model.
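The returned ndarray can be used directly for retrieval, e.g. ranking candidate text embeddings against an image embedding. A minimal sketch with placeholder vectors (not actual BLIP outputs):

```python
import numpy as np

def rank_by_similarity(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return candidate row indices sorted by descending cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return np.argsort(-(c @ q))

# Placeholder embeddings standing in for operator outputs.
img_vec = np.array([1.0, 0.0, 0.0])
txt_vecs = np.array([
    [0.0, 1.0, 0.0],  # unrelated caption
    [0.9, 0.1, 0.0],  # close match
    [0.5, 0.5, 0.0],  # partial match
])
print(rank_by_similarity(img_vec, txt_vecs))  # → [1 2 0], best match first
```

With real embeddings, `query` would be the vector from the image pipeline and `candidates` the stacked vectors from the text pipeline.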

***supported_model_names(format=None)***

Get a list of all supported model names, or only those supported for a specified model format.

**Parameters:**

***format***: *str*

​	The model format such as 'pytorch', 'torchscript'.

```python
from towhee import ops


op = ops.image_text_embedding.blip(model_name='blip_itm_base_coco', modality='image').get_op()
full_list = op.supported_model_names()
onnx_list = op.supported_model_names(format='onnx')
print(f'Onnx-support/Total Models: {len(onnx_list)}/{len(full_list)}')
```

<br />

## Fine-tune
### Requirements
If you want to train this operator, you need to install the following dependencies in addition to those in requirements.txt.
There is also an [example](https://github.com/towhee-io/examples/blob/main/image/text_image_search/2_deep_dive_text_image_search.ipynb) showing how to fine-tune it on a custom dataset.
```python
! python -m pip install datasets 
```
### Get started

```python
import towhee

blip_op = towhee.ops.image_text_embedding.blip(model_name='blip_itm_base_coco', modality='image').get_op()

data_args = {
    'dataset_name': 'ydshieh/coco_dataset_script',
    'dataset_config_name': '2017',
    'max_seq_length': 77,
    'data_dir': path_to_your_coco_dataset,  # replace with the path to your local COCO dataset
    'image_mean': [0.48145466, 0.4578275, 0.40821073],
    'image_std': [0.26862954, 0.26130258, 0.27577711]
}
training_args = {
    'num_train_epochs': 3, # increase the number of epochs for a better metric
    'per_device_train_batch_size': 8,
    'per_device_eval_batch_size': 8,
    'do_train': True,
    'do_eval': True,
    'remove_unused_columns': False,
    'output_dir': './tmp/test-blip',
    'overwrite_output_dir': True,
}
model_args = {
    'freeze_vision_model': False,
    'freeze_text_model': False,
    'cache_dir': './cache'
}

blip_op.train(data_args=data_args, training_args=training_args, model_args=model_args)
```

### Dive deep and customize your training
You can modify the [training script](https://towhee.io/image-text-embedding/blip/src/branch/main/train_blip_with_hf_trainer.py) to customize training in your own way. 
Or you can refer to the original [Hugging Face Transformers training examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text).