# Image Captioning with BLIP

*author: David Wang*
## Description

This operator generates a caption that describes the content of a given image with [BLIP](https://arxiv.org/abs/2201.12086). It is an adaptation of [salesforce/BLIP](https://github.com/salesforce/BLIP).
## Code Example

Load an image from path './animals.jpg' to generate the caption.

*Write the pipeline in simplified style:*

```python
import towhee

towhee.glob('./animals.jpg') \
      .image_decode() \
      .image_captioning.blip(model_name='blip_base') \
      .select() \
      .show()
```

*(result image)*

*Write the same pipeline with explicit input/output name specifications:*

```python
import towhee

towhee.glob['path']('./animals.jpg') \
      .image_decode['path', 'img']() \
      .image_captioning.blip['img', 'text'](model_name='blip_base') \
      .select['img', 'text']() \
      .show()
```

*(result image)*
## Factory Constructor

Create the operator via the following factory method:

***blip(model_name)***

**Parameters:**

***model_name:*** *str*

&emsp;The model name of BLIP. Supported model names:
- blip_base
## Interface

An image captioning operator takes a [towhee image](link/to/towhee/image/api/doc) as input and generates the corresponding caption.

**Parameters:**

***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)* or *str*

&emsp;The image from which to generate the caption.

**Returns:** *str*

&emsp;The caption generated by the model.
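Since `data` may be either a decoded image array (`towhee.types.Image` is a sub-class of `numpy.ndarray`) or a path string, callers typically dispatch on the input type before captioning. A minimal sketch of that dispatch, where `normalize_input` is a hypothetical helper for illustration and not part of Towhee's API:

```python
import numpy as np

def normalize_input(data):
    """Illustrates the two accepted input types: a decoded image
    array or a file path string (hypothetical helper)."""
    if isinstance(data, str):
        # A real pipeline would decode the file at this path into an
        # RGB array; here a dummy array stands in for the decoded image.
        return np.zeros((224, 224, 3), dtype=np.uint8)
    if isinstance(data, np.ndarray):
        # Already-decoded images pass through unchanged.
        return data
    raise TypeError(f"unsupported input type: {type(data).__name__}")

img = normalize_input('./animals.jpg')
print(img.shape)  # (224, 224, 3)
```

In the actual operator, either form is accepted directly; the sketch only shows why both a `numpy.ndarray` sub-class and a `str` are valid parameter types.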