# Image Captioning with ClipCap

*author: David Wang*
## Description

This operator generates a caption describing the content of the given image using [ClipCap](https://arxiv.org/abs/2111.09734). ClipCap uses a CLIP encoding as a prefix to the caption: a simple mapping network projects the CLIP embedding into the language model's space, and the language model is then fine-tuned to generate the image caption. This is an adaptation of [rmokady/CLIP_prefix_caption](https://github.com/rmokady/CLIP_prefix_caption).
## Code Example

Load an image from the path './image.jpg' and generate a caption for it.

*Write a pipeline with explicit input/output name specifications:*

```python
from towhee import pipe, ops, DataCollection

p = (
    pipe.input('url')
        .map('url', 'img', ops.image_decode.cv2_rgb())
        .map('img', 'text', ops.image_captioning.clipcap(model_name='clipcap_coco'))
        .output('img', 'text')
)

DataCollection(p('./image.jpg')).show()
```
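If you need the caption as a plain string rather than a rendered table, the pipeline result can also be read programmatically. A minimal sketch, assuming the object returned by calling the pipeline exposes a `get()` method that yields the output columns in order:

```python
from towhee import pipe, ops

p = (
    pipe.input('url')
        .map('url', 'img', ops.image_decode.cv2_rgb())
        .map('img', 'text', ops.image_captioning.clipcap(model_name='clipcap_coco'))
        .output('text')
)

# Assumption: the pipeline call returns a result queue whose get() yields the output fields.
caption = p('./image.jpg').get()[0]
print(caption)
```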
## Factory Constructor

Create the operator via the following factory method:

***clipcap(model_name)***

**Parameters:**

***model_name:*** *str*

The model name of ClipCap. Supported model names:
- clipcap_coco
- clipcap_conceptual
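The factory-constructed operator can also be used outside of a pipeline. The sketch below is a minimal, hedged example: it assumes the operator object returned by `ops.image_captioning.clipcap(...)` (and the image decode operator) can be invoked directly on its input, as described in the Interface section below, and the image path is only illustrative.

```python
from towhee import ops

# Construct the operator with one of the supported model names.
op = ops.image_captioning.clipcap(model_name='clipcap_coco')

# Decode an image into a towhee image (a numpy.ndarray sub-class).
img = ops.image_decode.cv2_rgb()('./image.jpg')

# Assumption: the operator is directly callable and returns the caption as a str.
caption = op(img)
print(caption)
```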
## Interface

An image captioning operator takes a [towhee image](link/to/towhee/image/api/doc) as input and generates the corresponding caption.

**Parameters:**

***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)*

The image to generate a caption for.

**Returns:** *str*

The caption generated by the model.

# More Resources

- [CLIP Object Detection: Merging AI Vision with Language Understanding - Zilliz blog](https://zilliz.com/learn/CLIP-object-detection-merge-AI-vision-with-language-understanding): CLIP Object Detection combines CLIP's text-image understanding with object detection tasks, allowing CLIP to locate and identify objects in images using text.
- [Multimodal RAG locally with CLIP and Llama3 - Zilliz blog](https://zilliz.com/blog/multimodal-RAG-with-CLIP-Llama3-and-milvus): A tutorial that walks you through how to build a multimodal RAG with CLIP, Llama3, and Milvus.
- [Supercharged Semantic Similarity Search in Production - Zilliz blog](https://zilliz.com/learn/supercharged-semantic-similarity-search-in-production): Building a blazing fast, highly scalable text-to-image search with CLIP embeddings and Milvus, the most advanced open-source vector database.
- [The guide to clip-vit-base-patch32 | OpenAI](https://zilliz.com/ai-models/clip-vit-base-patch32): clip-vit-base-patch32: a CLIP multimodal model variant by OpenAI for image and text embedding.
- [Exploring OpenAI CLIP: The Future of Multi-Modal AI Learning - Zilliz blog](https://zilliz.com/learn/exploring-openai-clip-the-future-of-multimodal-ai-learning): Multimodal AI learning can take input and understand information from various modalities like text, images, and audio together, leading to a deeper understanding of the world. Learn more about OpenAI's CLIP (Contrastive Language-Image Pre-training), a popular multimodal model for text and image data.
- [From Text to Image: Fundamentals of CLIP - Zilliz blog](https://zilliz.com/blog/fundamentals-of-clip): Search algorithms rely on semantic similarity to retrieve the most relevant results. With the CLIP model, the semantics of texts and images can be connected in a high-dimensional vector space. Read this simple introduction to see how CLIP can help you build a powerful text-to-image service.