# Image Captioning with ClipCap

*author: David Wang*

## Description
This operator generates a caption with ClipCap that describes the content of the given image. ClipCap uses a CLIP encoding as a prefix to the caption: a simple mapping network projects the CLIP embedding into the language model's input space, and the language model is then fine-tuned to generate image captions. This is an adaptation of [rmokady/CLIP_prefix_caption](https://github.com/rmokady/CLIP_prefix_caption).
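The mapping network at the heart of ClipCap can be sketched as a small MLP that turns a single CLIP image embedding into a fixed-length prefix of language-model token embeddings. The dimensions, layer sizes, and random weights below are purely illustrative, not the operator's actual configuration:

```python
import numpy as np

# Illustrative dimensions (assumptions, not the operator's exact config):
CLIP_DIM = 512      # size of the CLIP image embedding
LM_DIM = 768        # size of one GPT-2 token embedding
PREFIX_LEN = 10     # number of prefix tokens fed to the language model

rng = np.random.default_rng(0)

# A two-layer MLP: CLIP_DIM -> hidden -> PREFIX_LEN * LM_DIM
W1 = rng.standard_normal((CLIP_DIM, 1024)) * 0.02
W2 = rng.standard_normal((1024, PREFIX_LEN * LM_DIM)) * 0.02

def mapping_network(clip_embedding: np.ndarray) -> np.ndarray:
    """Map a (CLIP_DIM,) CLIP embedding to a (PREFIX_LEN, LM_DIM) prefix."""
    hidden = np.tanh(clip_embedding @ W1)
    prefix = hidden @ W2
    return prefix.reshape(PREFIX_LEN, LM_DIM)

clip_embedding = rng.standard_normal(CLIP_DIM)
prefix = mapping_network(clip_embedding)
print(prefix.shape)  # (10, 768)
```

In the real model this prefix is concatenated in front of the caption's token embeddings, and the language model decodes the caption conditioned on it.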
## Code Example

Load an image from path './image.jpg' to generate the caption.

*Write a pipeline with explicit inputs/outputs name specifications:*
```python
from towhee import pipe, ops, DataCollection

p = (
    pipe.input('url')
        .map('url', 'img', ops.image_decode.cv2_rgb())
        .map('img', 'text', ops.image_captioning.clipcap(model_name='clipcap_coco'))
        .output('img', 'text')
)

DataCollection(p('./image.jpg')).show()
```
## Factory Constructor

Create the operator via the following factory method:

***clipcap(model_name)***

**Parameters:**

**model_name:** *str*

The model name of ClipCap. Supported model names:
- clipcap_coco
- clipcap_conceptual
## Interface

An image captioning operator takes a towhee image as input and generates the corresponding caption.

**Parameters:**

**data:** *towhee.types.Image* (a sub-class of numpy.ndarray)

The image to generate a caption for.

**Returns:** *str*

The caption generated by the model.
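Because `towhee.types.Image` subclasses `numpy.ndarray`, the operator's input is at bottom an H×W×3 pixel array. A minimal illustration of such a subclass (a stand-in, not towhee's actual implementation):

```python
import numpy as np

class Image(np.ndarray):
    """Minimal stand-in for towhee.types.Image: an ndarray plus a color mode."""
    def __new__(cls, array, mode='RGB'):
        obj = np.asarray(array).view(cls)  # view the data as this subclass
        obj.mode = mode                    # extra metadata carried alongside pixels
        return obj

# A 2x2 RGB image behaves like a plain ndarray wherever numpy is expected.
img = Image(np.zeros((2, 2, 3), dtype=np.uint8), mode='RGB')
print(isinstance(img, np.ndarray), img.shape, img.mode)  # True (2, 2, 3) RGB
```

This is why any standard image-decode step (such as `ops.image_decode.cv2_rgb()` in the example above) can feed this operator directly.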