Image Captioning with BLIP
author: David Wang
Description
This operator uses BLIP to generate a caption that describes the content of a given image. It is adapted from salesforce/BLIP.
Code Example
Load an image from the path './animals.jpg' and generate its caption.

Write the pipeline in the simplified style:

```python
import towhee

towhee.glob('./animals.jpg') \
      .image_decode() \
      .image_captioning.blip(model_name='blip_base') \
      .show()
```
Write the same pipeline with explicit input/output name specifications:

```python
import towhee

towhee.glob['path']('./animals.jpg') \
      .image_decode['path', 'img']() \
      .image_captioning.blip['img', 'text'](model_name='blip_base') \
      .select['img', 'text']() \
      .show()
```
Factory Constructor
Create the operator via the following factory method:

```python
blip(model_name)
```
Parameters:
**model_name**: *str*

The model name of BLIP. Supported model names:
- blip_base
Interface
An image captioning operator that takes a towhee image as input and generates the corresponding caption.
Parameters:
**img**: *towhee.types.Image* (a sub-class of *numpy.ndarray*)

The image to generate a caption for.
**Returns**: *str*

The caption generated by the model.