# Image Captioning with ExpansionNet v2

*author: David Wang*

<br />

## Description
This operator generates a caption with [ExpansionNet v2](https://arxiv.org/abs/2208.06551) that describes the content of the given image. ExpansionNet v2 introduces Block Static Expansion, which distributes and processes the input over a heterogeneous and arbitrarily large collection of sequences whose lengths differ from that of the input. This operator is adapted from [jchenghu/ExpansionNet_v2](https://github.com/jchenghu/expansionnet_v2).

<br />

## Code Example

Load an image from the path './image.jpg' and generate a caption.

*Write the pipeline in simplified style:*
```python
import towhee

towhee.glob('./image.jpg') \
    .image_decode() \
    .image_captioning.expansionnet_v2(model_name='expansionnet_rf') \
    .show()
```
<img src="./cap.png" alt="result1" style="height:20px;"/>

*Write the same pipeline with explicit input/output name specifications:*

```python
import towhee

towhee.glob['path']('./image.jpg') \
    .image_decode['path', 'img']() \
    .image_captioning.expansionnet_v2['img', 'text'](model_name='expansionnet_rf') \
    .select['img', 'text']() \
    .show()
```
<img src="./tabular.png" alt="result2" style="height:60px;"/>

<br />

## Factory Constructor

Create the operator via the following factory method:
***expansionnet_v2(model_name)***

**Parameters:**

***model_name:*** *str*

The model name of ExpansionNet v2. Supported model names:
- expansionnet_rf
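
As a usage sketch of this factory method, the operator can also be constructed on its own before being dropped into a pipeline. The standalone `towhee.ops` access path below is an assumption for illustration based on the common Towhee operator interface, not a guarantee of this repository's API.

```python
import towhee

# Construct the captioning operator via the factory method with the
# supported model name (standalone towhee.ops usage is assumed here).
op = towhee.ops.image_captioning.expansionnet_v2(model_name='expansionnet_rf')
```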

<br />

## Interface
An image captioning operator takes a [towhee image](link/to/towhee/image/api/doc) as input and generates the corresponding caption.

**Parameters:**

***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)*

The image for which to generate a caption.

**Returns:** *str*

The caption generated by the model.
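
To make the input and output types concrete, here is a minimal sketch that feeds a decoded image (a `towhee.types.Image`, i.e. a `numpy.ndarray` sub-class) into the operator and receives a caption string back. The direct standalone calls and the placeholder path `./image.jpg` are assumptions for illustration.

```python
import numpy
import towhee

# Decode a local image; the decoder yields a towhee Image (an ndarray sub-class).
img = towhee.ops.image_decode()('./image.jpg')  # placeholder path
assert isinstance(img, numpy.ndarray)

# The captioning operator maps the image to a single caption string.
caption = towhee.ops.image_captioning.expansionnet_v2(model_name='expansionnet_rf')(img)
assert isinstance(caption, str)
print(caption)
```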