## Code Example

Load an image from path './teddy.jpg' to generate an image embedding.

Read the text 'A teddybear on a skateboard in Times Square.' to generate a text embedding.

*Write the pipeline in simplified style*:

```python
import towhee

towhee.glob('./teddy.jpg') \
.image_decode.cv2() \
.towhee.clip(name='ViT-B/32', modality='image') \
.show()

towhee.dc(["A teddybear on a skateboard in Times Square."]) \
.towhee.clip(name='ViT-B/32', modality='text') \
.show()
```

*Write the same pipeline with explicit input and output name specifications*:

```python
import towhee

towhee.glob['path']('./teddy.jpg') \
.image_decode.cv2['path', 'img']() \
.towhee.clip['img', 'vec'](name='ViT-B/32', modality='image') \
.select['img', 'vec']() \
.show()

towhee.dc['text'](["A teddybear on a skateboard in Times Square."]) \
.towhee.clip['text', 'vec'](name='ViT-B/32', modality='text') \
.select['text', 'vec']() \
.show()
```

<img src="https://towhee.io/towhee/clip/raw/branch/main/tabular1.png" alt="result1" style="height:60px;"/>
<img src="https://towhee.io/towhee/clip/raw/branch/main/tabular2.png" alt="result2" style="height:60px;"/>
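
Since both pipelines load the same CLIP model, the image embedding and the text embedding land in a shared vector space and can be compared directly. The snippet below is a minimal sketch rather than part of the operator itself; it assumes `img_vec` and `text_vec` are the ndarray embeddings produced by the pipelines above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# `img_vec` and `text_vec` are assumed to be the embeddings generated above
# by the image and text pipelines (both from ViT-B/32, so they share a space).
# score = cosine_similarity(img_vec, text_vec)   # higher means a closer match
```
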
<br />

## Factory Constructor

Create the operator via the following factory method

***clip(name, modality)***

**Parameters:**

***name:*** *str*

The model name of CLIP. Available options are:

- RN50
- RN101
- RN50x4
- RN50x16
- RN50x64
- ViT-B/32
- ViT-B/16
- ViT-L/14

***modality:*** *str*

Which modality (*image* or *text*) is used to generate the embedding.
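
For example, any of the model names listed above can be passed to the factory method. The snippet below is a sketch adapted from the simplified pipeline earlier (not part of the original example), swapping in the `RN50` backbone:

```python
import towhee

# Same simplified image pipeline as in the Code Example, but with the
# ResNet-50 CLIP backbone selected via the `name` parameter.
towhee.glob('./teddy.jpg') \
.image_decode.cv2() \
.towhee.clip(name='RN50', modality='image') \
.show()
```
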
## Interface

An image-text embedding operator takes a [towhee image](link/to/towhee/image/api/doc) or string as input and generates an embedding in ndarray.
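
For illustration, the operator can also be created and called outside a pipeline. The sketch below is an assumption rather than documented usage: it presumes the operator resolves under `towhee.ops` as `ops.towhee.clip`, mirroring the `.towhee.clip` step in the pipelines above.

```python
from towhee import ops

# Encoders created via the factory parameters described above.
# NOTE: the `ops.towhee.clip` resolution path is assumed, not documented here.
image_encoder = ops.towhee.clip(name='ViT-B/32', modality='image')
text_encoder = ops.towhee.clip(name='ViT-B/32', modality='text')

# A towhee image decoded from a local path, and a plain text string.
img = ops.image_decode.cv2()('./teddy.jpg')

img_vec = image_encoder(img)                                             # ndarray embedding
text_vec = text_encoder('A teddybear on a skateboard in Times Square.')  # ndarray embedding
```
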
**Parameters:**