clip

update clip readme.

Signed-off-by: wxywb <xy.wang@zilliz.com>
Branch: main · wxywb, 3 years ago · commit 4ff571dafd
  1. README.md (103 changes)
  2. __init__.py (4 changes)
README.md (103 changes)

@@ -1,2 +1,103 @@
-# clip
# Image-Text Retrieval Embedding with CLIP
*author: David Wang*
<br />
## Description
This operator extracts features for images or text with [CLIP](https://arxiv.org/abs/2103.00020), which generates embeddings for text and images by jointly training an image encoder and a text encoder to maximize the cosine similarity of matching image-text pairs. This operator is an adaptation of [openai/CLIP](https://github.com/openai/CLIP).
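The snippet below is a minimal sketch of the joint image/text encoding this operator wraps, using the [openai/CLIP](https://github.com/openai/CLIP) package directly; it assumes `torch`, `Pillow`, and the `clip` package are installed, and is illustrative rather than the operator's actual implementation.
```python
# Minimal sketch with the openai/CLIP package (illustrative; not this
# operator's internal code).
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# encode an image and a text with the jointly trained encoders
image = preprocess(Image.open("./dog.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a dog"]).to(device)

with torch.no_grad():
    image_embedding = model.encode_image(image)  # shape (1, 512) for ViT-B/32
    text_embedding = model.encode_text(text)     # shape (1, 512) for ViT-B/32
```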
<br />
## Code Example
Load an image from path './dog.jpg' to generate an image embedding.
Read the text 'a dog' to generate a text embedding.
*Write the pipeline in simplified style*:
```python
import towhee
towhee.glob('./dog.jpg') \
.image_decode.cv2() \
.towhee.clip(name='ViT-B/32', modality='image') \
.show()
towhee.dc(["a dog"]) \
.towhee.clip(name='ViT-B/32', modality='text') \
.show()
```
<img src="https://towhee.io/image-embedding/dolg/raw/branch/main/result1.png" alt="result1" style="height:20px;"/>
*Write the same pipeline with explicit input/output name specifications:*
```python
import towhee
towhee.glob['path']('./dog.jpg') \
.image_decode.cv2['path', 'img']() \
.towhee.clip['img', 'vec'](name='ViT-B/32', modality='image') \
.select['img', 'vec']() \
.show()
towhee.dc['text'](["a dog"]) \
.towhee.clip['text', 'vec'](name='ViT-B/32', modality='text') \
.select['text', 'vec']() \
.show()
```
<img src="https://towhee.io/image-embedding/dolg/raw/branch/main/result2.png" alt="result2" style="height:60px;"/>
<br />
## Factory Constructor
Create the operator via the following factory method
***clip(name, modality)***
**Parameters:**
***name:*** *str*
The model name of CLIP.
***modality:*** *str*
Which modality (*image* or *text*) is used to generate the embedding.
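For reference, here is a hedged sketch of constructing the operator outside a pipeline via `towhee.ops`; the `ops.towhee.clip` namespace is an assumption inferred from the pipeline examples above.
```python
from towhee import ops

# hypothetical direct construction; namespace assumed from the examples above
image_op = ops.towhee.clip(name='ViT-B/32', modality='image')
text_op = ops.towhee.clip(name='ViT-B/32', modality='text')
```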
<br />
## Interface
This operator takes a [towhee image](link/to/towhee/image/api/doc) or a text string as input.
It uses the pre-trained model specified by the model name to generate the corresponding embedding as a numpy.ndarray.
**Parameters:**
***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)* or *str*
The data (image or text, depending on the chosen modality) from which to generate the embedding.
**Returns:** *numpy.ndarray*
The data embedding extracted by the model.
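As a usage illustration, the image and text embeddings can be compared with cosine similarity, matching the training objective described above. This is a sketch under the same assumption as the factory example that operators can be built and called directly via `towhee.ops`.
```python
import numpy as np
from towhee import ops

# assumed direct construction, as in the factory example above
decode = ops.image_decode.cv2()
clip_image = ops.towhee.clip(name='ViT-B/32', modality='image')
clip_text = ops.towhee.clip(name='ViT-B/32', modality='text')

img_vec = clip_image(decode('./dog.jpg'))  # numpy.ndarray embedding
txt_vec = clip_text('a dog')               # numpy.ndarray embedding

# cosine similarity between the two embeddings
score = np.dot(img_vec, txt_vec) / (np.linalg.norm(img_vec) * np.linalg.norm(txt_vec))
print(float(score))
```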

__init__.py (4 changes)

@@ -14,9 +14,5 @@
from .clip import Clip

-def dolg(img_size=512, input_dim=3, hidden_dim=1024, output_dim=2048):
-    return Dolg(img_size, input_dim, hidden_dim, output_dim)

def clip(name: str, modality: str):
    return Clip(name, modality)
