
init the operator.

Signed-off-by: image-embedding <image-embedding@zilliz.com>
main
wxywb, 2 years ago, committed by image-embedding
commit c06e50946f
  1. README.md (93)
  2. __init__.py (19)
  3. data2vec_vision.py (37)
  4. requirements.txt (2)
  5. result1.png (BIN)
  6. result2.png (BIN)

93
README.md

@@ -1,2 +1,93 @@
# data2vec-vision
# Image Embedding with data2vec
*author: David Wang*
<br />
## Description
This operator extracts features for images with [data2vec](https://arxiv.org/abs/2202.03555). The core idea is to predict latent representations of the full input data based on a masked view of the input, in a self-distillation setup, using a standard Transformer architecture.
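Under the hood this operator wraps the Hugging Face `transformers` implementation of data2vec-vision. The following is a minimal sketch of what the feature extraction amounts to, assuming the `facebook/data2vec-vision-base-ft1k` checkpoint and an illustrative local image file; it mirrors the operator code included in this commit rather than an official recipe.
```python
import torch
from PIL import Image
from transformers import BeitFeatureExtractor, Data2VecVisionForImageClassification

# Illustrative only: 'towhee.jpg' is a hypothetical local image path.
extractor = BeitFeatureExtractor.from_pretrained('facebook/data2vec-vision-base-ft1k')
model = Data2VecVisionForImageClassification.from_pretrained('facebook/data2vec-vision-base-ft1k')
model.eval()

image = Image.open('towhee.jpg').convert('RGB')
inputs = extractor(image, return_tensors='pt')
with torch.no_grad():
    # The pooled output of the vision backbone serves as the image embedding.
    embedding = model.data2vec_vision(**inputs).pooler_output
print(embedding.squeeze(0).shape)  # e.g. torch.Size([768]) for the base model
```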
<br />
## Code Example
Load an image from path './towhee.jpg' to generate an image embedding.
*Write the pipeline in simplified style*:
```python
import towhee
towhee.glob('./towhee.jpg') \
.image_decode.cv2() \
.image_embedding.data2vec_vision(model_name='facebook/data2vec-vision-base-ft1k') \
.show()
```
<img src="https://towhee.io/image-embedding/data2vec-vision/raw/branch/main/result1.png" alt="result1" style="height:20px;"/>
*Write the same pipeline with explicit input/output name specifications:*
```python
import towhee
towhee.glob['path']('./towhee.jpg') \
.image_decode.cv2['path', 'img']() \
.image_embedding.data2vec_vision['img', 'vec'](model_name='facebook/data2vec-vision-base-ft1k') \
.select['img', 'vec']() \
.show()
```
<img src="https://towhee.io/image-embedding/data2vec-vision/raw/branch/main/result2.png" alt="result2" style="height:60px;"/>
<br />
## Factory Constructor
Create the operator via the following factory method
***data2vec_vision(model_name='facebook/data2vec-vision-base-ft1k')***
**Parameters:**
***model_name***: *str*
The model name as a string.
The default value is "facebook/data2vec-vision-base-ft1k".
Supported model name:
- facebook/data2vec-vision-base-ft1k
- facebook/data2vec-vision-large-ft1k
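For example, a minimal sketch of constructing the operator through towhee's `ops` accessor (the accessor usage is an assumption about the surrounding framework; the large checkpoint is shown only to illustrate overriding the default):
```python
from towhee import ops

# Sketch only: assumes the towhee `ops` accessor resolves this hub operator.
op = ops.image_embedding.data2vec_vision(
    model_name='facebook/data2vec-vision-large-ft1k')
```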
<br />
## Interface
An image embedding operator takes a [towhee image](link/to/towhee/image/api/doc) as input.
It uses the pre-trained model specified by the model name to generate an image embedding as an ndarray.
**Parameters:**
***img:*** *towhee.types.Image (a sub-class of numpy.ndarray)*
The decoded image data as towhee.types.Image (numpy.ndarray).
**Returns:** *numpy.ndarray*
The image embedding extracted by the model.

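Put together, a rough, hedged sketch of the call interface, assuming the `ops` accessor and the `image_decode.cv2` operator used in the pipeline examples above:
```python
from towhee import ops

# Hedged sketch: './towhee.jpg' and the accessor-based construction are illustrative.
decode = ops.image_decode.cv2()
embed = ops.image_embedding.data2vec_vision(model_name='facebook/data2vec-vision-base-ft1k')

img = decode('./towhee.jpg')   # towhee.types.Image (a numpy.ndarray sub-class)
vec = embed(img)               # numpy.ndarray embedding vector
print(vec.shape)
```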
19
__init__.py

@@ -0,0 +1,19 @@
# Copyright 2021 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .data2vec_vision import Data2VecVision


def data2vec_vision(model_name='facebook/data2vec-vision-base-ft1k'):
    return Data2VecVision(model_name)

37
data2vec_vision.py

@@ -0,0 +1,37 @@
# Copyright 2021 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy
import torch
import towhee
from PIL import Image as PILImage
from transformers import BeitFeatureExtractor, Data2VecVisionForImageClassification
from towhee.operator.base import NNOperator
from towhee.types.arg import arg, to_image_color
class Data2VecVision(NNOperator):
    """
    Image embedding operator using data2vec-vision.
    """
    def __init__(self, model_name='facebook/data2vec-vision-base-ft1k'):
        super().__init__()
        self.model = Data2VecVisionForImageClassification.from_pretrained(model_name)
        self.feature_extractor = BeitFeatureExtractor.from_pretrained(model_name)
        self.model.eval()

    @arg(1, to_image_color('RGB'))
    def __call__(self, img: towhee._types.Image) -> numpy.ndarray:
        # Convert the decoded towhee image (an RGB numpy.ndarray) to a PIL image.
        img = PILImage.fromarray(img.astype('uint8'), 'RGB')
        inputs = self.feature_extractor(img, return_tensors="pt")
        with torch.no_grad():
            # Use the pooled output of the vision backbone as the embedding.
            outputs = self.model.data2vec_vision(**inputs).pooler_output
        return outputs.detach().cpu().numpy().flatten()
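A minimal, hedged sketch of exercising the class directly, outside a towhee pipeline; the `towhee._types.Image` wrapper usage and the local image path are assumptions made for illustration:
```python
import numpy
from PIL import Image as PILImage
from towhee._types import Image
from data2vec_vision import Data2VecVision

# Hypothetical standalone check of the operator class above.
op = Data2VecVision(model_name='facebook/data2vec-vision-base-ft1k')
arr = numpy.array(PILImage.open('towhee.jpg').convert('RGB'))
img = Image(arr, 'RGB')   # wrap the decoded array as a towhee image in RGB mode
vec = op(img)             # flattened numpy.ndarray embedding
print(vec.shape)
```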

2
requirements.txt

@@ -0,0 +1,2 @@
numpy
torch
Pillow
transformers>4.19.0

BIN
result1.png

Binary file not shown (16 KiB).

BIN
result2.png

Binary file not shown (176 KiB).
