Load an image from path './towhee.jpg' and use the pretrained ResNet50 model ('resnet50') to generate an image embedding.

*Write the pipeline in simplified style:*

```python
import towhee

towhee.glob('./towhee.jpg') \
      .image_decode() \
      .image_embedding.timm(model_name='resnet50') \
      .show()
```

*Write the same pipeline with explicit input/output name specifications:*

```python
import towhee

towhee.glob['path']('./towhee.jpg') \
      .image_decode['path', 'img']() \
      .image_embedding.timm['img', 'vec'](model_name='resnet50') \
      .select['img', 'vec']() \
      .show()
```

<img src="./result2.png" height="150px"/>

Create the operator via the following factory method:

***image_embedding.timm(model_name='resnet34', num_classes=1000, skip_preprocess=False)***

**Parameters:**

***model_name:*** *str*

The model name in string. The default value is "resnet34".
Refer to the [Timm Docs](https://fastai.github.io/timmdocs/#List-Models-with-Pretrained-Weights) for a full list of supported models.
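
For instance, the available model names can also be listed straight from the installed `timm` package (a quick sketch; it assumes `timm` is importable in the same environment as this operator):

```python
import timm

# All model names that ship with pretrained weights.
print(len(timm.list_models(pretrained=True)))

# Narrow the list with a wildcard pattern, e.g. ResNet variants only.
print(timm.list_models('resnet*', pretrained=True)[:5])
```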

***num_classes:*** *int*

The number of classes. The default value is 1000.
It depends on the model and the dataset the model was trained on.

***skip_preprocess:*** *bool*

The flag to control whether to skip the image pre-processing.
The default value is False.
If set to True, input image data must be prepared in advance in order to properly fit the model.
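
As a quick illustration of these parameters, the operator can be constructed through `towhee.ops` roughly as follows (a sketch, assuming the standard `ops` accessor; the non-default argument values are only examples):

```python
from towhee import ops

# Defaults: resnet34 backbone, 1000 classes, built-in pre-processing enabled.
default_op = ops.image_embedding.timm()

# Swap in a ResNet50 backbone and skip pre-processing because the input
# images are assumed to be resized and normalized in advance.
custom_op = ops.image_embedding.timm(model_name='resnet50',
                                     num_classes=1000,
                                     skip_preprocess=True)
```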

An image embedding operator takes a towhee image as input.
It uses the pre-trained model specified by the model name to generate an image embedding in ndarray.

**Parameters:**

***img:*** *towhee.types.Image (a sub-class of numpy.ndarray)*

The decoded image data in numpy.ndarray.

**Returns:** *numpy.ndarray*

The image embedding extracted by the model.
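
A minimal sketch of exercising this interface outside a pipeline (it assumes operators returned by `ops` can be called directly; the embedding size shown in the comment depends on the chosen model):

```python
from towhee import ops

decode = ops.image_decode()
embed = ops.image_embedding.timm(model_name='resnet50')

img = decode('./towhee.jpg')   # towhee.types.Image, a sub-class of numpy.ndarray
vec = embed(img)               # numpy.ndarray image embedding
print(vec.shape)               # e.g. (2048,) for resnet50
```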