@@ -18,13 +18,13 @@ which maintains SOTA deep-learning models and tools in computer vision.

Load an image from path './towhee.jpg'
and use the pretrained ResNet50 model ('resnet50') to generate an image embedding.

-*Write the pipeline in simplified style*:
+*Write the pipeline in simplified style:*

```python
import towhee

towhee.glob('./towhee.jpg') \
-      .image_decode.cv2() \
+      .image_decode() \
       .image_embedding.timm(model_name='resnet50') \
       .show()
```
@@ -36,9 +36,9 @@ towhee.glob('./towhee.jpg') \

import towhee

towhee.glob['path']('./towhee.jpg') \
-      .image_decode.cv2['path', 'img']() \
+      .image_decode['path', 'img']() \
       .image_embedding.timm['img', 'vec'](model_name='resnet50') \
-      .select('img', 'vec') \
+      .select['img', 'vec']() \
       .show()
```
<img src="./result2.png" height="150px"/>
@@ -53,18 +53,17 @@ Create the operator via the following factory method

**Parameters:**

-***model_name***: *str*
+***model_name:*** *str*

The model name in string. The default value is "resnet34".
Refer to [Timm Docs](https://fastai.github.io/timmdocs/#List-Models-with-Pretrained-Weights) for a full list of supported models.

-***num_classes***: *int*
+***num_classes:*** *int*

The number of classes. The default value is 1000.
It depends on the model and the dataset.

-***skip_preprocess***: *bool*
+***skip_preprocess:*** *bool*

The flag to control whether to skip image preprocessing.
The default value is False.
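When `skip_preprocess` is set to True, the caller must supply input that is already resized and normalized. As a rough sketch of what such preparation typically looks like (standard ImageNet-style statistics; the operator's exact transform depends on the timm model's data config, so treat this as an illustration, not the operator's implementation):

```python
import numpy as np

# Well-known ImageNet channel statistics; a given timm model's data
# config may differ, so this is only a representative example.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: np.ndarray) -> np.ndarray:
    """Scale a uint8 HWC image to [0, 1] and normalize per channel."""
    x = img.astype(np.float32) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD

gray = np.full((224, 224, 3), 128, dtype=np.uint8)  # stand-in decoded image
out = preprocess(gray)
assert out.shape == (224, 224, 3) and out.dtype == np.float32
```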
@@ -78,18 +77,15 @@ In this case, input image data must be prepared in advance in order to properly

An image embedding operator takes a towhee image as input.
It uses the pre-trained model specified by the model name to generate an image embedding in ndarray.
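The call contract can be pictured with a minimal stand-in (a hypothetical class, not part of the operator; the real operator forwards the image through the pretrained timm model rather than this toy pooling):

```python
import numpy as np

# Hypothetical stand-in for the operator's interface: a decoded HWC
# image array goes in, a fixed-length 1-D embedding vector comes out.
class ToyImageEmbedding:
    def __init__(self, dim: int = 2048):  # ResNet50's pooled feature size
        self.dim = dim

    def __call__(self, img: np.ndarray) -> np.ndarray:
        # Toy logic only: mean-pool pixels, then tile to a fixed size.
        pooled = img.astype(np.float32).mean(axis=(0, 1))
        return np.resize(pooled, self.dim)

op = ToyImageEmbedding()
vec = op(np.zeros((224, 224, 3), dtype=np.uint8))
assert vec.shape == (2048,)
```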
**Parameters:**

-***img***: *towhee.types.Image (a sub-class of numpy.ndarray)*
+***img:*** *towhee.types.Image (a sub-class of numpy.ndarray)*

The decoded image data in numpy.ndarray.
-**Returns**:
-
-*numpy.ndarray*
+**Returns:** *numpy.ndarray*

The image embedding extracted by the model.
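A common next step with the returned ndarray is similarity comparison between embeddings. A minimal sketch with random stand-in vectors (ResNet50's pooled embedding is 2048-dimensional; real vectors would come from the operator's output):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; real ones would come from the pipeline above.
rng = np.random.default_rng(0)
vec_a = rng.standard_normal(2048).astype(np.float32)
vec_b = rng.standard_normal(2048).astype(np.float32)

assert abs(cosine_similarity(vec_a, vec_a) - 1.0) < 1e-5  # self-similarity is 1
assert -1.0 <= cosine_similarity(vec_a, vec_b) <= 1.0
```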