This operator extracts image features with [Multi-Path Vision Transformer (MPViT)](https://arxiv.org/abs/2112.11010), generating embeddings for images. MPViT embeds features of the same size (i.e., sequence length) from patches of different scales simultaneously by using overlapping convolutional patch embedding. Tokens of different scales are then fed independently into the Transformer encoders via multiple paths, and the resulting features are aggregated, enabling both fine and coarse feature representations at the same feature level.
Pretrained model names include `mpvit_tiny`, `mpvit_xsmall`, `mpvit_small`, and `mpvit_base`, all of which are pretrained on the ImageNet-1K dataset. For more information, please refer to the original [MPViT GitHub page](https://github.com/youngwanLEE/MPViT).
***weights_path:*** *str*
Your local weights path. The default is None, which means the pretrained model weights are used.
***device:*** *str*
The device to run the model on, either `cpu` or `cuda`.
***num_classes:*** *int*
The number of classes. The default value is 1000, which corresponds to the ImageNet-1K pretraining.
If you want to fine-tune this operator, change this value to match your dataset.
***skip_preprocess:*** *bool*
The flag controlling whether to skip image preprocessing.
The default value is False.
If set to True, the operator skips the image preprocessing steps (transforms).
In that case, the input image data must be prepared in advance so that it properly fits the model.
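When `skip_preprocess=True`, you must apply the transforms yourself. A minimal sketch of the typical kind of preprocessing (not the operator's actual code; the standard ImageNet mean/std values and the 224×224 input size are assumptions here):

```python
import numpy as np

# Standard ImageNet normalization constants (an assumption, not taken from
# this operator's source).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 uint8 RGB image; returns a normalized CxHxW float32 array."""
    x = img.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = (x - MEAN) / STD                 # per-channel normalization
    return x.transpose(2, 0, 1)          # HWC -> CHW, as PyTorch models expect

x = preprocess(np.full((224, 224, 3), 128, dtype=np.uint8))
print(x.shape)  # (3, 224, 224)
```

Resizing and cropping to the model's expected input size would normally happen before this step.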
<br/>
## Interface
An image embedding operator takes a Towhee image as input.
It uses the pre-trained model specified by the model name to generate an image embedding as an ndarray.