Image Embedding
Description
An image embedding pipeline generates a feature vector for a given image. This pipeline extracts image features with models (ResNet50 by default) provided by Timm, a deep-learning library created by Ross Wightman that maintains SOTA computer-vision models and tools.
Code Example
Create a pipeline with the default configuration:

```python
from towhee import AutoPipes

p = AutoPipes.pipeline('image-embedding')
res = p('https://github.com/towhee-io/towhee/raw/main/towhee_logo.png')
res.get()
```
Create a pipeline with a custom configuration:

For more parameters, refer to the Configuration section.

```python
from towhee import AutoPipes, AutoConfig

conf = AutoConfig.load_config('image-embedding')
conf.model_name = 'resnet34'
p = AutoPipes.pipeline('image-embedding', conf)
res = p('https://github.com/towhee-io/towhee/raw/main/towhee_logo.png')
res.get()
```
Configuration
ImageEmbeddingConfig
Some of these parameters are described in more detail in the image_decode.cv2 and image_embedding.timm operators.
mode: str
The color mode of the decoded image, 'BGR' or 'RGB'; defaults to 'BGR'.
model_name: str
The model name as a string. The default value is "resnet50". Refer to the Timm docs for a full list of supported models.
num_classes: int
The number of output classes. The default value is 1000; the appropriate value depends on the model and dataset.
skip_preprocess: bool
Whether to skip image preprocessing. The default value is False. If set to True, the pipeline skips the image preprocessing steps (transforms); in this case, the input image data must be prepared in advance so that it properly fits the model.
device: int
The GPU device ID. Defaults to -1, which means the CPU is used.
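To make the `mode` and `skip_preprocess` options more concrete, here is an illustrative sketch (not part of the Towhee API) of what channel reordering and a typical ImageNet-style normalization, as used by many Timm models, look like on a single pixel:

```python
# Illustrative only: conceptual view of the `mode` and `skip_preprocess`
# options. The pipeline itself performs these steps internally.

def bgr_to_rgb(pixel):
    """Reverse the channel order of a (B, G, R) pixel to (R, G, B)."""
    return pixel[::-1]

# Typical ImageNet normalization statistics, used by many Timm models.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize(rgb_pixel):
    """Scale 0-255 RGB channel values to [0, 1], then normalize
    with ImageNet mean and standard deviation per channel."""
    return tuple(
        (c / 255.0 - m) / s
        for c, m, s in zip(rgb_pixel, IMAGENET_MEAN, IMAGENET_STD)
    )

bgr = (30, 60, 90)        # OpenCV-style BGR pixel
rgb = bgr_to_rgb(bgr)     # channel order becomes (90, 60, 30)
normalized = normalize(rgb)
```

If `skip_preprocess=True`, transforms like this are not applied, so the input must already be resized and normalized to match what the model expects.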
Interface
Encode the image and generate an embedding vector.
Parameters:
img: str
Path or URL of the image to be loaded.
Returns: np.ndarray
An embedding vector representing the image.
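A common use of the returned embedding vectors is similarity search: images with similar content tend to have embeddings that are close under cosine similarity. A minimal sketch in plain Python (illustrative, not part of the Towhee API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors
    (1.0 for identical directions, 0.0 for orthogonal ones)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In practice, v1 and v2 would be embeddings returned by the pipeline,
# e.g. v1 = p('cat1.jpg').get()[0]; here we use toy vectors instead.
v1 = [0.1, 0.2, 0.3]
v2 = [0.3, 0.2, 0.1]
score = cosine_similarity(v1, v2)
```

At scale, such comparisons are typically delegated to a vector database rather than computed pairwise in Python.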