# Image Embedding

## **Description**

An image embedding pipeline generates a feature vector from a given image. This pipeline extracts features from an image with a 'ResNet50' model provided by [Timm](https://github.com/rwightman/pytorch-image-models). Timm is a deep-learning library developed by [Ross Wightman](https://twitter.com/wightmanr) that maintains SOTA deep-learning models and tools in computer vision.

<br />



## Code Example

### Create pipeline with the default configuration

```python
from towhee import AutoPipes

p = AutoPipes.pipeline('image-embedding')
res = p('https://github.com/towhee-io/towhee/raw/main/towhee_logo.png')
res.get()
```

### Create pipeline and set the configuration

> For more parameters, refer to the Configuration section below.

```python
from towhee import AutoPipes, AutoConfig

conf = AutoConfig.load_config('image-embedding')
conf.model_name = 'resnet34'

p = AutoPipes.pipeline('image-embedding', conf)
res = p('https://github.com/towhee-io/towhee/raw/main/towhee_logo.png')
res.get()
```

<br />



## **Configuration**

### **ImageEmbeddingConfig**

> You can find some parameters in [image_decode.cv2](https://towhee.io/image-decode/cv2) and [image_embedding.timm](https://towhee.io/image-embedding/timm) operators.

***mode:*** *str*

The color mode of the decoded image, 'BGR' or 'RGB'. The default value is 'BGR'.

***model_name:*** *str*

The model name as a string. The default value is "resnet50". Refer to [Timm Docs](https://fastai.github.io/timmdocs/#List-Models-with-Pretrained-Weights) for a full list of supported models.

***num_classes:*** *int*

The number of output classes. The default value is 1000; the appropriate value depends on the model and dataset.

***skip_preprocess:*** *bool*

The flag that controls whether to skip image preprocessing. The default value is False. If set to True, the pipeline skips the image preprocessing steps (transforms); in this case, the input image data must be prepared in advance to fit the model properly.

***device:*** *int*

The GPU device ID. Defaults to -1, which means running on CPU.
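The configuration fields above can be combined on one config object before building the pipeline. A minimal sketch, assuming each documented parameter is exposed as an attribute of the loaded config in the same way `model_name` is:

```python
from towhee import AutoPipes, AutoConfig

# Load the default image-embedding configuration and override several fields.
conf = AutoConfig.load_config('image-embedding')
conf.model_name = 'resnet34'  # any model from the Timm list
conf.mode = 'RGB'             # decode images as RGB instead of the default BGR
conf.device = 0               # run on GPU 0; -1 (the default) means CPU

p = AutoPipes.pipeline('image-embedding', conf)
res = p('https://github.com/towhee-io/towhee/raw/main/towhee_logo.png')
res.get()
```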

<br />



## Interface

Encode an image and generate an embedding vector.

**Parameters:**

***img***: *str*

Path or URL of the image to be loaded.

**Returns:** *np.ndarray*

The embedding vector representing the image.
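Since the pipeline returns a plain `np.ndarray`, it can be post-processed directly. For example, embeddings are often L2-normalized before cosine-similarity search. A small sketch using a random stand-in vector (for 'resnet50', the pooled feature vector is typically 2048-dimensional; the exact shape depends on the model chosen):

```python
import numpy as np

def l2_normalize(vec: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so dot products equal cosine similarity."""
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Stand-in for the pipeline output; res.get() yields an np.ndarray.
embedding = np.random.rand(2048).astype(np.float32)
unit = l2_normalize(embedding)

print(unit.shape)  # (2048,)
```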