diff --git a/README.md b/README.md
index 95ed7e1..1c02fea 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,103 @@
-# clip
+# Image-Text Retrieval Embedding with CLIP
+
+*author: David Wang*
+
+
+
+
+
+
+## Description
+
+This operator extracts features for images or text with [CLIP](https://arxiv.org/abs/2103.00020), which generates embeddings for text and images in a shared space by jointly training an image encoder and a text encoder to maximize the cosine similarity of matching image-text pairs. This operator is an adaptation of [openai/CLIP](https://github.com/openai/CLIP).
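+
+Because matching image and text embeddings are trained toward high cosine similarity, retrieval reduces to scoring pairs of embeddings. Below is a minimal sketch of that scoring step; `img_vec` and `text_vec` are placeholders for embeddings produced by this operator:
+
+```python
+import numpy as np
+
+def cosine_similarity(img_vec: np.ndarray, text_vec: np.ndarray) -> float:
+    # Cosine similarity: dot product of the two embeddings divided by the
+    # product of their L2 norms; higher means a better image-text match.
+    return float(np.dot(img_vec, text_vec) /
+                 (np.linalg.norm(img_vec) * np.linalg.norm(text_vec)))
+```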
+
+
+
+
+
+## Code Example
+
+Load an image from path './dog.jpg' to generate an image embedding.
+Read the text 'a dog' to generate a text embedding.
+
+*Write the pipeline in the simplified style*:
+
+```python
+import towhee
+
+towhee.glob('./dog.jpg') \
+ .image_decode.cv2() \
+ .towhee.clip(name='ViT-B/32', modality='image') \
+ .show()
+
+towhee.dc(["a dog"]) \
+ .towhee.clip(name='ViT-B/32', modality='text') \
+ .show()
+```
+
+
+*Write the same pipeline with explicit input/output name specifications:*
+
+```python
+import towhee
+
+towhee.glob['path']('./dog.jpg') \
+    .image_decode.cv2['path', 'img']() \
+    .towhee.clip['img', 'vec'](name='ViT-B/32', modality='image') \
+    .select['img', 'vec']() \
+    .show()
+
+towhee.dc['text'](["a dog"]) \
+    .towhee.clip['text', 'vec'](name='ViT-B/32', modality='text') \
+    .select['text', 'vec']() \
+    .show()
+```
+
+
+
+
+
+
+
+## Factory Constructor
+
+Create the operator via the following factory method
+
+***clip(name, modality)***
+
+**Parameters:**
+
+ ***name:*** *str*
+
+    The name of the CLIP model, e.g. 'ViT-B/32'.
+
+ ***modality:*** *str*
+
+    Which modality (*image* or *text*) is used to generate the embedding.
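+
+As a rough illustration, the operator can also be constructed outside a pipeline through `towhee.ops`, assuming it resolves from the Towhee hub under the `towhee/clip` name used in the examples above:
+
+```python
+from towhee import ops
+
+# Sketch: create one operator instance per modality
+# (hub resolution details may vary with the Towhee version).
+image_encoder = ops.towhee.clip(name='ViT-B/32', modality='image')
+text_encoder = ops.towhee.clip(name='ViT-B/32', modality='text')
+```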
+
+
+
+
+
+## Interface
+
+The operator takes a [towhee image](link/to/towhee/image/api/doc) or a piece of text as input, depending on the chosen modality.
+It uses the pre-trained model specified by the model name to generate an embedding as a numpy ndarray.
+
+
+**Parameters:**
+
+ ***data:*** *towhee.types.Image (a sub-class of numpy.ndarray)* or *str*
+
+    The data (image or text, depending on the chosen modality) from which to generate the embedding.
+
+
+
+**Returns:** *numpy.ndarray*
+
+    The data embedding extracted by the model.
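+
+A minimal sketch of pulling the embedding out of a pipeline and inspecting it, assuming './dog.jpg' exists and that `to_list()` materializes the DataCollection (version-dependent details are noted in the comments):
+
+```python
+import towhee
+
+# Sketch: run the named-field pipeline from above and materialize the result.
+# The embedding is expected to be a numpy.ndarray (e.g. 512-dim for ViT-B/32).
+entity = (
+    towhee.glob['path']('./dog.jpg')
+          .image_decode.cv2['path', 'img']()
+          .towhee.clip['img', 'vec'](name='ViT-B/32', modality='image')
+          .to_list()[0]
+)
+print(type(entity.vec), entity.vec.shape)
+```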
+
+
+
diff --git a/__init__.py b/__init__.py
index b80b17d..18b43d4 100644
--- a/__init__.py
+++ b/__init__.py
@@ -14,9 +14,5 @@
from .clip import Clip
-def dolg(img_size=512, input_dim=3, hidden_dim=1024, output_dim=2048):
- return Dolg(img_size, input_dim, hidden_dim, output_dim)
-
-
def clip(name: str, modality: str):
return Clip(name, modality)