diff --git a/README.md b/README.md
index 0d5a3e3..2f5ee2e 100644
--- a/README.md
+++ b/README.md
@@ -17,42 +17,31 @@ This operator extracts features for image or text with [SLIP](https://arxiv.org/
## Code Example
-Load an image from path './teddy.jpg' to generate an image embedding.
+Load an image from path './moon.jpg' to generate an image embedding.
-Read the text 'A teddybear on a skateboard in Times Square.' to generate an text embedding.
+Read the text 'moon in the night.' to generate a text embedding.
- *Write the pipeline in simplified style*:
+*Write a pipeline with explicit input/output name specifications:*
```python
-import towhee
+from towhee.dc2 import pipe, ops, DataCollection
-towhee.glob('./moon.jpeg') \
- .image_decode() \
- .image_text_embedding.slip(model_name='slip_vit_small', modality='image') \
- .show()
+img_pipe = (
+ pipe.input('url')
+ .map('url', 'img', ops.image_decode.cv2_rgb())
+ .map('img', 'vec', ops.image_text_embedding.slip(model_name='slip_vit_small', modality='image'))
+ .output('img', 'vec')
+)
-towhee.dc(['moon in the night.']) \
- .image_text_embedding.slip(model_name='slip_vit_small', modality='text') \
- .show()
-```
-
-
+text_pipe = (
+ pipe.input('text')
+ .map('text', 'vec', ops.image_text_embedding.slip(model_name='slip_vit_small', modality='text'))
+ .output('text', 'vec')
+)
-*Write a same pipeline with explicit inputs/outputs name specifications:*
+DataCollection(img_pipe('./moon.jpg')).show()
+DataCollection(text_pipe('moon in the night.')).show()
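+
+# Optional sketch: grab the raw embedding instead of rendering a table.
+# Assumes the dc2 pipeline call returns a result queue whose .get() yields
+# the output row ('img'/'text', 'vec') as a plain Python list.
+_, img_vec = img_pipe('./moon.jpg').get()
+_, text_vec = text_pipe('moon in the night.').get()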
-```python
-import towhee
-
-towhee.glob['path']('./moon.jpeg') \
- .image_decode['path', 'img']() \
- .image_text_embedding.slip['img', 'vec'](model_name='slip_vit_small', modality='image') \
- .select['img', 'vec']() \
- .show()
-
-towhee.dc['text'](['moon in the night.']) \
- .image_text_embedding.slip['text','vec'](model_name= 'slip_vit_small', modality='text') \
- .select['text', 'vec']() \
- .show()
```