diff --git a/README.md b/README.md
index 333ca49..f81201e 100644
--- a/README.md
+++ b/README.md
@@ -21,38 +21,33 @@ Load an image from path './teddy.jpg' to generate an image embedding.
-Read the text 'スケートボードに乗っているテディベア。' to generate an text embedding.
+Read the text 'スケートボードに乗っているテディベア。' to generate a text embedding.
- *Write the pipeline in simplified style*:
-
-```python
-import towhee
-
-towhee.glob('./teddy.jpg') \
- .image_decode() \
- .image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='image') \
- .show()
-
-towhee.dc(["スケートボードに乗っているテディベア。"]) \
- .image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='text') \
- .show()
-```
-
-
-
-*Write a same pipeline with explicit inputs/outputs name specifications:*
+*Write a pipeline with explicit input/output name specifications:*
```python
-import towhee
-
-towhee.glob['path']('./teddy.jpg') \
- .image_decode['path', 'img']() \
- .image_text_embedding.japanese_clip['img', 'vec'](model_name='japanese-clip-vit-b-16', modality='image') \
- .select['img', 'vec']() \
- .show()
-
-towhee.dc['text'](["スケートボードに乗っているテディベア。"]) \
- .image_text_embedding.japanese_clip['text','vec'](model_name='japanese-clip-vit-b-16', modality='text') \
- .select['text', 'vec']() \
- .show()
+from towhee.dc2 import pipe, ops, DataCollection
+
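+# Image pipeline: decode the image at the given path into RGB, then embed it with Japanese CLIP.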
+img_pipe = (
+ pipe.input('url')
+ .map('url', 'img', ops.image_decode.cv2_rgb())
+ .map('img', 'vec', ops.image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='image'))
+ .output('img', 'vec')
+)
+
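+# Text pipeline: embed the input sentence with the same model's text encoder.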
+text_pipe = (
+ pipe.input('text')
+ .map('text', 'vec', ops.image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='text'))
+ .output('text', 'vec')
+)
+
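+# Wrap each result in a DataCollection to render it as a table; the sample sentence means "A teddy bear riding a skateboard."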
+DataCollection(img_pipe('./teddy.jpg')).show()
+DataCollection(text_pipe('スケートボードに乗っているテディベア。')).show()
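+
+# To work with the raw vector instead of rendering a table (a minimal sketch,
+# assuming the pipeline result exposes get() to read back one row of outputs):
+text_vec = text_pipe('スケートボードに乗っているテディベア。').get()[1]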
```
@@ -72,7 +60,16 @@ Create the operator via the following factory method
***model_name:*** *str*
- The model name of CLIP. Supported model names:
+ The model name of Japanese CLIP. Supported model names:
- japanese-clip-vit-b-16
- japanese-cloob-vit-b-16
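+
+For example, a minimal sketch that instantiates the operator with the alternative
+CLOOB weights (same factory signature as above, shown only as an illustration):
+
+```python
+from towhee.dc2 import ops
+
+op = ops.image_text_embedding.japanese_clip(model_name='japanese-cloob-vit-b-16', modality='text')
+```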