diff --git a/README.md b/README.md
index 15d3d25..872bfa6 100644
--- a/README.md
+++ b/README.md
@@ -18,38 +18,23 @@ and maps vectors with labels provided by datasets used for pre-training.
Use the pretrained Movinet model to classify the action in the video at './jumpingjack.gif' and generate its feature vector
([download](https://github.com/tensorflow/models/raw/f8af2291cced43fc9f1d9b41ddbf772ae7b0d7d2/official/projects/movinet/files/jumpingjack.gif)).
- *Write the pipeline in simplified style*:
+*Write a pipeline with explicitly named inputs and outputs*:
-- Predict labels (default):
```python
-import towhee
-
-(
- towhee.glob('./jumpingjack.gif')
- .video_decode.ffmpeg()
- .action_classification.movinet(
- model_name='movineta0', topk=5)
- .show()
+from towhee.dc2 import pipe, ops, DataCollection
+
+p = (
+ pipe.input('path')
+ .map('path', 'frames', ops.video_decode.ffmpeg())
+ .map('frames', ('labels', 'scores', 'features'),
+ ops.action_classification.movinet(model_name='movineta0'))
+ .output('path', 'labels', 'scores', 'features')
)
-```
-
-
-*Write a same pipeline with explicit inputs/outputs name specifications*:
-```python
-import towhee
-
-(
- towhee.glob['path']('./jumpingjack.gif')
- .video_decode.ffmpeg['path', 'frames']()
- .action_classification.movinet['frames', ('labels', 'scores', 'features')](
- model_name='movineta0')
- .select['path', 'labels', 'scores', 'features']()
- .show(formatter={'path': 'video_path'})
-)
+DataCollection(p('./jumpingjack.gif')).show()
```
-
+
diff --git a/result.png b/result.png
new file mode 100644
index 0000000..450b730
Binary files /dev/null and b/result.png differ
diff --git a/result1.png b/result1.png
deleted file mode 100644
index 81320aa..0000000
Binary files a/result1.png and /dev/null differ
diff --git a/result2.png b/result2.png
deleted file mode 100644
index fb07273..0000000
Binary files a/result2.png and /dev/null differ