
update readme with dc2

main
ChengZi 2 years ago
commit 0b5f976c7e
  1. README.md (49 lines changed)
  2. result1.png (BIN)
  3. result2.png (BIN)
  4. result3.png (BIN)
  5. result4.png (BIN)
  6. text_emb_result.png (BIN)
  7. video_emb_result.png (BIN)

README.md (49 lines changed)

@@ -18,47 +18,34 @@ Load a video from path './demo_video.mp4' to generate a video embedding.
 Read the text 'kids feeding and playing with the horse' to generate a text embedding.
 *Write the pipeline in simplified style*:
 - Encode video (default):
 ```python
-import towhee
-towhee.dc(['./demo_video.mp4']) \
-    .video_decode.ffmpeg() \
-    .video_text_embedding.bridge_former(model_name='frozen_model', modality='video') \
-    .show()
+from towhee.dc2 import pipe, ops, DataCollection
+
+p = (
+    pipe.input('video_path') \
+        .map('video_path', 'video_frames', ops.video_decode.ffmpeg()) \
+        .map('video_frames', 'vec', ops.video_text_embedding.bridge_former(model_name='frozen_model', modality='video')) \
+        .output('video_path', 'video_frames', 'vec')
+)
+
+DataCollection(p('./demo_video.mp4')).show()
 ```
-<img src="./result1.png" width="800px"/>
+<img src="./video_emb_result.png" width="800px"/>
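For readers new to the dataflow style above: each `map(input_col, output_col, op)` step reads a named column from the current row, applies an operator, and binds the result to a new column name, and `output(...)` selects which columns the pipeline returns. A toy sketch of that idea in plain Python may make it concrete; this is NOT towhee's implementation, and `ToyPipe` plus the stand-in operators are hypothetical:

```python
# Toy illustration of a named-column dataflow pipeline, in the spirit of
# towhee.dc2's pipe.input(...).map(...).output(...) chain. NOT towhee code;
# ToyPipe and the fake operators below are made up for illustration.

class ToyPipe:
    def __init__(self, input_name):
        self.input_name = input_name
        self.steps = []            # (in_key, out_key, fn) triples
        self.output_keys = ()

    def map(self, in_key, out_key, fn):
        self.steps.append((in_key, out_key, fn))
        return self               # allow chaining, like the real API

    def output(self, *keys):
        self.output_keys = keys
        return self

    def __call__(self, value):
        row = {self.input_name: value}
        for in_key, out_key, fn in self.steps:
            row[out_key] = fn(row[in_key])   # each step adds a named column
        return {k: row[k] for k in self.output_keys}

# Stand-ins for ops.video_decode.ffmpeg() and the embedding operator.
def fake_decode(path):
    return ["frame0", "frame1", "frame2"]

def fake_embed(frames):
    return [len(f) for f in frames]

p = (
    ToyPipe('video_path')
    .map('video_path', 'video_frames', fake_decode)
    .map('video_frames', 'vec', fake_embed)
    .output('video_path', 'vec')
)

result = p('./demo_video.mp4')
```

The real towhee operators decode actual frames and run the BridgeFormer model, but the column-binding mechanics are the same shape.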
 - Encode text:
 ```python
-import towhee
-towhee.dc(['kids feeding and playing with the horse']) \
-    .video_text_embedding.bridge_former(model_name='frozen_model', modality='text') \
-    .show()
+from towhee.dc2 import pipe, ops, DataCollection
+
+p = (
+    pipe.input('text') \
+        .map('text', 'vec', ops.video_text_embedding.bridge_former(model_name='frozen_model', modality='text')) \
+        .output('text', 'vec')
+)
+
+DataCollection(p('kids feeding and playing with the horse')).show()
 ```
-<img src="./result2.png" width="800px"/>
-*Write a same pipeline with explicit inputs/outputs name specifications:*
-```python
-import towhee
-towhee.dc['path'](['./demo_video.mp4']) \
-    .video_decode.ffmpeg['path', 'frames']() \
-    .video_text_embedding.bridge_former['frames', 'vec'](model_name='frozen_model', modality='video') \
-    .select['path', 'vec']() \
-    .show(formatter={'path': 'video_path'})
-towhee.dc['text'](["kids feeding and playing with the horse"]) \
-    .video_text_embedding.bridge_former['text','vec'](model_name='frozen_model', modality='text') \
-    .select['text', 'vec']() \
-    .show()
-```
-<img src="./result3.png" width="800px"/>
-<img src="./result4.png" width="800px"/>
+<img src="./text_emb_result.png" width="800px"/>
 <br />
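The point of a shared video-text embedding space is that the `vec` produced for a video and the `vec` produced for a caption can be compared directly, typically by cosine similarity. A minimal sketch of that comparison; the vectors below are dummies standing in for real model outputs, which would come from the two pipelines above:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dummy 4-d embeddings; real BridgeFormer vectors are much higher-dimensional.
video_vec = [0.1, 0.3, -0.2, 0.8]
text_vec = [0.1, 0.4, -0.1, 0.7]

score = cosine_similarity(video_vec, text_vec)
```

Higher scores mean the text is a better match for the video, which is how retrieval over a collection of video embeddings would rank results.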

Binary files (not shown):

- result1.png — removed (was 12 KiB)
- result2.png — removed (was 12 KiB)
- result3.png — removed (was 115 KiB)
- result4.png — removed (was 6.7 KiB)
- text_emb_result.png — added (14 KiB)
- video_emb_result.png — added (31 KiB)
