
Update

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Branch: main
Jael Gu, 2 years ago
Commit 0e28a8a4b7
Changed files:
  1. README.md (37 lines changed)
  2. result.png (BIN)
  3. result1.png (BIN)
  4. result2.png (BIN)

README.md (37 lines changed)

@@ -18,38 +18,23 @@ and maps vectors with labels provided by datasets used for pre-training.
 Use the pretrained Omnivore model to classify and generate a vector for the given video path './archery.mp4'
 ([download](https://dl.fbaipublicfiles.com/pytorchvideo/projects/archery.mp4)).
-*Write the pipeline in simplified style*:
+*Write a pipeline with explicit inputs/outputs name specifications*:
 - Predict labels (default):
 ```python
-import towhee
-(
-    towhee.glob('./archery.mp4')
-          .video_decode.ffmpeg()
-          .action_classification.omnivore(
-              model_name='omnivore_swinT', topk=5)
-          .show()
+from towhee.dc2 import pipe, ops, DataCollection
+p = (
+    pipe.input('path')
+        .map('path', 'frames', ops.video_decode.ffmpeg())
+        .map('frames', ('labels', 'scores', 'features'),
+             ops.action_classification.omnivore(model_name='omnivore_swinT'))
+        .output('path', 'labels', 'scores', 'features')
 )
-```
-<img src="./result1.png" height="px"/>
-*Write a same pipeline with explicit inputs/outputs name specifications*:
-```python
-import towhee
-(
-    towhee.glob['path']('./archery.mp4')
-          .video_decode.ffmpeg['path', 'frames']()
-          .action_classification.omnivore['frames', ('labels', 'scores', 'features')](
-              model_name='omnivore_swinT')
-          .select['path', 'labels', 'scores', 'features']()
-          .show(formatter={'path': 'video_path'})
-)
+DataCollection(p('./archery.mp4')).show()
 ```
-<img src="./result2.png" height="px"/>
+<img src="./result.png" height="px"/>
 <br />
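
For context, a minimal sketch (not part of this commit) of how the new-style pipeline introduced above could be consumed without `DataCollection(...).show()`. It assumes a Towhee version that ships the `dc2` pipeline API, that `./archery.mp4` has been downloaded locally, and that the object returned by calling the pipeline exposes `get()` to fetch one row of the declared outputs.

```python
# Sketch only: drive the new towhee.dc2 pipeline and read its outputs directly.
# Assumptions: towhee with the dc2 API is installed, './archery.mp4' exists,
# and the result object returned by p(...) provides get() for the output row.
from towhee.dc2 import pipe, ops

p = (
    pipe.input('path')
        .map('path', 'frames', ops.video_decode.ffmpeg())
        .map('frames', ('labels', 'scores', 'features'),
             ops.action_classification.omnivore(model_name='omnivore_swinT'))
        .output('path', 'labels', 'scores', 'features')
)

path, labels, scores, features = p('./archery.mp4').get()
print(labels)  # predicted action labels for the clip
print(scores)  # confidence scores aligned with the labels
```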

result.png (BIN)
Binary file not shown. After: 17 KiB.

result1.png (BIN)
Binary file not shown. Before: 32 KiB.

result2.png (BIN)
Binary file not shown. Before: 114 KiB.
