
modify README

Signed-off-by: gexy5 <xinyu.ge@zilliz.com>
Branch: main
gexy5 committed 3 years ago
Commit fe352b6ea0
  1. README.md (12 changed lines)
  2. movinet.py (3 changed lines)
  3. result1.png (BIN)
  4. result2.png (BIN)

README.md (12 changed lines)

@@ -15,8 +15,8 @@ and maps vectors with labels provided by datasets used for pre-training.
 ## Code Example
-Use the pretrained Movinet model to classify and generate a vector for the given video path './archery.mp4'
-([download](https://dl.fbaipublicfiles.com/pytorchvideo/projects/archery.mp4)).
+Use the pretrained Movinet model to classify and generate a vector for the given video path './jumpingjack.gif'
+([download](https://github.com/tensorflow/models/raw/f8af2291cced43fc9f1d9b41ddbf772ae7b0d7d2/official/projects/movinet/files/jumpingjack.gif)).
 *Write the pipeline in simplified style*:
@@ -25,7 +25,7 @@ Use the pretrained Movinet model to classify and generate a vector for the given
 import towhee
 (
-    towhee.glob('./archery.mp4')
+    towhee.glob('./jumpingjack.gif')
           .video_decode.ffmpeg()
           .action_classification.movinet(
               model_name='movineta0', topk=5)
@@ -40,7 +40,7 @@ import towhee
 import towhee
 (
-    towhee.glob['path']('./archery.mp4')
+    towhee.glob['path']('./jumpingjack.gif')
           .video_decode.ffmpeg['path', 'frames']()
           .action_classification.movinet['frames', ('labels', 'scores', 'features')](
              model_name='movineta0')
@@ -57,14 +57,14 @@ import towhee
 Create the operator via the following factory method
-***video_classification.omnivore(
+***video_classification.movinet(
     model_name='movineta0', skip_preprocess=False, classmap=None, topk=5)***
 **Parameters:**
 ***model_name***: *str*
-​ The name of pre-trained movinet model.
+​ The name of pre-trained MoViNet model.
 ​ Supported model names:
 - movineta0
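For readers who want to try the updated example end to end, here is a minimal runnable sketch of the explicit (tag-register) pipeline after this change. It reuses only the calls visible in the diff plus Towhee's DataCollection chain; the trailing `select`/`show` calls and the closing parenthesis are assumptions added to make the snippet self-contained, since the hunk cuts off after `model_name='movineta0')`.

```python
# Minimal sketch of the updated explicit-style pipeline; `select` and `show`
# are assumed completions of the truncated hunk above, not shown in this diff.
import towhee

(
    towhee.glob['path']('./jumpingjack.gif')            # sample clip from the new README
          .video_decode.ffmpeg['path', 'frames']()      # decode the GIF into frames
          .action_classification.movinet['frames', ('labels', 'scores', 'features')](
              model_name='movineta0')                   # pretrained MoViNet-A0 weights
          .select['path', 'labels', 'scores']()         # assumed: keep readable columns
          .show()                                       # assumed: display results in a notebook
)
```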

movinet.py (3 changed lines)

@@ -71,7 +71,7 @@ class Movinet(NNOperator):
         self.transform_cfgs = get_configs(
             side_size=172,
             crop_size=172,
-            num_frames=30,
+            num_frames=13,
             mean=self.input_mean,
             std=self.input_std,
         )
@@ -104,6 +104,7 @@ class Movinet(NNOperator):
         )
         inputs = data.to(self.device)[None, ...]
+        self.model.clean_activation_buffers()
         feats = self.model.forward_features(inputs)
         features = feats.to('cpu').squeeze(0).detach().numpy()
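The functional addition here is the `clean_activation_buffers()` call: stream-capable MoViNet implementations keep causal activation buffers between forward passes, so resetting them before embedding a new clip keeps state from a previous video out of the next feature vector. Below is a minimal sketch of that calling pattern, reusing only the method names visible in the diff (`clean_activation_buffers`, `forward_features`); the `extract_features` helper and the preprocessed clip tensors are assumptions for illustration.

```python
import torch

def extract_features(model, clips, device='cpu'):
    """Hypothetical helper: embed a list of preprocessed clips with a MoViNet
    model, resetting its causal activation buffers before each clip so no
    state leaks between videos (mirrors the movinet.py change above)."""
    features = []
    model.eval()
    with torch.no_grad():
        for clip in clips:                        # clip: (C, T, H, W) float tensor
            inputs = clip.to(device)[None, ...]   # add a batch dimension, as in the diff
            model.clean_activation_buffers()      # reset stream-buffer state first
            feats = model.forward_features(inputs)
            features.append(feats.squeeze(0).cpu().numpy())
    return features
```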

result1.png (BIN, 62 KiB after this commit; binary file not shown)

result2.png (BIN, 72 KiB after this commit; binary file not shown)
