@@ -18,36 +18,23 @@ and maps vectors with labels provided by datasets used for pre-training.
 Use the pretrained ActionClip model to classify and generate a vector for the given video path './archery.mp4'
 ([download](https://dl.fbaipublicfiles.com/pytorchvideo/projects/archery.mp4)).
 
-*Write the pipeline in simplified style*:
+*Write a pipeline with explicit inputs/outputs name specifications:*
 
 ```python
-import towhee
-
-(
-    towhee.glob('./archery.mp4')
-          .video_decode.ffmpeg()
-          .action_classification.actionclip(model_name='clip_vit_b16')
-          .show()
+from towhee.dc2 import pipe, ops, DataCollection
+
+p = (
+    pipe.input('path')
+        .map('path', 'frames', ops.video_decode.ffmpeg())
+        .map('frames', ('labels', 'scores', 'features'),
+             ops.action_classification.actionclip(model_name='clip_vit_b16'))
+        .output('path', 'labels', 'scores', 'features')
 )
-```
-
-<img src="./result1.png" width="800px"/>
 
-*Write a same pipeline with explicit inputs/outputs name specifications:*
-
-```python
-import towhee
-
-(
-    towhee.glob['path']('./archery.mp4')
-      .video_decode.ffmpeg['path', 'frames']()
-      .action_classification.actionclip['frames', ('labels', 'scores', 'features')](model_name='clip_vit_b16')
-      .select['path', 'labels', 'scores', 'features']()
-      .show(formatter={'path': 'video_path'})
-)
+DataCollection(p('./archery.mp4')).show()
 ```
 
-<img src="./result2.png" width="800px"/>
+<img src="./result.png" width="800px"/>
 
 <br />
 
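Beyond displaying the result table with `DataCollection(p('./archery.mp4')).show()`, the outputs can also be consumed programmatically. The snippet below is a minimal sketch, not part of this change: it assumes the `towhee.dc2` pipeline API added above, and that calling the pipeline returns a result whose `get()` method yields one row of the declared outputs; the variable names are ours.

```python
# A minimal sketch, assuming the towhee.dc2 pipe/ops API shown in the diff above
# and that p(path) returns a result whose get() yields one row of the outputs.
from towhee.dc2 import pipe, ops

p = (
    pipe.input('path')
        .map('path', 'frames', ops.video_decode.ffmpeg())
        .map('frames', ('labels', 'scores', 'features'),
             ops.action_classification.actionclip(model_name='clip_vit_b16'))
        .output('labels', 'scores', 'features')
)

# Run the pipeline on the sample clip and unpack the declared output columns.
labels, scores, features = p('./archery.mp4').get()
print(labels, scores)                     # predicted action labels and their scores
print(getattr(features, 'shape', None))   # video embedding, e.g. for vector search
```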