# Audio Embedding with VGGish

*Author: [Jael Gu](https://github.com/jaelgu)*
## Description

The audio embedding operator converts an input audio clip into dense vectors that represent the clip's semantics. Each vector corresponds to an audio clip with a fixed length of around 0.9 s. This operator is built on top of [VGGish](https://github.com/tensorflow/models/tree/master/research/audioset/vggish) with PyTorch. The model is a [VGG](https://arxiv.org/abs/1409.1556) variant pre-trained on the large-scale audio dataset [AudioSet](https://research.google.com/audioset). As the authors suggest, it is suitable for extracting high-level features or for warming up a larger model.
## Code Example

Generate embeddings for the audio "test.wav", writing a pipeline with explicit input/output name specifications:

```python
from towhee import pipe, ops

p = (
    pipe.input('path')
        .map('path', 'frame', ops.audio_decode.ffmpeg())
        .map('frame', 'vecs', ops.audio_embedding.vggish())
        .output('vecs')
)

p('test.wav').get()[0]
```

Example output:

`[-0.4931737, -0.40068552, -0.032327592, ...] shape=(10, 128)`
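Downstream, such clip embeddings are often L2-normalized so that cosine similarity between clips reduces to a dot product. A minimal NumPy sketch of this post-processing step, using random values in place of real operator output (the array shape matches the example above, but the data is synthetic):

```python
import numpy as np

# Synthetic stand-in for the operator's output: 10 clips x 128 dims.
rng = np.random.default_rng(0)
vecs = rng.random((10, 128)).astype(np.float32)

# L2-normalize each clip embedding along the feature axis.
norms = np.linalg.norm(vecs, axis=1, keepdims=True)
unit = vecs / norms

# Pairwise cosine similarity between clips is now a plain matrix product.
sim = unit @ unit.T
print(sim.shape)  # (10, 10)
```

Each diagonal entry of `sim` is 1 (a clip is maximally similar to itself), and off-diagonal entries compare different clips of the same audio.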
## Factory Constructor

Create the operator via the following factory method:

***audio_embedding.vggish(weights_path=None, framework="pytorch")***

**Parameters:**

*weights_path: str*

The path to model weights. If None, it loads the default pre-trained weights.

*framework: str*

The framework of the model implementation. The default value is "pytorch" since the model is implemented in PyTorch.
## Interface

An audio embedding operator generates vectors as numpy.ndarray given towhee audio frames.

**Parameters:**

*data: List[towhee.types.audio_frame.AudioFrame]*

Input audio data as a list of towhee audio frames. The input should represent an audio clip longer than 0.9 s.

**Returns:**

*numpy.ndarray*

Audio embeddings of shape (num_clips, 128). Each embedding represents the features of an audio clip about 0.9 s long.
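Since VGGish slices audio into non-overlapping examples of roughly 0.96 s (rounded to ~0.9 s above), the number of rows in the returned array can be estimated from the audio duration. A rough sketch, where `expected_num_clips` is a hypothetical helper for illustration, not part of the operator:

```python
import math

def expected_num_clips(duration_s: float, clip_s: float = 0.96) -> int:
    # VGGish produces one 128-d embedding per non-overlapping ~0.96 s example;
    # audio shorter than one clip yields no embedding.
    return math.floor(duration_s / clip_s)

print(expected_num_clips(10.0))  # 10 -> embeddings of shape (10, 128)
print(expected_num_clips(0.5))   # 0  -> input too short, no embedding
```

This also explains the `shape=(10, 128)` in the code example: a ~10 s input yields 10 clip embeddings.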