# Audio Embedding with VGGish
*Author: Jael Gu*
## Description
The audio embedding operator converts input audio into a dense vector that represents the audio clip's semantics.
This operator is built on top of [VGGish](https://github.com/tensorflow/models/tree/master/research/audioset/vggish) with PyTorch.
The model is a [VGG](https://arxiv.org/abs/1409.1556) variant pre-trained on the large-scale audio dataset [AudioSet](https://research.google.com/audioset).
As the authors suggest, it is suitable for extracting high-level features or warming up a larger model.
## Code Example
Generate embeddings for the audio "test.wav".
*Write the pipeline in simplified style:*
```python
from towhee import dc

(
    dc.glob('test.wav')
      .audio_decode()
      .time_window(range=30)
      .audio_embedding.vggish()
      .show()
)
```
*Write the same pipeline with explicit input/output name specifications:*
```python
from towhee import dc

(
    dc.glob['path']('test.wav')
      .audio_decode['path', 'audio']()
      .time_window['audio', 'frames'](range=30)
      .audio_embedding.vggish['frames', 'vecs']()
      .select('vecs')
      .show()
)
```
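Conceptually, the `time_window(range=30)` step above splits the decoded audio into fixed-length clips before embedding. The following is a minimal NumPy sketch of that idea on a raw waveform; the sample rate and helper function are illustrative assumptions, not Towhee's actual `time_window` implementation, which operates on decoded audio frames:

```python
import numpy as np

def time_window(signal: np.ndarray, sample_rate: int, window_sec: int) -> list:
    """Split a 1-D waveform into consecutive fixed-length windows.

    Illustrative only: Towhee's time_window operator works on audio
    frames produced by audio_decode, not on raw arrays.
    """
    window_len = sample_rate * window_sec
    # Drop the trailing partial window for simplicity.
    n_windows = len(signal) // window_len
    return [signal[i * window_len:(i + 1) * window_len] for i in range(n_windows)]

# A 95-second synthetic mono signal at 16 kHz yields three 30-second windows.
sr = 16000
signal = np.zeros(95 * sr, dtype=np.float32)
windows = time_window(signal, sr, 30)
```

Each window would then be embedded independently, which is why the operator returns one vector per clip.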
## Factory Constructor
Create the operator via the following factory method:
***audio_embedding.vggish(weights_path=None, framework="pytorch")***
**Parameters:**
*weights_path: str*
The path to model weights. If None, the operator loads the default pretrained weights.
*framework: str*
The framework of the model implementation.
The default value is "pytorch", as the model is implemented in PyTorch.
## Interface
An audio embedding operator generates vectors in numpy.ndarray given an audio file path or [towhee audio](link/to/AudioFrame/api/doc).
**Parameters:**
*Union[str, towhee.types.Audio]*
The audio path or link as a string, or audio input data as towhee audio frames.
**Returns**:
*numpy.ndarray*
Audio embeddings in shape (num_clips, 128).
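Since the operator returns embeddings of shape (num_clips, 128), a common downstream step is to pool the per-clip vectors into a single audio-level vector and compare audios by cosine similarity. A hedged sketch, with synthetic arrays standing in for the operator's output:

```python
import numpy as np

def clip_level_embedding(vecs: np.ndarray) -> np.ndarray:
    """Mean-pool per-clip embeddings of shape (num_clips, 128) into one vector."""
    return vecs.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for audio_embedding.vggish output:
# two audios embedded as 4 and 6 clips of 128-dim vectors.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal((4, 128)).astype(np.float32)
emb_b = rng.standard_normal((6, 128)).astype(np.float32)

vec_a = clip_level_embedding(emb_a)
vec_b = clip_level_embedding(emb_b)
sim = cosine_similarity(vec_a, vec_b)
```

Mean pooling is only one choice; max pooling or keeping per-clip vectors for nearest-neighbor search are equally valid depending on the application.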