# Pipeline: Audio Embedding using VGGish

Authors: Jael Gu

## Overview

This pipeline extracts features of a given audio file using a VGGish model implemented in PyTorch. This is a supervised model pre-trained with [AudioSet](https://research.google.com/audioset/), which contains over 2 million sound clips.

## Interface

**Input Arguments:**

- audio_path:
  - the input audio in `.wav` format
  - supported types: `str` (path to the audio)
  - the audio should be at least 1 second long

**Pipeline Output:**

The pipeline returns a list of named tuples `[NamedTuple('AudioOutput', [('vec', 'ndarray')])]` containing the following fields:

- each item in the output list represents the embedding(s) for an audio clip; the number of embeddings depends on the `time-window` setting in [yaml](./audio_embedding_vggish.yaml)
- vec:
  - embedding(s) of the input audio
  - data type: numpy.ndarray
  - shape: (num_clips, 128)

## How to use

1. Install [Towhee](https://github.com/towhee-io/towhee)

   ```bash
   $ pip3 install towhee
   ```

   > You can refer to [Getting Started with Towhee](https://towhee.io/) for more details. If you have any questions, you can [submit an issue to the towhee repository](https://github.com/towhee-io/towhee/issues).

2. Run it with Towhee

   ```python
   >>> from towhee import pipeline
   >>> embedding_pipeline = pipeline('towhee/audio-embedding-vggish')
   >>> embedding = embedding_pipeline('/path/to/your/audio')
   ```

## How it works

This pipeline includes one main operator type, [audio-embedding](https://towhee.io/operators?limit=30&page=1&filter=3%3Aaudio-embedding) (default: [towhee/torch-vggish](https://hub.towhee.io/towhee/torch-vggish)). The pipeline first decodes the input audio file into audio frames and then combines frames according to the time-window configuration. The audio-embedding operator takes the combined frames as input and generates the corresponding audio embeddings.
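For intuition, the time-window framing step can be sketched with NumPy. This is a simplified illustration, not the pipeline's actual implementation: the 0.96 s window is VGGish's conventional example length, and the sample rate and function name here are assumptions.

```python
import numpy as np

def frame_audio(waveform: np.ndarray, sample_rate: int = 16000,
                window_sec: float = 0.96) -> np.ndarray:
    """Split a mono waveform into fixed-length, non-overlapping clips.

    Trailing samples that do not fill a whole window are dropped,
    which is why the input audio needs to be at least one window long.
    """
    window = int(sample_rate * window_sec)  # samples per clip
    num_clips = len(waveform) // window
    return waveform[:num_clips * window].reshape(num_clips, window)

# Two seconds of audio at 16 kHz -> two full 0.96 s clips fit.
clips = frame_audio(np.zeros(2 * 16000))
print(clips.shape)  # (2, 15360)
```

Each row of the resulting array would then be fed to the embedding operator, producing one 128-dim vector per clip, hence the `(num_clips, 128)` output shape.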
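Because the pipeline can emit several 128-dimensional vectors per file (one per clip), downstream code often reduces them to a single vector. A common choice, shown here as a sketch rather than anything the pipeline mandates, is mean pooling followed by L2 normalization; the `vec` array below is a random stand-in for real pipeline output.

```python
import numpy as np

def pool_embeddings(vec: np.ndarray) -> np.ndarray:
    """Collapse a (num_clips, 128) embedding matrix into a single
    L2-normalized 128-dim vector by averaging over clips."""
    pooled = vec.mean(axis=0)
    norm = np.linalg.norm(pooled)
    return pooled / norm if norm > 0 else pooled

# Stand-in for the `vec` field of the pipeline output: 3 clips x 128 dims.
fake_vec = np.random.default_rng(0).normal(size=(3, 128))
pooled = pool_embeddings(fake_vec)
print(pooled.shape)  # (128,)
```

A single normalized vector like this is convenient for cosine-similarity search over a collection of audio files.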