Audio Embedding with Vggish

Author: Jael Gu

Description

The audio embedding operator converts an input audio clip into a dense vector that represents the clip's semantics. This operator is built on top of VGGish with PyTorch. The model is a VGG variant pre-trained on AudioSet, a large-scale audio dataset. As the model authors suggest, it is suitable for extracting high-level features or warming up a larger model.

Code Example

Generate embeddings for the audio "test.wav".

Write the pipeline in simplified style:

from towhee import dc

dc.glob('test.wav')
  .audio_decode()
  .time_window(range=30)
  .audio_embedding.vggish()
  .show()

Write the same pipeline with explicit input/output name specifications:

from towhee import dc

dc.glob['path']('test.wav')
  .audio_decode['path', 'audio']()
  .time_window['audio', 'frames'](range=30)
  .audio_embedding.vggish['frames', 'vecs']()
  .select('vecs')
  .show()
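The published VGGish model emits one 128-dimensional embedding per roughly 0.96-second log-mel patch, so each decoded time window maps to a small batch of vectors. The sketch below estimates the output shape for one window; the exact clip count the operator returns may differ slightly depending on framing and padding, so treat this as an approximation rather than the operator's contract.

```python
import numpy as np

# VGGish emits one 128-d embedding per ~0.96 s log-mel patch
# (a property of the published model; the operator's exact clip
# count may differ slightly depending on framing/padding).
PATCH_SECONDS = 0.96
EMBEDDING_DIM = 128

def expected_embedding_shape(duration_seconds: float) -> tuple:
    """Rough shape of the embeddings for one decoded window."""
    num_clips = int(duration_seconds // PATCH_SECONDS)
    return (num_clips, EMBEDDING_DIM)

# A 30-second window (as in the pipelines above) yields about
# 31 patches, i.e. an embedding array of shape (31, 128).
print(expected_embedding_shape(30.0))
```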

Factory Constructor

Create the operator via the following factory method:

audio_embedding.vggish(weights_path=None, framework="pytorch")

Parameters:

weights_path: str

​ The path to model weights. If None, it will load default model weights.

framework: str

​ The framework of the model implementation. The default value is "pytorch" since the model is implemented in PyTorch.

Interface

An audio embedding operator generates vectors in numpy.ndarray given an audio file path or towhee audio frames.

Parameters:

Union[str, towhee.types.Audio]

​ The audio path or URL as a string, or audio input data as towhee audio frames.

Returns:

numpy.ndarray

​ Audio embeddings in shape (num_clips, 128).
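The (num_clips, 128) output is often collapsed into a single fixed-size vector before downstream use such as similarity search. A common approach, sketched below, is mean pooling over clips followed by L2 normalization; this post-processing step is not part of the operator itself, and the random array only stands in for real operator output.

```python
import numpy as np

def pool_embeddings(vecs: np.ndarray) -> np.ndarray:
    """Collapse per-clip VGGish embeddings (num_clips, 128) into a
    single L2-normalized 128-d vector (a common, but optional,
    post-processing step; not part of the operator)."""
    assert vecs.ndim == 2 and vecs.shape[1] == 128
    pooled = vecs.mean(axis=0)              # average over clips -> (128,)
    norm = np.linalg.norm(pooled)
    return pooled / norm if norm > 0 else pooled

# Stand-in for the operator's output for a ~30 s audio clip.
vecs = np.random.rand(31, 128).astype(np.float32)
audio_vec = pool_embeddings(vecs)
print(audio_vec.shape)  # (128,)
```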
