
Audio Embedding with CLMR

Author: Jael Gu


Description

The audio embedding operator converts input audio into dense vectors that represent the audio's semantics. Each vector represents an audio clip of fixed length (around 2.7 seconds). This operator is built on top of the original implementation of CLMR. The default model weights are pretrained on the MagnaTagATune dataset with SampleCNN.


Code Example

Generate embeddings for the audio "test.wav".

Write a pipeline with explicit inputs/outputs name specifications:

from towhee import pipe, ops, DataCollection

p = (
    pipe.input('path')
        .map('path', 'frame', ops.audio_decode.ffmpeg())
        .map('frame', 'vecs', ops.audio_embedding.clmr())
        .output('path', 'vecs')
)

DataCollection(p('./test.wav')).show()


Factory Constructor

Create the operator via the following factory method:

audio_embedding.clmr(framework="pytorch")

Parameters:

framework: str

The framework of the model implementation. The default value is "pytorch", since the model is implemented in PyTorch.


Interface

The audio embedding operator generates vectors as a numpy.ndarray given Towhee audio frames.

Parameters:

data: List[towhee.types.audio_frame.AudioFrame]

The input audio data is a list of Towhee audio frames. The input should represent audio longer than 3 seconds.

Returns:

numpy.ndarray

Audio embeddings in shape (num_clips, 512). Each embedding represents the features of an audio clip 2.7 seconds long.
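Since each run yields a (num_clips, 512) matrix of per-clip embeddings, a common downstream step (not part of the operator itself) is to pool the clip vectors into a single track-level embedding and compare tracks by cosine similarity. A minimal sketch using synthetic arrays in place of real operator outputs, assuming L2-normalized mean pooling:

```python
import numpy as np

def track_embedding(clip_vecs: np.ndarray) -> np.ndarray:
    """Mean-pool per-clip embeddings of shape (num_clips, 512) into one
    L2-normalized 512-d track-level vector."""
    v = clip_vecs.mean(axis=0)
    return v / np.linalg.norm(v)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two L2-normalized vectors."""
    return float(np.dot(a, b))

# Synthetic stand-ins for two operator outputs of shape (num_clips, 512).
rng = np.random.default_rng(0)
vecs_a = rng.normal(size=(10, 512))
vecs_b = rng.normal(size=(12, 512))

sim = cosine_similarity(track_embedding(vecs_a), track_embedding(vecs_b))
print(sim)  # a value in [-1.0, 1.0]
```

In practice `vecs_a` and `vecs_b` would be the `vecs` outputs of the pipeline above for two different audio files.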