# Audio Embedding with CLMR

*Author: Jael Gu*

## Description

The audio embedding operator converts an input audio clip into a dense vector which can be used to represent the audio clip's semantics.
Each vector represents an audio clip with a fixed length of around 2s.
This operator is built on top of the original implementation of [CLMR](https://github.com/Spijkervet/CLMR).
The [default model weight](clmr_checkpoint_10000.pt) provided is pretrained on the [Magnatagatune Dataset](https://paperswithcode.com/dataset/magnatagatune) with [SampleCNN](sample_cnn.py).

## Code Example

Generate embeddings for the audio "test.wav".

*Write the pipeline in simplified style*:

```python
from towhee import dc

dc.glob('test.wav') \
  .audio_decode() \
  .time_window(range=10) \
  .audio_embedding.clmr() \
  .show()
```

| [-2.1045141, 0.55381, 0.4537212, ...] shape=(6, 512) |

*Write the same pipeline with explicit input/output name specifications:*

```python
from towhee import dc

dc.glob['path']('test.wav') \
  .audio_decode['path', 'audio']() \
  .time_window['audio', 'frames'](range=10) \
  .audio_embedding.clmr['frames', 'vecs']() \
  .select('vecs') \
  .to_vec()
```

```
[array([[-2.1045141 , 0.55381   , 0.4537212 , ..., 0.18805158, 0.3079657 , -1.216063 ],
        [-2.1045141 , 0.55381036, 0.45372102, ..., 0.18805173, 0.3079657 , -1.216063 ],
        [-2.0874703 , 0.5511826 , 0.46051833, ..., 0.18650496, 0.33218473, -1.2182183],
        [-2.0874703 , 0.55118287, 0.4605182 , ..., 0.18650502, 0.3321851 , -1.2182183],
        [-2.0771544 , 0.5641223 , 0.43814823, ..., 0.18220925, 0.33022994, -1.2070589],
        [-2.0771549 , 0.5641221 , 0.43814805, ..., 0.1822092 , 0.33022994, -1.2070588]],
       dtype=float32)]
```

## Factory Constructor

Create the operator via the following factory method:

***audio_embedding.clmr(framework="pytorch")***

**Parameters:**

*framework: str*

The framework of the model implementation. The default value is "pytorch" since the model is implemented in PyTorch.

## Interface

An audio embedding operator generates vectors in numpy.ndarray given an audio file path or [towhee audio frames](link/to/AudioFrame/api/doc).

**Parameters:**

*Union[str, towhee.types.Audio]*

The audio path or link as a string, or audio input data as towhee audio frames. The input data should represent an audio clip longer than 2s.

**Returns**:

*numpy.ndarray*

Audio embeddings in shape (num_clips, 512). Each embedding represents the features of a 2s audio clip.
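
As a rough illustration of the interface above, the sketch below shows how the operator might be instantiated and called on its own through towhee's `ops` entry point. This is a minimal example under the assumption that the operator accepts a plain file path as documented; the exact instantiation call can differ between towhee versions, and `test.wav` is only a placeholder file.

```python
from towhee import ops

# Instantiate the operator via the factory method described above.
# The "framework" argument defaults to "pytorch".
op = ops.audio_embedding.clmr(framework='pytorch')

# Feed an audio file path (or towhee audio frames). The audio should be
# longer than 2s so that at least one clip can be embedded.
vecs = op('test.wav')

# Expected result: a numpy.ndarray of shape (num_clips, 512),
# one 512-d embedding per ~2s clip.
print(vecs.shape)
```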