# Audio Embedding with CLMR

*Author: [Jael Gu](https://github.com/jaelgu)*
## Description

The audio embedding operator converts input audio into a dense vector that represents the semantics of the audio clip. Each vector represents an audio clip with a fixed length of about 2.7s. This operator is built on top of the original implementation of [CLMR](https://github.com/Spijkervet/CLMR). The [default model weight](clmr_checkpoint_10000.pt) provided is pretrained on the [Magnatagatune Dataset](https://paperswithcode.com/dataset/magnatagatune) with [SampleCNN](sample_cnn.py).
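To illustrate how a single waveform maps to multiple clip-level vectors, the sketch below splits a mono signal into non-overlapping fixed-length clips. The sample rate and the exact splitting strategy here are illustrative assumptions, not the operator's internals; only the 2.7s clip length comes from the interface description.

```python
import numpy as np

SAMPLE_RATE = 22050            # assumed decode sample rate (illustrative)
CLIP_SECONDS = 2.7             # clip length from the interface description
CLIP_SAMPLES = int(SAMPLE_RATE * CLIP_SECONDS)

def split_into_clips(waveform: np.ndarray) -> np.ndarray:
    """Split a 1-D waveform into non-overlapping fixed-length clips,
    dropping trailing samples shorter than one clip."""
    num_clips = len(waveform) // CLIP_SAMPLES
    return waveform[: num_clips * CLIP_SAMPLES].reshape(num_clips, CLIP_SAMPLES)

# A 10-second dummy waveform yields 3 full 2.7s clips,
# so the operator would return 3 embedding vectors for it.
audio = np.zeros(SAMPLE_RATE * 10, dtype=np.float32)
clips = split_into_clips(audio)
print(clips.shape)  # (3, 59535)
```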
## Code Example

Generate embeddings for the audio "test.wav".

*Write a pipeline with explicit input/output name specifications:*

```python
from towhee import pipe, ops, DataCollection

p = (
    pipe.input('path')
        .map('path', 'frame', ops.audio_decode.ffmpeg())
        .map('frame', 'vecs', ops.audio_embedding.clmr())
        .output('path', 'vecs')
)

DataCollection(p('./test.wav')).show()
```
## Factory Constructor

Create the operator via the following factory method:

***audio_embedding.clmr(framework="pytorch")***

**Parameters:**

*framework: str*

&emsp;&emsp;The framework of the model implementation. The default value is "pytorch" since the model is implemented in PyTorch.
## Interface

An audio embedding operator generates vectors in numpy.ndarray given towhee audio frames.

**Parameters:**

*data: List[towhee.types.audio_frame.AudioFrame]*

&emsp;&emsp;Input audio data as a list of towhee audio frames. The input should represent audio longer than 3s.

**Returns:**

*numpy.ndarray*

&emsp;&emsp;Audio embeddings in shape (num_clips, 512). Each embedding represents the features of a 2.7s audio clip.
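Since the operator returns one 512-dimensional vector per clip, a common follow-up step is to pool the clip embeddings into a single track-level vector before indexing or comparing tracks. The sketch below uses random arrays as stand-ins for two operator outputs (real CLMR embeddings would replace them); the mean-pooling and cosine-similarity choices are illustrative, not part of the operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two operator outputs of shape (num_clips, 512).
vecs_a = rng.standard_normal((4, 512)).astype(np.float32)
vecs_b = rng.standard_normal((6, 512)).astype(np.float32)

def track_vector(clip_vecs: np.ndarray) -> np.ndarray:
    """Mean-pool clip embeddings into one L2-normalized track vector."""
    v = clip_vecs.mean(axis=0)
    return v / np.linalg.norm(v)

# Cosine similarity between the two pooled track vectors.
sim = float(np.dot(track_vector(vecs_a), track_vector(vecs_b)))
print(round(sim, 4))
```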