# Audio Embedding with Neural Network Fingerprint

*Author: [Jael Gu](https://github.com/jaelgu)*
## Description

The audio embedding operator converts an input audio into a dense vector that represents the audio clip's semantics. Each vector represents an audio clip with a fixed length of around 1s. This operator generates audio embeddings with the fingerprinting method introduced in [Neural Audio Fingerprint](https://arxiv.org/abs/2010.11910). The model is implemented in PyTorch. We have also trained the nnfp model on the [FMA dataset](https://github.com/mdeff/fma) (plus some noise audio) and share the weights in this operator. The nnfp operator is suitable for audio fingerprinting.
## Code Example

Generate embeddings for the audio "test.wav".

*Write a pipeline with explicit input/output name specifications:*

```python
from towhee import pipe, ops, DataCollection

p = (
    pipe.input('path')
        .map('path', 'frame', ops.audio_decode.ffmpeg())
        .map('frame', 'vecs', ops.audio_embedding.nnfp(device='cpu'))
        .output('path', 'vecs')
)

DataCollection(p('test.wav')).show()
```
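Since each row of the output is the fingerprint of one roughly 1s segment, two audios can be compared segment by segment. The snippet below is a minimal sketch, not part of the operator's API; `segment_similarity` is a hypothetical helper that computes pairwise cosine similarity between two fingerprint matrices as returned by the pipeline above.

```python
import numpy as np

# Hypothetical helper, not part of the operator: compare two
# fingerprint matrices of shape (num_clips, 128) by pairwise
# cosine similarity between their 1s segments.
def segment_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T  # entry [i, j] compares segment i of a with segment j of b
```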
## Factory Constructor

Create the operator via the following factory method:

***audio_embedding.nnfp(model_name='nnfp_default', model_path=None, framework='pytorch')***

**Parameters:**

*model_name: str*

Model name to create an nnfp model with different parameters.

*model_path: str*

The path to the model. If None, it loads the default model weights.

*framework: str*

The framework of the model implementation. The default value is "pytorch" since the model is implemented in PyTorch.
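For instance, the following sketch constructs the operator with the factory parameters spelled out explicitly; it assumes the default weights are fetched on first use, as with other Towhee operators.

```python
from towhee import ops

# Construct the operator with the factory defaults made explicit.
# model_path=None loads the default weights shipped with the operator.
op = ops.audio_embedding.nnfp(
    model_name='nnfp_default',
    model_path=None,
    framework='pytorch',
).get_op()
```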
## Interface

An audio embedding operator generates vectors in numpy.ndarray given towhee audio frames.

***\_\_call\_\_(data)***

**Parameters:**

*data: List[towhee.types.audio_frame.AudioFrame]*

Input audio data as a list of towhee audio frames. The audio input should be at least 1s long.

**Returns:**

*numpy.ndarray*

Audio embeddings in shape (num_clips, 128). Each embedding represents the features of an audio clip about 1s in length.
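As a rough sketch of calling the operator outside a pipeline, assuming the ffmpeg decode operator can be invoked directly and yields towhee AudioFrames:

```python
from towhee import ops

# Decode 'test.wav' into audio frames, then embed them.
# Assumes the decode operator is callable directly and yields AudioFrames.
decode = ops.audio_decode.ffmpeg().get_op()
nnfp = ops.audio_embedding.nnfp(device='cpu').get_op()

frames = list(decode('test.wav'))
vecs = nnfp(frames)
print(vecs.shape)  # (num_clips, 128)
```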
***save_model(format='pytorch', path='default')***

**Parameters:**

*format: str*

Format used to save the model, defaults to 'pytorch'. Accepted formats: 'pytorch', 'torchscript', 'onnx', 'tensorrt' (in progress).

*path: str*

Path to save the model, defaults to 'default'. The default path is under 'saved' in the same directory as the operator cache.

```python
from towhee import ops

op = ops.audio_embedding.nnfp(device='cpu').get_op()
op.save_model('onnx', 'test.onnx')
```
PosixPath('/Home/.towhee/operators/audio-embedding/nnfp/main/test.onnx')

## Fine-tune

To fine-tune this operator, please refer to [this example guide](https://github.com/towhee-io/examples/blob/main/fine_tune/7_fine_tune_audio_embedding_operator.ipynb).

# More Resources

- [Scalar Quantization and Product Quantization - Zilliz blog](https://zilliz.com/learn/scalar-quantization-and-product-quantization): A hands-on dive into scalar quantization (integer quantization) and product quantization with Python.
- [How to Get the Right Vector Embeddings - Zilliz blog](https://zilliz.com/blog/how-to-get-the-right-vector-embeddings): A comprehensive introduction to vector embeddings and how to generate them with popular open-source models.
- [Audio Retrieval Based on Milvus - Zilliz blog](https://zilliz.com/blog/audio-retrieval-based-on-milvus): Create an audio retrieval system using Milvus, an open-source vector database. Classify and analyze sound data in real time.
- [Vector Database Use Case: Audio Similarity Search - Zilliz](https://zilliz.com/vector-database-use-cases/audio-similarity-search): Building agile and reliable audio similarity search with Zilliz vector database (fully managed Milvus).
- [Neural Networks and Embeddings for Language Models - Zilliz blog](https://zilliz.com/learn/Neural-Networks-and-Embeddings-for-Language-Models): Exploring neural network language models, specifically recurrent neural networks, and taking a sneak peek at how embeddings are generated.
- [Understanding Neural Network Embeddings - Zilliz blog](https://zilliz.com/learn/understanding-neural-network-embeddings): This article goes more in-depth into embeddings/embedding vectors, along with how they are used in modern ML algorithms and pipelines.