Audio Embedding
Description
The audio embedding pipeline converts an input audio file into dense vectors that represent the semantics of the audio. Each vector corresponds to an audio clip with a fixed length of around 0.9 s. The pipeline is built on top of the VGGish model implemented in PyTorch.
Code Example
- Create an audio embedding pipeline with the default configuration.
from towhee import AutoPipes

# Build the audio-embedding pipeline with its default configuration.
p = AutoPipes.pipeline('audio-embedding')

# Run the pipeline on an audio file and fetch the result.
res = p('test.wav')
res.get()
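The sketch below shows one way to inspect the returned embedding. It assumes that res.get() returns the pipeline outputs as a list whose first element is the embedding array, and that the default VGGish backbone yields one 128-dimensional vector per ~0.9 s clip; verify both on your installed towhee version and your own data.

import numpy as np

from towhee import AutoPipes

p = AutoPipes.pipeline('audio-embedding')
res = p('test.wav')

# Assumption: the first output element is the embedding array, expected to
# hold one 128-d VGGish vector per ~0.9 s clip of the input audio.
vec = np.asarray(res.get()[0])
print(vec.shape)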
Interface
AudioEmbeddingConfig
Additional parameters can be found in the audio_decode.ffmpeg and audio_embedding.vggish operators used by the pipeline.
weights_path: str
The path to the model weights. If None, the default pretrained weights are loaded.
framework: str
The framework of the model implementation. The default value is "pytorch", since the model is implemented in PyTorch.
device: int
The GPU device ID to use. Defaults to -1, which means running on CPU.
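A minimal sketch of overriding these defaults through AutoConfig before building the pipeline; it assumes AutoConfig.load_config accepts the same 'audio-embedding' pipeline name and that the returned config exposes the fields listed above.

from towhee import AutoPipes, AutoConfig

# Load the default AudioEmbeddingConfig and override a few fields
# (field names follow the Interface section above; verify against your
# installed towhee version).
config = AutoConfig.load_config('audio-embedding')
config.device = 0                      # use GPU 0; -1 (default) means CPU
# config.weights_path = 'vggish.pth'   # optional path to custom weights

p = AutoPipes.pipeline('audio-embedding', config=config)
res = p('test.wav')
res.get()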