
Audio Classification with PANNS

Author: Jael Gu


Description

The audio classification operator classifies the given audio data with 527 labels from the large-scale AudioSet dataset. The pre-trained model used here is from the paper PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition (paper link).


Code Example

Predict labels and generate embeddings given the audio path "test.wav".

Write the pipeline in simplified style:

import towhee

(
    towhee.glob('test.wav')
          .audio_decode.ffmpeg()
          .runas_op(func=lambda x:[y[0] for y in x])
          .audio_classification.panns()
          .show() 
)

Write the same pipeline with explicit input/output name specifications:

import towhee

(
    towhee.glob['path']('test.wav')
          .audio_decode.ffmpeg['path', 'frames']()
          .runas_op['frames', 'frames'](func=lambda x:[y[0] for y in x])
          .audio_classification.panns['frames', ('labels', 'scores', 'vec')]()
          .show() 
)
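
To consume the predictions in code rather than just display them, collect the pipeline output. This is a minimal sketch assuming the towhee DataCollection API (to_list() and attribute-style access to the named outputs); adapt it to your towhee version:

import towhee

results = (
    towhee.glob['path']('test.wav')
          .audio_decode.ffmpeg['path', 'frames']()
          .runas_op['frames', 'frames'](func=lambda x: [y[0] for y in x])
          .audio_classification.panns['frames', ('labels', 'scores', 'vec')]()
          .to_list()  # assumption: materialize the DataCollection into a Python list
)

for entity in results:
    print(entity.labels)     # top-k label names
    print(entity.scores)     # scores corresponding to the labels
    print(entity.vec.shape)  # audio embedding, expected shape (2048,)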


Factory Constructor

Create the operator via the following factory method; a pipeline sketch that overrides the defaults follows the parameter descriptions.

audio_classification.panns(weights_path=None, framework='pytorch', sample_rate=32000, topk=5)

Parameters:

weights_path: str

The path to model weights. If None, the default pretrained weights are loaded.

framework: str

The framework of the model implementation. Default value is "pytorch" since the model is implemented in PyTorch.

sample_rate: int

The target sample rate of the audio data after conversion, defaults to 32000.

topk: int

The number of labels and corresponding scores to be returned, sorted by probability from high to low. Default value is 5.
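
For example, the following sketch reuses the pipeline above and overrides some of the defaults; 'my_panns_weights.pth' is a hypothetical local weights file and can be omitted to load the default weights:

import towhee

(
    towhee.glob['path']('test.wav')
          .audio_decode.ffmpeg['path', 'frames']()
          .runas_op['frames', 'frames'](func=lambda x: [y[0] for y in x])
          .audio_classification.panns['frames', ('labels', 'scores', 'vec')](
              weights_path='my_panns_weights.pth',  # hypothetical path to custom weights
              topk=10,                              # return the 10 highest-scoring labels
          )
          .show()
)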


Interface

Given towhee audio frames, the operator predicts labels with scores and generates an audio embedding in numpy.ndarray.

Parameters:

data: List[towhee.types.audio_frame.AudioFrame]

Input audio data is a list of towhee audio frames. The input data should represent an audio clip longer than 2 seconds.

Returns:

labels, scores, vec: Tuple[List[str], List[float], numpy.ndarray]

  • labels: a list of the topk labels predicted by the model.
  • scores: a list of scores corresponding to the labels, representing prediction probabilities.
  • vec: an audio embedding generated by the model, with shape (2048,).
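
The operator can also be called outside a pipeline. The sketch below assumes operators can be instantiated directly via towhee.ops and mirrors the frame-collection step from the pipelines above; adapt it to your towhee version:

from towhee import ops

decoder = ops.audio_decode.ffmpeg()                  # assumption: direct instantiation via towhee.ops
classifier = ops.audio_classification.panns(topk=5)

frames = [y[0] for y in decoder('test.wav')]         # mirrors the runas_op step above

labels, scores, vec = classifier(frames)
print(labels)     # List[str], the topk predicted labels
print(scores)     # List[float], probabilities corresponding to the labels
print(vec.shape)  # numpy.ndarray of shape (2048,)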
