
Update README

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Branch: main
Author: Jael Gu, 3 years ago
Commit: e07b20a25f
README.md (13 lines changed)
```diff
@@ -19,7 +19,9 @@ This pipeline extracts features of a given audio file using a VGGish model imple
 The Operator returns a list of named tuples `[NamedTuple('AudioOutput', [('vec', 'ndarray')])]` containing the following fields:
-- each item in the output list represents embedding(s) for an audio clip, which depends on `time-window` in [yaml](./audio_embedding_vggish.yaml).
+- each item in the output list represents embedding(s) for an audio clip,
+  the length & timestamps of which depend on `time-window` in [yaml](./audio_embedding_vggish.yaml)
+  (you can modify `time_range_sec` & `time_step_sec` to change how the audio is split)
 - vec:
   - embeddings of the input audio
```
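The output structure described above can be sketched as follows. This is a hypothetical illustration, not the pipeline's actual return value; the `(10, 128)` shape assumes ten windows embedded as 128-dimensional VGGish vectors.

```python
from collections import namedtuple

import numpy as np

# Hypothetical mock of the output described above: one AudioOutput
# per audio clip, each holding an ndarray of embeddings in `vec`.
AudioOutput = namedtuple('AudioOutput', ['vec'])

# e.g. one clip embedded as ten 128-d vectors (shape assumed for illustration)
outs = [AudioOutput(vec=np.zeros((10, 128), dtype=np.float32))]
vec = outs[0].vec
print(vec.shape)  # (10, 128)
```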
```diff
@@ -42,9 +44,14 @@ $ pip3 install towhee
 >>> from towhee import pipeline
 >>> embedding_pipeline = pipeline('towhee/audio-embedding-vggish')
->>> embedding = embedding_pipeline('/path/to/your/audio')
+>>> outs = embedding_pipeline('/path/to/your/audio')
+>>> embeds = outs[0][0]
 ```
 ## How it works
-This pipeline includes a main operator type [audio-embedding](https://towhee.io/operators?limit=30&page=1&filter=3%3Aaudio-embedding) (default: [towhee/torch-vggish](https://hub.towhee.io/towhee/torch-vggish)). The pipeline first decodes the input audio file into audio frames and then combine frames depending on time-window configs. The audio-embedding operator takes combined frames as input and generate corresponding audio embeddings.
+This pipeline includes two main operator types:
+[audio-decode](https://towhee.io/operators?limit=30&page=1&filter=1%3Aaudio-decode) & [audio-embedding](https://towhee.io/operators?limit=30&page=1&filter=3%3Aaudio-embedding).
+By default, the pipeline uses [towhee/audio-decoder](https://towhee.io/towhee/audio-decoder) to load the audio path as a list of audio frames in ndarray.
+Then `time-window` combines the audio frames into a list of ndarrays, each of which represents an audio clip of fixed length.
+Finally, the [towhee/torch-vggish](https://hub.towhee.io/towhee/torch-vggish) operator generates a list of audio embeddings, one per audio clip.
```
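The `time-window` stage described above can be sketched as follows. This is a minimal re-implementation for illustration only, not the operator's actual code; it assumes a 1-D signal at a fixed sample rate and borrows the `time_range_sec` / `time_step_sec` parameter names from the yaml config.

```python
import numpy as np

def split_into_clips(signal, sr=16000, time_range_sec=10.0, time_step_sec=1.0):
    """Split a 1-D audio signal into fixed-length, possibly overlapping clips.

    Hypothetical sketch of the `time-window` stage: `time_range_sec` is the
    clip length and `time_step_sec` is the hop between clip start points.
    """
    clip_len = int(time_range_sec * sr)
    hop = int(time_step_sec * sr)
    clips = []
    for start in range(0, max(len(signal) - clip_len, 0) + 1, hop):
        clips.append(signal[start:start + clip_len])
    return clips

# 30 s of silence at 16 kHz -> 21 overlapping 10 s clips with a 1 s hop
audio = np.zeros(30 * 16000, dtype=np.float32)
clips = split_into_clips(audio)
print(len(clips))  # 21
```

Each clip in the returned list would then be fed to the embedding operator independently, which is why the pipeline's output is a list with one entry per clip.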
