From fadc1cd66d67ba7f51cf151f6391a3fa0bade85f Mon Sep 17 00:00:00 2001
From: Jael Gu <mengjia.gu@zilliz.com>
Date: Thu, 21 Apr 2022 17:42:42 +0800
Subject: [PATCH] Update readme

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
---
 README.md | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index c84a342..07b1fbe 100644
--- a/README.md
+++ b/README.md
@@ -13,15 +13,18 @@ This pipeline extracts features of a given audio file using a VGGish model imple
 - audio_path:
   - the input audio in `.wav`
   - supported types: `str` (path to the audio)
+  - the audio should be at least 1 second long
 
 **Pipeline Output:**
 
-The Operator returns a tuple `Tuple[('embs', numpy.ndarray)]` containing following fields:
+The Operator returns a list of named tuples `[NamedTuple('AudioOutput', [('vec', 'ndarray')])]` containing the following fields:
 
-- embs:
+- each item in the output list represents the embedding(s) of one audio clip, which depends on the `time-window` config in [yaml](./audio_embedding_vggish.yaml) (see the example under "How to use" below).
+
+- vec:
   - embeddings of input audio
   - data type: numpy.ndarray
-  - shape: (num_clips,128)
+  - shape: (num_clips, 128)
 
 ## How to use
 
@@ -39,9 +42,9 @@ $ pip3 install towhee
 >>> from towhee import pipeline
 
 >>> embedding_pipeline = pipeline('towhee/audio-embedding-vggish')
->>> embedding = embedding_pipeline('path/to/your/audio')
+>>> embedding = embedding_pipeline('/path/to/your/audio')
 ```
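+
+Assuming the snippet above has been run, the result is the list of named tuples described under **Pipeline Output**, so the vectors can be read roughly like this (a minimal sketch; field access follows the `AudioOutput` structure documented above):
+
+```python
+>>> for item in embedding:
+...     print(type(item.vec), item.vec.shape)  # numpy.ndarray, (num_clips, 128)
+```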
 
 ## How it works
 
-This pipeline includes a main operator: [audio-embedding](https://towhee.io/operators?limit=30&page=1&filter=3%3Aaudio-embedding) (default: [towhee/torch-vggish](https://hub.towhee.io/towhee/torch-vggish)). The audio embedding operator encodes audio file and finally output a set of vectors of the given audio.
+This pipeline includes one main operator type, [audio-embedding](https://towhee.io/operators?limit=30&page=1&filter=3%3Aaudio-embedding) (default: [towhee/torch-vggish](https://hub.towhee.io/towhee/torch-vggish)). The pipeline first decodes the input audio file into audio frames and then combines the frames according to the time-window config. The audio-embedding operator takes the combined frames as input and generates the corresponding audio embeddings.
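+
+As a rough, self-contained sketch of the time-window idea only (not the pipeline's actual code; the window and hop sizes below are illustrative assumptions, not the yaml defaults), the decoded waveform could be cut into clips like this before each clip is embedded into a 128-d vector:
+
+```python
+import numpy as np
+
+def split_into_clips(samples: np.ndarray, sample_rate: int,
+                     window_sec: float = 1.0, hop_sec: float = 1.0):
+    """Cut a mono waveform into fixed-length clips (one embedding per clip)."""
+    window = int(window_sec * sample_rate)
+    hop = int(hop_sec * sample_rate)
+    return [samples[start:start + window]
+            for start in range(0, len(samples) - window + 1, hop)]
+
+# Example: ~2.5 s of silence at 16 kHz -> 2 one-second clips (illustrative only).
+clips = split_into_clips(np.zeros(40000, dtype=np.float32), 16000)
+print(len(clips))  # 2
+```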