Update evaluation method
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
main
3 changed files with 2 additions and 2 deletions
- benchmark/README.md
- benchmark/run.py
- benchmark/run.sh
benchmark/README.md

```diff
@@ -3,7 +3,7 @@
 ## Introduction

 Build a classification system based on similarity search across embeddings.
-The core ideas in `evaluate.py`:
+The core ideas in `run.py`:
 1. create a new Milvus collection each time
 2. extract embeddings using a pretrained model with model name specified by `--model`
 3. specify inference method with `--format` in value of `pytorch` or `onnx`
```
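The three numbered steps translate roughly into the flow sketched below. This is a minimal illustration, not the repository's `run.py`: it assumes the `pymilvus` 2.x client with a Milvus server on localhost, and the collection name, embedding dimension, index parameters, and the `extract_embedding` placeholder (standing in for whatever model `--model` selects, in PyTorch or ONNX form) are all made up for the example.

```python
# Minimal sketch of the benchmark flow (illustrative only, not the repo's run.py).
# Assumes pymilvus 2.x and a Milvus server on localhost; `extract_embedding`
# stands in for whichever pretrained model `--model` selects.
import numpy as np
from pymilvus import (
    connections, utility, Collection, CollectionSchema, FieldSchema, DataType,
)

DIM = 512  # embedding dimension; depends on the chosen model

def extract_embedding(sample):
    """Placeholder for running the model (PyTorch or ONNX) on one sample."""
    return np.random.rand(DIM).astype(np.float32).tolist()

connections.connect(host='127.0.0.1', port='19530')

# 1. create a new Milvus collection each time
name = 'benchmark_demo'
if utility.has_collection(name):
    utility.drop_collection(name)
schema = CollectionSchema([
    FieldSchema('id', DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema('label', DataType.INT64),
    FieldSchema('embedding', DataType.FLOAT_VECTOR, dim=DIM),
])
collection = Collection(name, schema)

# 2. extract embeddings for the insert set and store them with their labels
ids = list(range(100))
labels = [i % 10 for i in ids]            # stand-in class labels
embeddings = [extract_embedding(i) for i in ids]
collection.insert([ids, labels, embeddings])
collection.create_index('embedding', {
    'index_type': 'IVF_FLAT', 'metric_type': 'L2', 'params': {'nlist': 128}})
collection.load()

# 3. classify a query sample by the label of its nearest neighbour
query = extract_embedding('query')
res = collection.search([query], 'embedding',
                        {'metric_type': 'L2', 'params': {'nprobe': 10}},
                        limit=1, output_fields=['label'])
print('predicted label:', res[0][0].entity.get('label'))
```

In `run.py` the same flow is driven by the parsed arguments visible in the next hunk (`model_name`, the ONNX path, `insert_size`, and `query_size`).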
benchmark/run.py

```diff
@@ -25,7 +25,7 @@ parser.add_argument('--onnx_dir', type=str, default='../saved/onnx')
 args = parser.parse_args()

 model_name = args.model
-onnx_path = os.path.join(args.onnx_dir, model_name.replace('/', '-'), '.onnx')
+onnx_path = os.path.join(args.onnx_dir, model_name.replace('/', '-') + '.onnx')
 dataset_name = args.dataset
 insert_size = args.insert_size
 query_size = args.query_size
```
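The one-line change to `onnx_path` is the actual fix: passing `'.onnx'` to `os.path.join` as its own argument turns the extension into a separate path component, so the old code pointed at a hidden file inside a subdirectory instead of `<model>.onnx`. A small sketch of the difference, with a hypothetical model name:

```python
import os

onnx_dir = '../saved/onnx'
model_name = 'org/model'      # hypothetical value passed via --model

# Old: '.onnx' becomes a separate path component -> a hidden file in a subdirectory
old = os.path.join(onnx_dir, model_name.replace('/', '-'), '.onnx')
print(old)   # ../saved/onnx/org-model/.onnx

# New: the extension is appended to the file name, as intended
new = os.path.join(onnx_dir, model_name.replace('/', '-') + '.onnx')
print(new)   # ../saved/onnx/org-model.onnx
```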