
Update evaluation method

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
branch: main
Jael Gu, 2 years ago
commit 2cf6fabd69
  1. benchmark/README.md (2 changes)
  2. benchmark/run.py (2 changes)
  3. benchmark/run.sh (0 changes)

2
benchmark/README.md

@@ -3,7 +3,7 @@
## Introduction
Build a classification system based on similarity search across embeddings.
- The core ideas in `evaluate.py`:
+ The core ideas in `run.py`:
1. create a new Milvus collection each time
2. extract embeddings using a pretrained model with model name specified by `--model`
3. specify inference method with `--format` in value of `pytorch` or `onnx`
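The flags described above can be parsed with a minimal `argparse` sketch. This is an illustrative assumption based on the README, not the actual `run.py` source; the default values and help strings are guesses:

```python
import argparse

# Sketch of the CLI described in the README (flag names from the docs;
# defaults and help text are assumptions).
parser = argparse.ArgumentParser(
    description='Classification benchmark via similarity search over embeddings')
parser.add_argument('--model', type=str, required=True,
                    help='name of the pretrained model used to extract embeddings')
parser.add_argument('--format', type=str, default='pytorch',
                    choices=['pytorch', 'onnx'],
                    help='inference method')

# Parse an example command line instead of sys.argv for demonstration.
args = parser.parse_args(['--model', 'resnet50', '--format', 'onnx'])
print(args.model, args.format)
```

Passing an unlisted `--format` value would make `argparse` exit with an error, since `choices` restricts it to the two documented methods.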

2
benchmark/evaluate.py → benchmark/run.py

@@ -25,7 +25,7 @@ parser.add_argument('--onnx_dir', type=str, default='../saved/onnx')
args = parser.parse_args()
model_name = args.model
- onnx_path = os.path.join(args.onnx_dir, model_name.replace('/', '-'), '.onnx')
+ onnx_path = os.path.join(args.onnx_dir, model_name.replace('/', '-') + '.onnx')
dataset_name = args.dataset
insert_size = args.insert_size
query_size = args.query_size
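The hunk above fixes a path bug: passing `'.onnx'` as a separate argument to `os.path.join` makes it a path component (a hidden entry under a directory) rather than a file extension. A minimal sketch of the difference (the `bert/base` model name is an illustrative assumption):

```python
import os

model_name = 'bert/base'
onnx_dir = '../saved/onnx'

# Buggy: the comma makes '.onnx' its own path component,
# yielding a directory named 'bert-base' containing '.onnx' (on POSIX:
# '../saved/onnx/bert-base/.onnx').
buggy = os.path.join(onnx_dir, model_name.replace('/', '-'), '.onnx')

# Fixed: concatenate the extension onto the file name before joining
# (on POSIX: '../saved/onnx/bert-base.onnx').
fixed = os.path.join(onnx_dir, model_name.replace('/', '-') + '.onnx')

print(buggy)
print(fixed)
```

With the buggy form, the script would look for a file literally named `.onnx`, which is why the exported model was never found at the expected path.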

0
benchmark/evaluate.sh → benchmark/run.sh
