diff --git a/README.md b/README.md
index 8f7ed78..2946c71 100644
--- a/README.md
+++ b/README.md
@@ -1,72 +1,68 @@
-# Mobilefacenet Face Landmark Detecter
+# MobileFaceNet Face Landmark Detector
 
 *authors: David Wang*
 
 ## Desription
 
-A class of extremely efficient CNN models to extract 68 landmarks from a facial image[MobileFaceNets](https://arxiv.org/pdf/1804.07573.pdf).
+[MobileFaceNets](https://arxiv.org/pdf/1804.07573) is a class of extremely efficient CNN models to extract 68 landmarks from a facial image, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices.
 
 This repo is an adaptation from [cuijian/pytorch_face_landmark](https://github.com/cunjian/pytorch_face_landmark).
 
 ## Code Example
 
-extracted facial landmark from './img1.jpg'.
+Extract facial landmarks from './img1.jpg'.
 
 *Write the pipeline in simplified style*:
 
 ```python
-from towhee import dc
+import towhee
 
-dc.glob('./img1.jpg') \
+towhee.glob('./img1.jpg') \
+      .image_decode.cv2() \
       .face_landmark_detection.mobilefacenet() \
+      .select('img','landmark') \
       .to_list()
 ```
 
 *Write a same pipeline with explicit inputs/outputs name specifications:*
 
 ```python
-from towhee import dc
+import towhee
 
-dc.glob['path']('./img1.jpg') \
+towhee.glob['path']('./img1.jpg') \
       .image_decode.cv2['path', 'img']() \
-      .face_landmark_detection.mobilefacenet() \
-      .to_list()
+      .face_landmark_detection.mobilefacenet['img', 'landmark']() \
+      .select('img','landmark') \
+      .show()
 ```
+![result1](./result.png)
+
 
 ## Factory Constructor
 
 Create the operator via the following factory method
 
-***ops.face_landmark_detection.mobilefacenet(pretrained = True)***
+***face_landmark_detection.mobilefacenet(pretrained = True)***
 
 **Parameters:**
 
 ​ ***pretrained***
 
-​ whether load the pretrained weights..
+​ whether to load the pretrained weights.
-​ supported types: `bool`, default is True, using pretrained weights
+​ supported types: `bool`, default is True, using pretrained weights.
 
 ## Interface
 
 An image embedding operator takes an image as input. it extracts the embedding back to ndarray.
 
-**Args:**
-
-​ ***pretrained***
-
-​ whether load the pretrained weights..
-
-​ supported types: `bool`, default is True, using pretrained weights
-
-
 **Parameters:**
 
-​ ***image***: *np.ndarray*
+​ ***img***: *towhee.types.Image (a sub-class of numpy.ndarray)*
 
 ​ The input image.
 
-**Returns:**: *numpy.ndarray*
+**Returns:** *numpy.ndarray*
 
-​ The extracted facial landmark.
+​ The extracted facial landmarks.
diff --git a/__init__.py b/__init__.py
index 6136253..ac8dd1a 100644
--- a/__init__.py
+++ b/__init__.py
@@ -14,6 +14,6 @@ from .mobilefacenet import Mobilefacenet
 
 
-def mobilefacenet():
-    return Mobilefacenet()
+def mobilefacenet(pretrained = True):
+    return Mobilefacenet(pretrained)
 
 
diff --git a/mobilefacenet.py b/mobilefacenet.py
index e8a472d..571795b 100644
--- a/mobilefacenet.py
+++ b/mobilefacenet.py
@@ -54,8 +54,8 @@ class Mobilefacenet(NNOperator):
                                      normalize])
 
     @arg(1, to_image_color('RGB') )
-    def __call__(self, image: Image):
-        image = to_pil(image)
+    def __call__(self, img: Image):
+        image = to_pil(img)
         h, w = image.size
         tensor = self._preprocess(image)
         if len(tensor.shape) == 3:
diff --git a/result.png b/result.png
new file mode 100644
index 0000000..065bece
Binary files /dev/null and b/result.png differ
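Reviewer note: since the operator hands back the 68 landmarks as a bare `numpy.ndarray`, downstream code typically has to reshape and rescale them into pixel coordinates before drawing. The sketch below shows one common way to do that with plain numpy. The flat 136-value `x1, y1, x2, y2, ...` layout and the `[0, 1]` normalization are assumptions for illustration; this patch does not specify the output convention.

```python
import numpy as np

def landmarks_to_pixels(landmarks: np.ndarray, w: int, h: int) -> np.ndarray:
    """Convert a flat array of 68 normalized (x, y) landmarks to pixel coords.

    Assumes the model emits 136 values in [0, 1], ordered x1, y1, x2, y2, ...
    (an illustrative convention, not taken from this diff).
    """
    pts = landmarks.reshape(-1, 2)                  # (68, 2) point pairs
    return pts * np.array([w, h], dtype=pts.dtype)  # scale to image size

# hypothetical model output: 68 points, all at the image centre
raw = np.full(136, 0.5, dtype=np.float32)
pts = landmarks_to_pixels(raw, w=640, h=480)
print(pts.shape)  # (68, 2)
print(pts[0])     # [320. 240.]
```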