
update the readme.

Signed-off-by: wxywb <xy.wang@zilliz.com>
Branch: main
wxywb committed 3 years ago · commit 597e959f1c
Changed files:

1. README.md (44)
2. __init__.py (4)
3. mobilefacenet.py (4)
4. result.png (BIN)

README.md (44)

@@ -1,72 +1,68 @@
-# Mobilefacenet Face Landmark Detecter
+# MobileFaceNet Face Landmark Detecter
 
 *authors: David Wang*
 
 ## Description
 
-A class of extremely efficient CNN models to extract 68 landmarks from a facial image[MobileFaceNets](https://arxiv.org/pdf/1804.07573.pdf).
+[MobileFaceNets](https://arxiv.org/pdf/1804.07573) is a class of extremely efficient CNN models to extract 68 landmarks from a facial image, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices. This repo is an adaptation from [cuijian/pytorch_face_landmark](https://github.com/cunjian/pytorch_face_landmark).
 
 ## Code Example
 
-extracted facial landmark from './img1.jpg'.
+Extract facial landmarks from './img1.jpg'.
 
 *Write the pipeline in simplified style*:
 
 ```python
-from towhee import dc
+import towhee
 
-dc.glob('./img1.jpg') \
+towhee.glob('./img1.jpg') \
+    .image_decode.cv2() \
     .face_landmark_detection.mobilefacenet() \
+    .select('img','landmark') \
     .to_list()
 ```
 
 *Write the same pipeline with explicit input/output name specifications:*
 
 ```python
-from towhee import dc
+import towhee
 
-dc.glob['path']('./img1.jpg') \
+towhee.glob['path']('./img1.jpg') \
     .image_decode.cv2['path', 'img']() \
-    .face_landmark_detection.mobilefacenet() \
-    .to_list()
+    .face_landmark_detection.mobilefacenet['img', 'landmark']() \
+    .select('img','landmark') \
+    .show()
 ```
 
+<img src="https://towhee.io/face-landmark-detection/mobilefacenet/raw/branch/main/result.png" alt="result1" style="height:20px;"/>
 
 ## Factory Constructor
 
 Create the operator via the following factory method
 
-***ops.face_landmark_detection.mobilefacenet(pretrained = True)***
+***face_landmark_detection.mobilefacenet(pretrained = True)***
 
 **Parameters:**
 
 ***pretrained***
 
- whether load the pretrained weights..
+ whether to load the pretrained weights.
 
- supported types: `bool`, default is True, using pretrained weights
+ supported types: `bool`, default is True, using pretrained weights.
 
 ## Interface
 
 A face landmark detection operator takes an image as input and extracts the 68 facial landmarks as a numpy.ndarray.
 
-**Args:**
-***pretrained***
- whether load the pretrained weights..
- supported types: `bool`, default is True, using pretrained weights
 
 **Parameters:**
 
-***image***: *np.ndarray*
+***img***: *towhee.types.Image (a sub-class of numpy.ndarray)*
 
  The input image.
 
-**Returns:**: *numpy.ndarray*
+**Returns:** *numpy.ndarray*
 
- The extracted facial landmark.
+ The extracted facial landmarks.

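Beyond the pipeline examples in the README diff above, the Factory Constructor and Interface sections only state the call signature. Below is a minimal, hedged sketch of invoking the operator directly; it assumes the `towhee.ops` entry point named in the old constructor line and uses OpenCV for decoding, neither of which is documented by this commit, so treat it as illustrative rather than the operator's official usage.

```python
import cv2
from towhee import ops

# Construct the operator; pretrained=True (the default) loads the released weights.
op = ops.face_landmark_detection.mobilefacenet(pretrained=True)

# Decode an image into an RGB ndarray. The README states the operator expects a
# towhee.types.Image (a numpy.ndarray subclass); depending on the towhee version,
# the raw array may need to be wrapped accordingly first.
img = cv2.cvtColor(cv2.imread('./img1.jpg'), cv2.COLOR_BGR2RGB)

# __call__ returns a numpy.ndarray holding the extracted 68 facial landmarks.
landmarks = op(img)
print(landmarks.shape)
```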
__init__.py (4)

@@ -14,6 +14,6 @@
 from .mobilefacenet import Mobilefacenet
 
-def mobilefacenet():
-    return Mobilefacenet()
+def mobilefacenet(pretrained = True):
+    return Mobilefacenet(pretrained)
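The updated factory simply forwards `pretrained` to the `Mobilefacenet` class. As a hedged sketch (assuming, as is usual for the towhee DataCollection API shown in the README, that keyword arguments in the operator call reach this factory), the flag could be set from a pipeline like so:

```python
import towhee

# pretrained=False would construct the operator without loading the released
# weights (e.g. when restoring a custom checkpoint separately); the default
# True uses the pretrained MobileFaceNet model.
towhee.glob['path']('./img1.jpg') \
      .image_decode.cv2['path', 'img']() \
      .face_landmark_detection.mobilefacenet['img', 'landmark'](pretrained=True) \
      .select('img','landmark') \
      .show()
```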

mobilefacenet.py (4)

@@ -54,8 +54,8 @@ class Mobilefacenet(NNOperator):
             normalize])
 
     @arg(1, to_image_color('RGB') )
-    def __call__(self, image: Image):
-        image = to_pil(image)
+    def __call__(self, img: Image):
+        image = to_pil(img)
         h, w = image.size
         tensor = self._preprocess(image)
         if len(tensor.shape) == 3:

result.png (BIN)

Binary file not shown (137 KiB).
