img2img-translation
Animating using AnimeGANv2
author: Filip Haltmayer
Description
Convert an image into an animated image using AnimeGANv2.
Code Example
Load an image from the path './image.png'.
Write the pipeline in the simplified style:
```python
import towhee
import numpy
from PIL import Image

# Decode './image.png' and run it through the AnimeGAN ('hayao') style-transfer operator.
pipeline = towhee.glob('./image.png').image_decode.cv2().img2img_translation.animegan(model_name='hayao')

# The operator returns a channels-first float image; convert it to PIL for display.
img = pipeline.to_list()[0]
img = numpy.transpose(img, (1, 2, 0))  # C x H x W -> H x W x C
img = Image.fromarray((img * 255).astype(numpy.uint8))
img.show()
```
Factory Constructor
Create the operator via the following factory method:

```python
img2img_translation.animegan(model_name='which anime model to use')
```
Model options:
- celeba
- facepaintv1
- facepaintv2
- hayao
- paprika
- shinkai
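For example, the pipeline from the code example above can be switched to a different style simply by changing `model_name`. The following is a minimal sketch that reuses the './image.png' path from the code example together with the 'paprika' model from the list above:

```python
import towhee

# Same pipeline as in the code example, but using the 'paprika' style model.
pipeline = towhee.glob('./image.png').image_decode.cv2().img2img_translation.animegan(model_name='paprika')
img = pipeline.to_list()[0]  # channels-first numpy.ndarray
```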
Interface
Takes in a numpy RGB image in channels-first format and transforms it into an animated image, returned in numpy form.
Parameters:
model_name: str
Which model to use for the style transfer.
framework: str
Which ML framework to use; currently only PyTorch is supported.
Returns: numpy.ndarray
The transformed image, in channels-first format.
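As a usage sketch, the returned array can be converted to a standard PIL image in the same way as in the code example above; the helper name `to_pil` is illustrative only, and the output values are assumed to lie in the [0, 1] range:

```python
import numpy
from PIL import Image

def to_pil(img: numpy.ndarray) -> Image.Image:
    # Illustrative helper (assumes [0, 1] float values, channels-first layout):
    # rearrange to H x W x C and rescale to 8-bit for display or saving.
    img = numpy.transpose(img, (1, 2, 0))
    return Image.fromarray((img * 255).astype(numpy.uint8))
```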
Reference
Jie Chen, Gang Liu, Xin Chen. "AnimeGAN: A Novel Lightweight GAN for Photo Animation." ISICA 2019: Artificial Intelligence Algorithms and Applications, pp. 242-256, 2019.