text2image
Image generation using Stable Diffusion
A text2image operator generates an image from a text prompt. This operator is implemented with Hugging Face Diffusers.
Code example
from towhee import pipe, ops

p = (
    pipe.input('prompt')
        .map('prompt', 'image', ops.text2image.stable_diffusion())
        .output('image')
)

# Calling the pipeline returns a DataQueue; get() extracts the output row.
image = p('an orange cat').get()[0]
image.save('an_orange_cat.png')
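The same pipeline object can be reused for several prompts, so the model is only loaded once. A minimal sketch (the prompt list and file names below are only illustrative):

# Reuse the pipeline across prompts; the model is loaded only once.
prompts = ['an orange cat', 'a blue bird']  # illustrative prompts
for i, prompt in enumerate(prompts):
    img = p(prompt).get()[0]
    img.save(f'image_{i}.png')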
Factory Constructor
Create the operator via the following factory method:
text2image.stable_diffusion(model_id='stabilityai/stable-diffusion-2-1', device=None)
Parameters:
model_id: str
The model id in string, defaults to 'stabilityai/stable-diffusion-2-1'.
Supported model names: pretrained Diffusers models
device: str
The device to run the model on, defaults to None. If None, the operator automatically uses CUDA if a GPU is available.
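For example, to pin the model and device explicitly (the 'cuda' value below is only an illustration; omit device to let the operator auto-select):

p = (
    pipe.input('prompt')
        .map('prompt', 'image',
             ops.text2image.stable_diffusion(
                 model_id='stabilityai/stable-diffusion-2-1',  # default model
                 device='cuda'))  # illustrative; defaults to auto-selection
        .output('image')
)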
Interface
The operator takes a text prompt in string as input. It loads a pretrained diffusion model and generates an image.
__call__(prompt)
Parameters:
prompt: str
The text prompt in string.
Returns:
PIL.Image
The generated image.
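Roughly speaking, the operator wraps the standard Diffusers text-to-image pipeline. The sketch below shows the equivalent plain-Diffusers code, assuming the stock StableDiffusionPipeline API; the operator's actual stable_diffusion.py may differ in its details.

import torch
from diffusers import StableDiffusionPipeline

# Load the pretrained checkpoint and move it to GPU if one is available.
model_id = 'stabilityai/stable-diffusion-2-1'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
sd = StableDiffusionPipeline.from_pretrained(model_id).to(device)

# Generate a PIL.Image from a text prompt.
image = sd('an orange cat').images[0]
image.save('an_orange_cat.png')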