Download the PHP package andrey-helldar/easy-diffusion-ui-samplers-generator without Composer
On this page you can find all versions of the PHP package andrey-helldar/easy-diffusion-ui-samplers-generator. These versions can be downloaded and installed without Composer; possible dependencies are resolved automatically.
Package: easy-diffusion-ui-samplers-generator
Short Description: Sampler generator for Easy Diffusion UI
License: MIT
Information about the package easy-diffusion-ui-samplers-generator
Easy Diffusion UI: Samplers Generator
Installation
First you need to download and run Easy Diffusion.
Next, make sure you have Composer, PHP 8.1 or higher and Git installed on your computer.
Next, you may create a new Samplers Generator project via the Composer create-project command:
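(A minimal sketch of that step; the trailing samplers-generator directory name is just an example.)

```bash
# Create a new project from the Packagist package;
# "samplers-generator" is an arbitrary target directory name.
composer create-project andrey-helldar/easy-diffusion-ui-samplers-generator samplers-generator
```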
Or you can download this repository:
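(A sketch of the Git route; the repository URL below is assumed from the package name and may differ.)

```bash
# Clone the repository (URL inferred from the package vendor/name)
git clone https://github.com/andrey-helldar/easy-diffusion-ui-samplers-generator.git
```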
Next, go to the project folder and install the dependencies:
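(A sketch of that step, assuming the project was placed in the samplers-generator folder used above.)

```bash
# Adjust the folder name to wherever the project was placed
cd samplers-generator
composer install
```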
Configuration
The project has a few static settings: the number of steps to use when generating images, and the image sizes.
Configuration files are located in the config folder.
Usage
First, you need to run the neural network Python script. See more: https://github.com/cmdr2/stable-diffusion-ui/wiki/How-to-Use
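How the script is started depends on your Easy Diffusion installation and platform; a rough sketch for Linux/macOS (the path and script name below may differ on your machine):

```bash
# Go to your Easy Diffusion installation folder (path is an example)
cd ~/easy-diffusion
# Start the Stable Diffusion UI server; Windows installs ship a .cmd starter instead
./start.sh
```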
Sampler generation for all available models
To do this, you need to call the bin/sampler models console command, passing it the required --prompt parameter.
For example:
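(The prompt text below is only a placeholder; substitute your own query.)

```bash
bin/sampler models --prompt "a realistic photo of an astronaut riding a horse"
```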
If you want to generate a sampler table for a previously generated image, you also need to pass the --seed parameter with that image's seed when invoking the console command.
For example:
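(The prompt and seed below are placeholders; use the seed of your previously generated image.)

```bash
bin/sampler models --prompt "a realistic photo of an astronaut riding a horse" --seed 2699388
```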
Available options
- --prompt - Query string for image generation. It's a string.
- --negative-prompt - Exclusion words for query generation. It's a string.
- --tags - Image generation modifiers. It's an array.
- --fix-faces - Enables fixing of incorrect faces and eyes via GFPGANv1.3. It's a boolean.
- --path - Path to save the generated samples. By default, the ./build subfolder inside the current directory.
- --seed - Seed ID of a previously generated image.
- --output-format - Sets the file export format: jpeg or png. By default, jpeg.
- --output-quality - Specifies the percentage quality of the output image. By default, 75.
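Putting several of the options above together, a hypothetical call could look like this (all values are placeholders; the --fix-faces flag and the repeated --tags syntax assume the usual Symfony Console option handling):

```bash
bin/sampler models \
    --prompt "a realistic photo of an astronaut riding a horse" \
    --negative-prompt "blurry, deformed" \
    --tags "cinematic" --tags "highly detailed" \
    --fix-faces \
    --path ./build \
    --output-format png \
    --output-quality 90
```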
Sampler generation for one model
To do this, you need to call the bin/sampler model console command, passing it the required --prompt and --model parameters.
For example:
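(The model name and prompt below are placeholders.)

```bash
bin/sampler model --model "sd-v1-5" --prompt "a realistic photo of an astronaut riding a horse"
```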
If you want to generate a sampler table for a previously generated image, you also need to pass the --seed parameter with that image's seed when invoking the console command.
For example:
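(Model name, prompt, and seed below are placeholders.)

```bash
bin/sampler model --model "sd-v1-5" --prompt "a realistic photo of an astronaut riding a horse" --seed 2699388
```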
Available options
- --model - Model for generating samples. It's a string.
- --prompt - Query string for image generation. It's a string.
- --negative-prompt - Exclusion words for query generation. It's a string.
- --tags - Image generation modifiers. It's an array.
- --fix-faces - Enables fixing of incorrect faces and eyes via GFPGANv1.3. It's a boolean.
- --path - Path to save the generated samples. By default, the ./build subfolder inside the current directory.
- --seed - Seed ID of a previously generated image.
- --output-format - Sets the file export format: jpeg or png. By default, jpeg.
- --output-quality - Specifies the percentage quality of the output image. By default, 75.
Sampler generation for all models based on configuration files
You can also copy configurations from the Stable Diffusion UI web interface to the clipboard and then, using any text editor, save them to any folder on your computer.
After you have saved as many configuration files as you need in a folder, you can call the bin/samplers settings --path command, passing it the path to that folder:
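(The folder path below is a placeholder for wherever you saved the configuration files.)

```bash
bin/samplers settings --path /home/user/sd-configs
```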
When the command runs, the script finds all JSON files in the root of the specified folder (without recursive search), checks that they are filled in correctly (invalid files are skipped without raising errors), and starts generating samplers for each file, for all models available to the neural network.
The sampler table is generated using the Seed ID taken from the configuration file.
For example, output info:
The target folder will contain the collected sampler files (jpeg or png), as well as a set of configurations for them.
For example:
Available options
- --path - Path to save the generated samples. By default, the ./build subfolder inside the current directory.
- --output-format - Sets the file export format: jpeg or png. By default, jpeg.
- --output-quality - Specifies the percentage quality of the output image. By default, 75.
Example
Source image, generated sampler sheet, and config.
The last used configuration for the model is saved to the file
FAQ
Q: Why is it not written in Python?
A: I was interested in writing a pet project in one evening, and I don't know Python 😁
Q: What models do you use?
A: For various purposes, I use the following models:
- Models:
- https://civitai.com/?types=Checkpoint
- https://civitai.com/models/1102/synthwavepunk (version: 3)
- https://civitai.com/models/1259/elldreths-og-4060-mix (version: 1)
- https://civitai.com/models/1116/rpg (version: 1)
- https://civitai.com/models/1186/novel-inkpunk-f222 (version: 1)
- https://civitai.com/models/5/elden-ring-style (version: 3)
- https://civitai.com/models/1377/sidon-architectural-model (version: 1)
- https://rentry.org/sdmodels
- VAE
License
This package is licensed under the MIT License.
All versions of easy-diffusion-ui-samplers-generator with dependencies
- ext-imagick: *
- ext-json: *
- archtechx/enums: ^0.3.1
- dragon-code/simple-dto: ^2.7
- dragon-code/support: ^6.8
- guzzlehttp/guzzle: ^7.5
- illuminate/console: ^9.45
- intervention/image: ^2.7
- nesbot/carbon: ^2.64
- symfony/console: ^6.2
- symfony/dom-crawler: ^6.2