Image Generation On SaladCloud
How to deploy an image generation service on SaladCloud
SaladCloud is the perfect platform for deploying image generation services. We power some of the most popular image generation sites on the internet, such as Civitai and Blend, and we've done the math: SaladCloud is the most cost-effective way to deploy image generation services at scale.
High Level
Regardless of your choice of Stable Diffusion inference server, models, or extensions, the basic process is as follows:
- Get a Docker image that runs your inference server
- Copy any models and extensions you want into the Docker image (see the sketch below)
- Enable some way to access your container group, either through the Container Gateway or a job queue
- Push the new image up to a container registry
- Deploy the image as a SaladCloud container group
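For example, the whole loop can be as small as the sketch below. This is a minimal sketch, not a definitive recipe: it assumes a local checkpoint file, uses the ComfyUI image from the validated list further down, and the registry name (yourorg), the model filename, and the checkpoints subfolder (ComfyUI's standard layout for checkpoint models) are placeholders to adapt to your own setup.

```bash
# Sketch: extend a validated image with your own model, then build and push it.
# "yourorg" and "my-model.safetensors" are placeholders; the base image and
# model directory come from the validated images listed below.
cat > Dockerfile <<'EOF'
FROM ghcr.io/ai-dock/comfyui:latest-cuda
# Bake the checkpoint into the image so every replica starts with it on disk.
COPY my-model.safetensors /opt/ComfyUI/models/checkpoints/
EOF

docker build -t yourorg/comfyui-custom:latest .
docker push yourorg/comfyui-custom:latest
# The pushed image can then be deployed as a container group from the
# SaladCloud Portal or API.
```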
Validated Docker Images
Here are some popular Stable Diffusion inference servers that we've verified work on SaladCloud. Note that you will interact with these servers through their APIs, not through their browser user interfaces. A sketch of customizing one of these images follows the list.
- ComfyUI
  - Guide: ComfyUI
  - Git Repo: https://github.com/ai-dock/comfyui
  - Docker Image: ghcr.io/ai-dock/comfyui:latest-cuda
  - Model Directory: /opt/ComfyUI/models
  - Custom Node Directory: /opt/ComfyUI/custom_nodes/
- Automatic1111
  - Guide: Automatic1111
  - Git Repo: https://github.com/SaladTechnologies/stable-diffusion-webui-docker
  - Docker Image: saladtechnologies/a1111:ipv6-latest
  - Data Directory: /data
  - Model Directory: /data/models
  - Extension Directory: /data/config/auto/extensions
  - ControlNet Model Directory (if the ControlNet extension is installed): /data/config/auto/extensions/sd-webui-controlnet/models
- SD.Next
  - Guide: SD.Next
  - Git Repo: https://github.com/SaladTechnologies/sdnext
  - Docker Image: saladtechnologies/sdnext:base
  - Data Directory: /webui/data
  - Model Directory: /webui/data/models
  - Extension Directory: /webui/data/extensions
  - ControlNet Model Directory: /webui/extensions-builtin/sd-webui-controlnet/models
- Fooocus
  - Guide: Fooocus
  - Git Repo: https://github.com/mrhan1993/Fooocus-API
  - Docker Image: konieshadow/fooocus-api:v0.4.1.1
  - Model Directory: /app/repositories/Fooocus/models
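As a further sketch, extensions and ControlNet models can be baked into the Automatic1111 image the same way, using the directories listed above. The local extension folder and ControlNet model filename below are placeholders for whatever you actually use.

```bash
# Sketch: add an extension and a ControlNet model to the Automatic1111 image,
# using the extension and ControlNet directories listed above. The local
# folder and filename are placeholders.
cat > Dockerfile <<'EOF'
FROM saladtechnologies/a1111:ipv6-latest
# Extensions live under /data/config/auto/extensions in this image.
COPY my-extension/ /data/config/auto/extensions/my-extension/
# ControlNet models live under the ControlNet extension's models directory.
COPY my-controlnet-model.safetensors /data/config/auto/extensions/sd-webui-controlnet/models/
EOF

docker build -t yourorg/a1111-custom:latest .
```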
Container Gateway or Job Queue?
When deploying your Stable Diffusion inference server, you have two options for how to access it:
- Container Gateway: This is the easiest way to access your container group.
  - It can be enabled during container group creation, and maps a public HTTPS URL to a specific IPv6 port in your container.
  - You can optionally enable authentication, which then requires your SaladCloud API key to access the container group.
  - Only requires your server to listen on IPv6; no additional binary is needed.
  - SaladCloud's Container Gateway uses least-connection load balancing, so you can scale your container group up or down without needing to change the URL.
  - Completed image generations are returned to the client in the response, as in the example request after this comparison.
  - However, it is not suitable for long-running tasks, as the Container Gateway will time out after 100 seconds, a hard limit imposed by Cloudflare on idle connections. For most image generation workloads this is not a problem, as generations typically take under 30 seconds.
  - Does Not Retry Failed Requests: If a request fails, the client must retry it.
  - Excess load will result in failed requests, as the Container Gateway does not queue requests.
- Job Queue: This is a more resilient way to access your container group.
  - You submit jobs to a queue, and the container group processes them in order.
  - You must add a binary to your container image that connects the job queue to your inference server.
  - High utilization of the container group will result in longer queue times, but no failed requests.
  - Retries Failed Requests: If a request fails, the job queue will retry the request 3 times.
  - Long-Running Tasks: The job queue is suitable for long-running tasks, as it does not have a timeout.
  - Fully asynchronous: Completed jobs are not returned to the client, but must instead be fetched from the job queue or received via a webhook.
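As an example, a txt2img request to an Automatic1111 container group through the Container Gateway might look like the sketch below. The access domain is a placeholder for the URL generated when the container group is created, and the Salad-Api-Key header is only needed if authentication is enabled on the gateway.

```bash
# Sketch: call an Automatic1111 deployment through the Container Gateway.
# Replace the access domain with the one generated for your container group;
# omit the Salad-Api-Key header if authentication is disabled.
curl -X POST https://tomato-cilantro-abc123.salad.cloud/sdapi/v1/txt2img \
  -H 'Content-Type: application/json' \
  -H 'Salad-Api-Key: YOUR_SALADCLOUD_API_KEY' \
  -d '{"prompt": "a watercolor painting of a fox", "steps": 20, "width": 512, "height": 512}'
# The response body includes the generated image(s) as base64-encoded strings.
```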
There are several factors you should consider when choosing between the two options:
- How predictable is your load? Scaling services with many gigabytes of models cannot be done instantly, so if you expect sudden spikes in traffic, you may want to use a job queue. On the other hand, if your load is very predictable, the Container Gateway is easier to use, and you can simply scale your container group up or down as needed with about an hour of lead time.
- How long do you expect your tasks to take? Most image generation tasks take just a few seconds on a powerful GPU, but if you have a particularly complex multi-modal workflow and are using older, less expensive GPUs, you may end up with generation times approaching the Cloudflare timeout limit of 100 seconds.
- What is the cost structure for your client application? If you are using a serverless architecture, check whether you are billed for CPU time or for wall-clock time of your function execution. If you are billed for wall-clock time, you will likely want to use the job queue, as you will not be billed for time your requests spend waiting in the queue. If you are billed for CPU time, there will be little difference between the two options.
Hardware Requirements
The hardware requirements for your Stable Diffusion inference server will depend on the type of model you are running, and to some extent on your choice of inference server.
- For models based on Stable Diffusion 1.5, you can get by with as little as 8 GB of VRAM and at least 8 GB of system RAM. If you are using multiple LoRAs, ControlNets, etc., you may need more VRAM and system RAM. The only way to be certain is to measure real-world performance.
- For models based on Stable Diffusion XL, you will need at least 16 GB of VRAM, or 24 GB if you intend to use the refiner. For SD.Next and Automatic1111, you will need at least 30 GB of system RAM if you intend to use the refiner, and 24 GB if you do not. For ComfyUI, you will need at least 16 GB of system RAM, but may find better performance with more.
- For models based on Flux.1 (fp8 or nf4), you will want 24 GB of VRAM and 24 GB of system RAM.
Warming up your server
In most situations, the first request to an inference server will be significantly slower than subsequent requests, because the server must load the model into VRAM and initialize its internal caches. For this reason, it is often desirable to send a dummy request to the server before sending real requests. On SaladCloud, this can be done conveniently via the Startup Probe in the container group configuration: set the startup probe to a curl command that submits an inference request to the server. The server is then warmed up before it is added to the load balancer, which can result in a significant performance improvement for the first real requests.
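For example, a warm-up request against the Automatic1111 image listed above might look like the sketch below. The port and endpoint are assumptions based on Automatic1111's defaults, so adjust them (and the payload) for your own server.

```bash
# Sketch: a warm-up request suitable for use as an exec-style startup probe.
# Assumes the Automatic1111 API is listening on port 7860 inside the container;
# use http://[::1]:7860 instead if your server only binds IPv6.
# The -f flag makes curl exit non-zero on HTTP errors, so the probe keeps
# failing until the server can actually complete a generation.
curl -f -X POST http://localhost:7860/sdapi/v1/txt2img \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "warmup", "steps": 1, "width": 512, "height": 512}'
```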