Deploy from the SaladCloud Portal.
See the Benchmark for performance information.

Overview

This recipe creates an inference API for Stable Diffusion XL, an image generation model by Stability AI. It includes both the base and refiner models. Inference is powered by ComfyUI, exposed via a simple HTTP API to facilitate scalable stateless operation. Users can make an HTTP request to the provided endpoints and get back one or more images in base64-encoded form. Optionally, users can receive completed images via a webhook, as sketched after the example below.

Stable Diffusion XL is notable for a number of reasons:

  • High quality images achieved through a multi-model workflow
  • Supports many different art styles
  • Commercial-friendly license

Example Output

curl -X 'POST' \
  "$access_domain_name/workflow/sdxl/txt2img-with-refiner" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "input": {
    "prompt": "8-bit classic video game art of a salad",
    "negative_prompt": "text, watermark, hands",
    "steps": 25,
    "cfg_scale": 9,
    "sampler_name": "euler",
    "scheduler": "normal",
    "base_start_step": 0,
    "base_end_step": 20,
    "refiner_start_step": 20
  }
}' | jq -r '.images[0]' | base64 -d > image.png
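
If you would rather receive completed images via a webhook, as mentioned in the overview, the request looks much the same. This is a minimal sketch that assumes the request body accepts an optional top-level webhook field holding a delivery URL; verify the exact field name against your deployment's /docs endpoint before relying on it:

# Sketch only: the "webhook" field and the receiver URL are assumptions;
# check the schema at your deployment's /docs endpoint.
curl -X 'POST' \
  "$access_domain_name/workflow/sdxl/txt2img-with-refiner" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "webhook": "https://example.com/sdxl-results",
  "input": {
    "prompt": "8-bit classic video game art of a salad",
    "negative_prompt": "text, watermark, hands",
    "steps": 25,
    "cfg_scale": 9,
    "sampler_name": "euler",
    "scheduler": "normal",
    "base_start_step": 0,
    "base_end_step": 20,
    "refiner_start_step": 20
  }
}'

Here https://example.com/sdxl-results is a placeholder for your own webhook receiver.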

How To Use This Recipe

Authentication

When deploying this recipe, you can optionally enable authentication in the container gateway. If you enable authentication, all requests to your API will need to include your SaladCloud API key in the Salad-Api-Key header, as shown below. See the documentation for more information about authentication.
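
For example, with authentication enabled, a request adds one header to the example above. In this sketch, $SALAD_API_KEY is a placeholder environment variable holding your own key, not something the recipe provides:

# Add your SaladCloud API key to every request when gateway auth is enabled.
# $SALAD_API_KEY is a placeholder; export it yourself before running this.
curl -X 'POST' \
  "$access_domain_name/workflow/sdxl/txt2img-with-refiner" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Salad-Api-Key: $SALAD_API_KEY" \
  -d '{ "input": { "prompt": "8-bit classic video game art of a salad" } }'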

Replica Count

The recipe is configured for 3 replicas by default. We recommend at least 3 for testing, and at least 5 for production workloads. SaladCloud’s distributed GPU cloud is powered by idle gaming PCs around the world, in private residences, gaming cafes, and esports arenas. A consequence of this unique infrastructure is that all nodes must be considered interruptible without warning. If a Chef (a compute host) decides to use their GPU to play a video game, or their dog trips on the power cord, or their Wi-Fi goes out, the instance of your workload running on that node will be interrupted, and a new instance will be allocated to a different node. This means you will want to slightly over-provision the capacity you expect to need in order to have adequate coverage during node reallocations. Don’t worry: we only charge for instances that are actually running.

Logging

SaladCloud offers a simple built-in method to view logs from the portal, to facilitate testing and development. For production workloads, we highly recommend connecting an external logging source, such as Axiom. This can be done during container group creation.

Deploy It And Wait

When you deploy the recipe, SaladCloud will find the desired number of qualified nodes and begin downloading the container image to each host machine. This particular image is quite large (~15 GB), and it may take tens of minutes to download to some machines, depending on the network conditions of that particular node. Remember, these are residential PCs with residential internet connections, and performance will vary across nodes.

Eventually, you will see instances enter the running state and show a green checkmark in the “Ready” column, indicating the workload is passing its readiness probe. Once at least one instance is running, the container group as a whole is considered running, but for production you will want to wait until an adequate number of instances are ready before moving traffic over.

Visit The Docs

Once at least one instance is running, you can navigate to the /docs endpoint at the Access Domain Name provided in the portal. For example, if your Access Domain Name were https://vinegar-garden-tnar18sx3xz0n4st.salad.cloud, the docs would be at https://vinegar-garden-tnar18sx3xz0n4st.salad.cloud/docs. There you’ll find the Swagger documentation for the API.
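
You can also confirm this from the command line. This sketch simply checks that the docs page is being served at your Access Domain Name:

# Prints "up" once the docs page responds successfully.
curl -sf "$access_domain_name/docs" > /dev/null && echo "up"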

Workload Customizations

Hardware Considerations

We recommend at least 24 GB of system RAM for this workload, with 30 GB preferred. We also recommend a GPU with 24 GB of VRAM if you are using the refiner model; a GPU with 16 GB of VRAM is enough if you are only using the base model. Our default configuration uses an RTX 4090 with 24 GB of VRAM, 30 GB of system RAM, and 4 vCPUs. You should conduct your own performance testing for your specific workload and hardware configuration.

Custom Models And Nodes

To use a different model, follow this guide, but copy in your custom model instead of the default one, and ensure your warmup workflow references the correct checkpoint name. Then push the new image to the image registry of your choice, and edit the container group to reference it.
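
As a rough sketch of that workflow, assuming the recipe image can be extended with a Dockerfile and that ComfyUI loads checkpoints from its conventional models/checkpoints directory (the image names, tags, and paths below are all placeholders; follow the linked guide for the authoritative steps):

# Layer a custom checkpoint onto the recipe's image.
# "sdxl-recipe-base" is a placeholder; use the actual base image from the guide.
cat > Dockerfile <<'EOF'
FROM sdxl-recipe-base
# ComfyUI conventionally loads checkpoints from models/checkpoints;
# adjust the destination if the image uses a different layout.
COPY my-custom-model.safetensors ./models/checkpoints/
EOF

# Build, push to your registry, then edit the container group to use the new image.
docker build -t registry.example.com/sdxl-custom:v1 .
docker push registry.example.com/sdxl-custom:v1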

Custom Endpoints

To add custom endpoints or other custom functionality to the API server, you can add JavaScript or TypeScript files to the Docker image, following this example.

API Reference

You can see the full API documentation at the /docs endpoint of your container group’s Access Domain Name. It can also be found in the API Reference.