Run Your Own Private AI Image Generator with Docker and Open WebUI

Introduction

Imagine a scenario where you need a few AI-generated images for a project. You open a cloud-based service, and suddenly you’re worrying about what the provider does with your prompts, how many credits remain, or why the safety filter rejected your perfectly innocent request for a dragon in a business suit. What if you could bypass all that and run the entire pipeline on your own hardware, complete with a sleek chat interface?

[Image source: www.docker.com]

That’s exactly what Docker Model Runner now enables. With just a few commands, you can pull a diffusion model, connect it to Open WebUI, and start generating images from a chat interface — entirely local, entirely private, fully under your control.

Let’s build your own private DALL‑E, no subscription required.

What You’ll Need

You'll need a recent version of Docker with the Model Runner feature enabled, plus enough disk space for the model weights. A GPU speeds things up considerably but isn't required. If you can run docker model version without errors, you're ready to proceed.

How Docker Model Runner Works with Open WebUI

Before we dive into the steps, here’s the big picture: Docker Model Runner acts as the control plane. It downloads the model, manages the lifecycle of the inference backend, and exposes a 100% OpenAI-compatible API — including the POST /v1/images/generations endpoint that Open WebUI already knows how to talk to.

This means once you launch the combination, Open WebUI can send image generation requests to your local model just as easily as it would call a cloud service — but with zero data leaving your machine.
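Because the endpoint speaks the OpenAI API, you can also call it directly, without Open WebUI in the middle. Here's a minimal Python sketch; the host URL below is an assumption for a typical Docker Desktop setup (check your Model Runner settings for the actual endpoint), and the helper names are mine:

```python
import base64
import json
import urllib.request

# Assumed local endpoint -- the host port and path prefix may differ
# on your machine; verify against your Model Runner configuration.
API_URL = "http://localhost:12434/engines/v1/images/generations"

def build_payload(prompt, model="stable-diffusion"):
    """Build an OpenAI-style image generation request body."""
    return {"model": model, "prompt": prompt, "n": 1,
            "response_format": "b64_json"}

def save_first_image(response_json, path="out.png"):
    """Decode the first base64-encoded image from an OpenAI-style
    images response and write it to disk."""
    b64 = response_json["data"][0]["b64_json"]
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64))
    return path

def generate(prompt, path="dragon.png"):
    """POST a prompt to the local endpoint and save the result.
    Requires Model Runner to be serving the model locally."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return save_first_image(json.load(resp), path)
```

The same request shape works from curl or any OpenAI client library pointed at the local base URL.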

Step 1: Pull an Image Generation Model

Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format) to distribute models through Docker Hub, just like any other OCI artifact.

Pull a model to get started:

docker model pull stable-diffusion

You can confirm it’s ready with:

docker model inspect stable-diffusion

This command returns a JSON object with the model’s ID, tags, creation timestamp, and configuration. The “diffusers” section tells you the DDUF file name and layout — for example, stable-diffusion-xl-base-1.0-FP16.dduf.
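If you want to grab that file name programmatically, a small sketch like the following works. Since the exact JSON layout isn't pinned down here and may vary between Model Runner versions, it searches the whole document for a .dduf string rather than hard-coding key names:

```python
import json
import subprocess

def find_dduf(info):
    """Walk a parsed `docker model inspect` JSON document and return
    the first string ending in .dduf, or None if there isn't one."""
    stack = [info]
    while stack:
        node = stack.pop()
        if isinstance(node, dict):
            stack.extend(node.values())
        elif isinstance(node, list):
            stack.extend(node)
        elif isinstance(node, str) and node.endswith(".dduf"):
            return node
    return None

def inspect_model(name="stable-diffusion"):
    """Shell out to `docker model inspect` and parse its JSON output."""
    out = subprocess.run(["docker", "model", "inspect", name],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# Example: find_dduf(inspect_model()) yields the DDUF file name,
# e.g. a string like "stable-diffusion-xl-base-1.0-FP16.dduf".
```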

What’s happening under the hood? The model is stored locally as a DDUF file — a single-file archive that bundles all necessary components of a diffusion model (text encoder, VAE, UNet or DiT, scheduler config) into one portable artifact. Docker Model Runner automatically unpacks it at runtime.
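DDUF is built on the ZIP container format, so you can peek inside one with standard zip tooling. A quick sketch (the file path is illustrative; Model Runner manages its own local store):

```python
import zipfile

def list_dduf(path):
    """List the component files bundled inside a DDUF archive.
    DDUF uses the ZIP container format with uncompressed entries,
    so Python's standard zipfile module can read it directly."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# Example (illustrative path):
# for name in list_dduf("stable-diffusion-xl-base-1.0-FP16.dduf"):
#     print(name)  # entries for the text encoder, VAE, UNet/DiT, scheduler
```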

Step 2: Launch Open WebUI

Here’s the magic trick: Docker Model Runner has a built‑in launch command that knows exactly how to wire up Open WebUI against your local inference endpoint:

docker model launch openwebui

That’s it — one command. Under the covers, it starts a container running Open WebUI, configures it to point at the /v1/images/generations endpoint of the local Model Runner, and opens the UI in your browser.
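If you prefer to manage the container yourself, a rough manual equivalent looks like the sketch below. The environment variables are Open WebUI's image-generation settings; the base URL is an assumption for Docker Desktop's internal Model Runner address, so adjust it for your setup:

```shell
# Sketch of the wiring `docker model launch openwebui` does for you.
# The IMAGES_OPENAI_API_BASE_URL value assumes Docker Desktop's
# internal DNS name for Model Runner; a local key is not needed,
# so any placeholder value works for IMAGES_OPENAI_API_KEY.
docker run -d -p 3000:8080 \
  -e ENABLE_IMAGE_GENERATION=true \
  -e IMAGE_GENERATION_ENGINE=openai \
  -e IMAGES_OPENAI_API_BASE_URL=http://model-runner.docker.internal/engines/v1 \
  -e IMAGES_OPENAI_API_KEY=none \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```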

Step 3: Generate Images from the Chat Interface

Open WebUI will automatically detect the image generation capability. Look for a small image icon (camera or picture) near the prompt input area and click it to enable image generation.

Type a prompt like “a majestic dragon wearing a tailored business suit, sitting in a boardroom, photorealistic” and hit Generate. The model processes locally — your data stays on your machine. After a few seconds (depending on your GPU/CPU), the image appears directly in the chat.

Under the Hood: DDUF and Performance

The DDUF format is key to making this so simple. Each DDUF file contains the entire diffusion pipeline in a single download, eliminating the need to manually fetch separate model weights, tokenizers, or configuration files. Docker Model Runner handles the extraction and loading seamlessly.

Performance will vary by hardware: on a modern discrete GPU an image typically appears within seconds, while CPU-only generation can take considerably longer. You can monitor GPU usage with nvidia-smi (Linux) or Activity Monitor (macOS).

Customizing Your Setup

Once you have the basic pipeline running, you can customize it further. For example, you can pull other diffusion models with docker model pull and select them from Open WebUI's image settings.

Conclusion

With Docker Model Runner and Open WebUI, you now have a fully local, private image generation studio running inside a chat interface. No cloud credits, no data leaks, no unpredictable service terms. Just you, your prompts, and a local neural network creating exactly what you imagine.

The best part? It takes only two commands to get started: docker model pull stable-diffusion followed by docker model launch openwebui. Your own private DALL‑E, ready in minutes.
