
3 posts tagged with "GPU Cloud"

GPU cloud infrastructure and compute instances


Generate AI Videos on Your Own GPU — WAN 2.1 + ComfyUI, Pre-Configured

Dhayabaran V · Barrack AI · 9 min read

AI video generation models like WAN 2.1 are open-source and free to run. The actual barrier is setup — downloading 14-billion-parameter model weights, installing ComfyUI, configuring custom nodes, resolving dependency conflicts, and ensuring the correct workflow templates are in place. On a fresh VM, this takes hours.

We built two pre-configured VM images that skip the entire process. Every model, every custom node, and every workflow template is already installed. You deploy the VM, open your browser, load a template, and generate video.

This guide covers both images: one for text-to-video, and one that adds image-to-video on top of it.

Run FLUX, Stable Diffusion, and Train LoRAs — One-Click GPU Setup

Dhayabaran V · Barrack AI · 8 min read

Setting up an image generation environment with FLUX, Stable Diffusion, and LoRA training involves installing ComfyUI, downloading multiple model checkpoints (each several gigabytes), configuring Kohya for training, and ensuring all dependencies resolve cleanly. Depending on your starting point, this takes one to several hours.

We built a VM image with everything pre-installed. FLUX and Stable Diffusion models are downloaded. ComfyUI is configured. Kohya training tools are ready. You deploy the VM, open the interface on port 9093 in your browser, and start generating or training.
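Besides the browser UI, ComfyUI also exposes an HTTP API, which is handy for scripting batch generations against the VM. A minimal sketch, assuming the image keeps ComfyUI's standard `POST /prompt` route reachable on the same port 9093 — `YOUR_VM_IP` and the tiny workflow dict are placeholders, not values taken from the image:

```python
import json
import urllib.request

def queue_workflow(host: str, workflow: dict, port: int = 9093) -> urllib.request.Request:
    """Build a POST /prompt request; ComfyUI expects {"prompt": <API-format workflow>}."""
    body = json.dumps({"prompt": workflow}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# A real workflow comes from ComfyUI's "Save (API Format)" export; this stub
# only illustrates the request shape.
req = queue_workflow("YOUR_VM_IP", {"3": {"class_type": "KSampler", "inputs": {}}})
# On a live VM: urllib.request.urlopen(req) returns JSON containing a prompt_id.
```

The same pattern works for the video images above, since they also run behind ComfyUI.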

Self-Host Qwen 3-32B in Minutes — Zero Configuration Required

Dhayabaran V · Barrack AI · 7 min read

Running a 32-billion-parameter language model on your own GPU typically involves installing drivers, setting up Ollama, downloading model weights, configuring a web interface, and troubleshooting port conflicts. That process takes anywhere from 30 minutes to several hours depending on your familiarity with the tooling.

We built a pre-configured VM image that eliminates all of it. You select a GPU, pick the image, and deploy. The model is already downloaded, Ollama is already running, and OpenWebUI is already serving on port 8080. You open your browser and start using it.
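Because Ollama is already running, you can also talk to it programmatically rather than through OpenWebUI. A minimal sketch, assuming Ollama's API is reachable on its default port 11434 and the model is tagged something like `qwen3:32b` (the exact tag on the image may differ) — `YOUR_VM_IP` is a placeholder:

```python
import json
import urllib.request

def build_generate_request(host: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"http://{host}:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("YOUR_VM_IP", "qwen3:32b", "Explain LoRA in one sentence.")
# On a live VM:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

With `"stream": False`, Ollama returns one JSON object whose `response` field holds the full completion, which keeps scripting simple.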

This guide walks through the exact steps.