AI Background Remover
Remove image backgrounds instantly with Edge AI — fully in your browser. Powered by Transformers.js + WebGPU/WASM. No server upload. Complete privacy protection.
How to use
1. Wait for the AI model to load in your browser (first visit requires a one-time download of ~175 MB; subsequent visits use the cached model instantly).
2. Drag and drop an image onto the upload zone, click "Select Image" to browse your files, or press Ctrl + V to paste directly from your clipboard.
3. The AI analyzes the image and separates foreground from background in 1–3 seconds. A before/after comparison slider appears immediately.
4. Drag the slider left and right to compare the original and the result. Adjust background color (Transparent, White, Black, or Custom) to match your use case.
5. Use the Edge Adjustment sliders to fine-tune edge smoothness, feathering, and sharpness of the mask. Then click "Download PNG" to save the result with full alpha transparency.
Key Features
- ✓ Fully client-side Edge AI: the briaai/RMBG-1.4 model runs entirely in your browser using WebGPU (with WASM fallback). No image is ever sent to any server — your photos stay private.
- ✓ RMBG-1.4 precision: this state-of-the-art model delivers remove.bg-level quality for both people and objects with accurate edge detection on hair, fur, and fine details.
- ✓ Instant before/after comparison: a draggable slider overlays the original and result images so you can evaluate quality at a glance before downloading.
- ✓ Transparent background support: the output is a PNG with a full alpha channel — ready for use in design tools, presentations, e-commerce product shots, and more.
- ✓ Background color replacement: apply white, black, or any custom color behind the subject in one click, previewed in real time.
- ✓ Edge Adjustment controls: three sliders (Smooth, Feather, Refine) let you soften hard edges, add natural-looking blur at transitions, and sharpen or soften the mask contrast.
- ✓ Drag & drop + paste: load images by dragging from your desktop or file manager, pasting with Ctrl+V, or using the file picker.
- ✓ Model caching: after the first download, the model is cached by the browser and loads in under a second on subsequent visits.
FAQ
Q. Is my image uploaded to a server?
A. No. This tool is a fully static application; there is no server of any kind. The AI model runs entirely in your browser via WebGPU (or WASM on older browsers). Your image is processed in browser memory and never transmitted over the network.
Q. How large is the AI model download?
A. The briaai/RMBG-1.4 ONNX model is approximately 175 MB. It is downloaded once from Hugging Face on first use and then cached by the browser. All subsequent visits load from cache in under one second.
Q. What image formats does it accept?
A. Any format your browser can decode — JPEG, PNG, WebP, GIF, BMP, TIFF, AVIF, and HEIC (on supporting browsers). The output is always PNG with a preserved alpha channel.
Q. Why does it work best on people and objects?
A. RMBG-1.4 was trained on a large, diverse dataset covering portraits, animals, products, vehicles, and general objects. It performs best when the subject has a reasonably defined boundary. Very complex textures or low-contrast subjects may produce less precise masks.
Q. What do the edge adjustment sliders do?
A. Smooth applies a light blur to reduce jagged pixel-level artifacts. Feather applies a stronger blur that creates a soft, natural-looking edge fade — useful for portraits. Refine increases the contrast of the mask transition, making edges crisper and removing semi-transparent halos.
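Conceptually, all three sliders are simple operations on the mask's alpha values. The sketch below is an illustrative model only — the function names and exact curves are assumptions, not the tool's actual code:

```javascript
// Illustrative model of the edge sliders as alpha-channel math.
// Alpha values are treated as numbers in [0, 1].

// Refine: push alpha toward 0 or 1 around a 0.5 pivot, removing
// semi-transparent halo pixels. `strength` in [0, 1].
function refineAlpha(alpha, strength) {
  const k = 1 + strength * 9; // contrast factor from 1 to 10
  return alpha.map((a) => {
    const v = (a - 0.5) * k + 0.5;
    return Math.min(1, Math.max(0, v));
  });
}

// Smooth / Feather: a box blur over the alpha values; a small radius
// reduces jagged artifacts, a larger radius gives the soft feathered
// fade. Shown in 1D here for brevity; real masks are blurred in 2D.
function blurAlpha(alpha, radius) {
  return alpha.map((_, i) => {
    let sum = 0;
    let n = 0;
    for (let j = i - radius; j <= i + radius; j++) {
      if (j >= 0 && j < alpha.length) {
        sum += alpha[j];
        n++;
      }
    }
    return sum / n;
  });
}
```

At full strength, refine turns a hazy 40%-opaque halo pixel fully transparent and a 60% pixel fully opaque, which is why it reads as "crisper edges."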
Q. Can I use the result in Photoshop or Figma?
A. Yes. The downloaded PNG file preserves the alpha channel. It can be imported directly into Photoshop, Figma, Sketch, Canva, Affinity Photo, and any other tool that supports transparent PNG.
Technical Deep-dive: Edge AI with Transformers.js and RMBG-1.4
This tool is built on Transformers.js v3, a JavaScript port of Hugging Face's Transformers library. It allows ONNX neural network models to run directly in the browser using WebGPU for GPU-accelerated inference, with automatic WASM fallback for older browsers. The model used is briaai/RMBG-1.4, a state-of-the-art image matting model developed by BRIA AI. It produces high-quality foreground masks that rival cloud-based background removal services like remove.bg.
The entire inference pipeline runs inside a Web Worker so the browser's main thread is never blocked. The worker loads AutoProcessor and AutoModel from @huggingface/transformers, which handles model fetching, ONNX session creation, and WebGPU/WASM backend selection. The model is cached in the browser after the first download, so subsequent visits load in under one second.
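The worker described above might look like the condensed sketch below. It assumes the `@huggingface/transformers` v3 API and that the RMBG-1.4 ONNX export exposes its mask under the name `output`, following the library's published background-removal example; treat the details as assumptions rather than this tool's exact source.

```javascript
// worker.js — hedged sketch of the inference worker. Assumes the
// @huggingface/transformers v3 API; verify names against the docs.
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';

// Load once at worker startup. The library handles model fetching,
// browser caching, and WebGPU/WASM backend selection.
const model = await AutoModel.from_pretrained('briaai/RMBG-1.4');
const processor = await AutoProcessor.from_pretrained('briaai/RMBG-1.4');

self.onmessage = async (event) => {
  // Rebuild a Blob URL from the transferred ArrayBuffer and decode it.
  const url = URL.createObjectURL(new Blob([event.data.buffer]));
  const image = await RawImage.fromURL(url);

  // Preprocess to the model's input resolution and run inference.
  const { pixel_values } = await processor(image);
  const { output } = await model({ input: pixel_values });

  // Scale the single-channel mask back to the original image size.
  const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8'))
    .resize(image.width, image.height);

  self.postMessage({ width: image.width, height: image.height, mask: mask.data });
  URL.revokeObjectURL(url);
};
```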
When an image is submitted, the Worker receives its raw bytes as an ArrayBuffer transferred without copying. A Blob URL is constructed inside the Worker, and RawImage.fromURL() decodes it into an RGB tensor. The AutoProcessor resizes the image to 1024×1024, normalizes the pixel values, and produces a pixel_values tensor ready for the model.
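The "transferred without copying" step relies on transferable objects: the ArrayBuffer is detached from the sender rather than cloned. A small runnable demonstration using structuredClone, which accepts the same transfer list as worker.postMessage:

```javascript
// Demonstrates zero-copy transfer of an ArrayBuffer: after the
// transfer, the sender's buffer is detached instead of copied.
const original = new Uint8Array([1, 2, 3, 4]).buffer;

const received = structuredClone(
  { type: 'segment', buffer: original },
  { transfer: [original] }
);

console.log(original.byteLength);        // → 0 (sender's buffer is detached)
console.log(received.buffer.byteLength); // → 4 (receiver owns the bytes)
```

The same mechanics apply to `worker.postMessage(message, [buffer])`, which is what keeps multi-megabyte images from being duplicated on every hand-off.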
The model outputs a single-channel probability mask at the same spatial resolution. The Worker uses RawImage.fromTensor() and .resize() to scale the mask back to the original image size, then assembles the final RGBA result by combining the original pixel data with the mask values as the alpha channel. This composited data is transferred back to the main thread as a zero-copy ArrayBuffer.
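The RGBA assembly step can be sketched as a pure function — a simplified stand-in for the actual RawImage-based code, with hypothetical names:

```javascript
// Combine original RGB pixel data with a single-channel mask into RGBA
// output: the mask value becomes the alpha channel.
// rgb:  Uint8ClampedArray of length width*height*3
// mask: Uint8ClampedArray of length width*height (0 = background, 255 = subject)
function applyMask(rgb, mask) {
  const out = new Uint8ClampedArray(mask.length * 4);
  for (let i = 0; i < mask.length; i++) {
    out[i * 4]     = rgb[i * 3];     // R
    out[i * 4 + 1] = rgb[i * 3 + 1]; // G
    out[i * 4 + 2] = rgb[i * 3 + 2]; // B
    out[i * 4 + 3] = mask[i];        // alpha taken from the model's mask
  }
  return out;
}
```

The resulting buffer has the exact layout ImageData expects, so the main thread can draw it to a canvas without further conversion.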
On the main thread, the composited data is drawn to an HTML Canvas element. Background color replacement is achieved by filling the canvas with the selected color and drawing the RGBA image on top using the browser's native alpha compositing (source-over mode). The before/after slider uses CSS clip-path on an absolutely positioned element, giving smooth GPU-accelerated visual transitions without any JavaScript animation loop.
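Per pixel, source-over compositing over an opaque background reduces to out = fg·α + bg·(1 − α). The canvas performs this natively; the function below is only an illustrative sketch of the formula:

```javascript
// Per-channel source-over blend: paints a foreground sample with alpha
// `a` (0–255) over an opaque background channel value, mirroring what
// the canvas does when the RGBA cutout is drawn on a filled background.
function sourceOver(fg, bg, a) {
  const alpha = a / 255;
  return Math.round(fg * alpha + bg * (1 - alpha));
}
```

Fully transparent pixels (α = 0) show the chosen background color untouched, fully opaque pixels (α = 255) show the subject, and feathered edge pixels blend the two — which is why soft edges look natural on any replacement color.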
Privacy & Security
This is a static-only application built with Next.js static export (output: 'export'). There is no server, no API endpoint, no database, and no logging of image data. When you load an image, it exists only in your browser's memory as a Blob URL and a typed array. When you close the tab or load a new image, all data is released.
The AI model is downloaded from the official Hugging Face CDN over HTTPS and cached locally in the browser. No third party ever receives your image data. The only external network request is the one-time model download.
Because all processing happens entirely on-device, this tool is safe to use with sensitive images including personal photos, identity documents, medical imagery, and confidential product designs. There is no risk of your images appearing in training data, being stored in a cloud service, or being accessed by any third party.
Examples, Finishing Checks, and Common Mistakes
Examples
- Remove product photo backgrounds for e-commerce pages and documents.
- Clean up profile photos for thumbnails and social media assets.
- Cut out subjects for presentation slides and design mockups.
Cautions
- Hair, transparent objects, and thin edges may still need manual review.
- For commercial use, check the rights and license of the source image.
- Review edges on both light and dark backgrounds after removal.
Common Mistakes
- Publishing the automatic result without checking the edges.
- Starting from a low-resolution image and expecting clean outlines.
- Checking only on a white background and missing bright edge artifacts.