AI Models

This page provides an overview of the various AI models available within Fater, their intended uses, key characteristics, and specific input requirements. Understanding these models will help you choose the best tool for your creative tasks in the Image Editor.

Understanding Model Descriptions

For each model, you'll find information structured as follows:

  • User-Facing Name: The name you'll see in Fater's Model Selector, often followed by its classification (e.g., Medium [Generate/Basic]).

  • Internal ID: The system identifier for the model.

  • Category: The primary group the model belongs to (e.g., Generate, Edit, Upscale).

  • SubCategory: Further classification if applicable (e.g., Basic, Controlled, Special).

  • Control Type: The type of control input it uses if it's a "Controlled" model (e.g., Contour, Depth).

  • Core Purpose / When to Use: A summary of what the model is best for.

  • Key Characteristics:

    • Prompt Input: Whether it primarily uses a text prompt.

    • Negative Prompt Input: Whether it accepts a negative prompt.

    • Mask Usage: How it interacts with user-drawn masks (Required, Ignores, Optional, Generates its own).

    • Control Layer Usage: How it uses Edges or Depth maps from your Control Layers section or if it generates them on-the-fly.

    • Reference Image Input: Whether it accepts external images for style/content guidance.

    • Processed Area: Whether it processes the entire Generation Area or is confined to a mask.

  • Key Parameters (from UI): Lists the main configurable settings exposed in the model's control panel, along with their default values where specified in the UI code.

  • Strengths & Limitations: Specific advantages or things to be aware of.
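Taken together, these fields form a small, regular record. The sketch below is a hypothetical illustration of how one model entry might be represented — the class and field names are invented for this page, not taken from Fater's code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelDescription:
    """Hypothetical record mirroring the fields documented on this page."""
    user_facing_name: str               # e.g. "Medium [Generate/Basic]"
    internal_id: str                    # e.g. "ideogram-generate"
    category: str                       # "Generate", "Edit", or "Upscale"
    subcategory: Optional[str] = None   # "Basic", "Controlled", "Special"
    control_type: Optional[str] = None  # "Contour" or "Depth" for Controlled models
    mask_usage: str = "Ignores"         # "Required", "Ignores", "Optional", "Generates its own"

# Example entry built from the first model documented below:
medium_basic = ModelDescription(
    user_facing_name="Medium [Generate/Basic]",
    internal_id="ideogram-generate",
    category="Generate",
    subcategory="Basic",
)
```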


Generate Models

These models are primarily used for creating new visual content, either from scratch or by significantly transforming existing content with structural guidance.


Medium [Generate/Basic]

  • Internal ID: ideogram-generate

  • Category: Generate

  • SubCategory: Basic

  • Core Purpose / When to Use: For pure text-to-image generation when you need to create a new image or concept from scratch. Offers control over Ideogram-specific model versions, aspect ratio, styles, and color palettes.

  • Key Characteristics:

    • Prompt Input: Yes (primary input).

    • Negative Prompt Input: Yes.

    • Mask Usage: Ignores.

    • Control Layer Usage: Ignores.

    • Reference Image Input: No.

    • Processed Area: The Generation Area defines the output canvas, with dimensions influenced by Aspect Ratio.

  • Key Parameters (from UI):

    • Prompt: Your text description.

    • Negative Prompt: Describe what to exclude. (Default: empty)

    • Aspect Ratio: Select from various Ideogram-specific ratios (e.g., 'ASPECT_16_9', 'ASPECT_1_1'). (Default: 'ASPECT_1_1')

    • Model Version: 'V_2', 'V_2_TURBO', 'V_2A', 'V_2A_TURBO'. Selects the Ideogram model version. (Default: 'V_2')

    • Magic Prompt Option: 'AUTO', 'ON', 'OFF'. Controls automatic prompt enhancement. (Default: 'OFF')

    • Style Type: 'AUTO', 'GENERAL', 'REALISTIC', 'DESIGN', 'RENDER_3D', 'ANIME'. Selects the artistic style. (Default: 'REALISTIC')

    • Color Palette: Optional selection from predefined palettes (e.g., 'EMBER', 'FRESH'). (Default: None)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Simple, direct text-to-image generation with good control over Ideogram-specific features like model version, style, and aspect ratio.

  • Limitations: Not for editing existing images or controlled generation.
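As a concrete illustration, the UI defaults above can be collected into one request-style structure. This is a hedged sketch — the dictionary keys are guesses for illustration, not Fater's actual API fields:

```python
# Hypothetical request payload for Medium [Generate/Basic], filled with the
# UI defaults listed above. Key names are illustrative assumptions.
ideogram_generate_defaults = {
    "model": "ideogram-generate",
    "prompt": "a ceramic vase on a marble table",  # your text description
    "negative_prompt": "",             # default: empty
    "aspect_ratio": "ASPECT_1_1",      # default
    "model_version": "V_2",            # default
    "magic_prompt_option": "OFF",      # default
    "style_type": "REALISTIC",         # default
    "color_palette": None,             # default: no palette
    "seed": -1,                        # -1 = random
}
```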


Turbo [Generate/Basic]

  • Internal ID: flux-1.1-pro-ultra

  • Category: Generate

  • SubCategory: Basic

  • Core Purpose / When to Use: When you want to generate fresh, realistic images. Excellent for exploring variations of a product or concept if a reference image is provided. Ignores canvas mask and content.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Ignores.

    • Control Layer Usage: Ignores.

    • Reference Image Input: Yes (supports 0 to 1 external reference image).

    • Processed Area: Processes the entire Generation Area, with output dimensions influenced by Aspect Ratio.

  • Key Parameters (from UI):

    • Prompt: Your text description.

    • Ref Image Strength: 0-1. Controls the blend between the text prompt and the influence of the (optional) reference image. (Default: 0.1)

    • Aspect Ratio: Select from various ratios (e.g., '16:9', '1:1', '9:16'). Defines the output image's proportions. (Default: '1:1')

    • Raw: Normal/Raw. If "Raw," generates less processed, more natural-looking images. (Default: Raw/true)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Generates realistic images; great for variations based on a reference image; offers explicit aspect ratio control.

  • Limitations: Does not use canvas content or masks for direct editing. Its "Basic" subcategory might be misleading given its reference image and aspect ratio capabilities.
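One way to picture Ref Image Strength is as a blend weight between prompt-driven and reference-driven guidance: at 0 the reference is ignored, at 1 it dominates. The sketch below is purely conceptual — the real model does not literally average arrays like this:

```python
import numpy as np

def blend_guidance(prompt_feat: np.ndarray, ref_feat: np.ndarray,
                   ref_strength: float = 0.1) -> np.ndarray:
    """Conceptual linear blend between prompt and reference influence.
    The default of 0.1 keeps the text prompt firmly in charge."""
    return (1.0 - ref_strength) * prompt_feat + ref_strength * ref_feat

p = np.array([1.0, 0.0])  # stands in for "prompt direction"
r = np.array([0.0, 1.0])  # stands in for "reference direction"
print(blend_guidance(p, r, 0.1))  # mostly prompt-driven
```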


Medium [Generate/Controlled/Contour]

  • Internal ID: sdxl-canny

  • Category: Generate

  • SubCategory: Controlled

  • Control Type: Contour

  • Core Purpose / When to Use: For generating images guided by the edge structure of your current canvas content. Useful when you want the output to loosely follow existing forms without preparing a separate Edges Control Layer. Uses an on-the-fly Canny edge detection.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: Yes.

    • Mask Usage: Ignores.

    • Control Layer Usage: Ignores app's designated Edges Control Layers. Instead, it creates an estimated Edges map (Contour) on-the-fly from the visible content within the Generation Area.

    • Reference Image Input: No.

    • Processed Area: Processes the entire Generation Area.

  • Key Parameters (from UI):

    • Prompt: Your text description.

    • Negative Prompt: Text description for qualities to avoid. (Default: 'low quality, bad quality, sketches')

    • Steps: 1-500. Number of denoising steps. (Default: 50)

    • Strength (ControlNet Strength): 0-1. Controls how strongly the on-the-fly Canny edges guide the generation. (Default: 0.5)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Offers image-to-image structural guidance based on automatically detected canvas edges.

  • Limitations: Generated edge map is an estimation and may be less precise than user-provided Control Layers; generally lower quality output compared to models using explicit, curated Control Layers.
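The on-the-fly Contour map works along the lines of classic edge detection: estimate intensity gradients and keep the strong ones. The sketch below uses a plain gradient magnitude as a simplified stand-in for full Canny (which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding) — it is not Fater's implementation:

```python
import numpy as np

def rough_edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude edge estimate: finite-difference gradient magnitude,
    normalized and thresholded to a binary map."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()
    return (mag > threshold).astype(np.uint8)

# A step image: edges should appear only along the vertical boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = rough_edge_map(img)
```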


High [Generate/Controlled/Contour]

  • Internal ID: flux-canny-dev

  • Category: Generate

  • SubCategory: Controlled

  • Control Type: Contour

  • Core Purpose / When to Use: Similar to Medium [Generate/Controlled/Contour], for generating images guided by on-the-fly edge detection from canvas content. The "High" designation may imply different performance or quality characteristics.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Ignores.

    • Control Layer Usage: Ignores app's designated Edges Control Layers. Creates an estimated Edges map (Contour) on-the-fly from visible content in the Generation Area.

    • Reference Image Input: No.

    • Processed Area: Processes the entire Generation Area.

  • Key Parameters (from UI - CannyControlPanel):

    • Prompt: Your text description.

    • Steps: 1-50 (Min steps 1 for this model). Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Provides structural guidance based on immediate canvas content.

  • Limitations: On-the-fly edge detection might be less precise than dedicated Control Layers.


Turbo [Generate/Controlled/Contour]

  • Internal ID: flux-canny-pro

  • Category: Generate

  • SubCategory: Controlled

  • Control Type: Contour

  • Core Purpose / When to Use: Best for high-quality, structurally controlled renders guided by a user-provided Edges map (set as a Control Layer with "edges" type). Ideal when you have a clear outline or sketch defining the desired composition.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Ignores.

    • Control Layer Usage: Uses the app's designated Edges Control Layer (Contour) for guidance if one exists; if it doesn't, the model generates an estimated map on the fly.

    • Reference Image Input: No.

    • Processed Area: Processes the entire Generation Area.

  • Key Parameters (from UI - CannyControlPanel):

    • Prompt: Your text description.

    • Steps: 15-50. Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Delivers precise structural adherence to your Edges map.

  • Limitations: Quality of output heavily depends on the quality and clarity of the provided Edges Control Layer.


High [Generate/Controlled/Depth]

  • Internal ID: flux-depth-dev

  • Category: Generate

  • SubCategory: Controlled

  • Control Type: Depth

  • Core Purpose / When to Use: For generating images with a sense of depth derived from the current canvas content. Useful for quick 3D-like compositions without preparing a separate Depth Control Layer.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Ignores.

    • Control Layer Usage: Ignores app's designated Depth Control Layers. Creates an estimated Depth map on-the-fly from visible content in the Generation Area.

    • Reference Image Input: No.

    • Processed Area: Processes the entire Generation Area.

  • Key Parameters (from UI - DepthControlPanel):

    • Prompt: Your text description.

    • Steps: 1-50 (Min steps 1 for this model). Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Adds depth perception based on immediate canvas content.

  • Limitations: On-the-fly depth estimation might be less accurate or controllable than dedicated Depth Control Layers.


Turbo [Generate/Controlled/Depth]

  • Internal ID: flux-depth-pro

  • Category: Generate

  • SubCategory: Controlled

  • Control Type: Depth

  • Core Purpose / When to Use: Best model for high-quality, depth-aware controlled renders. Use when you have a Depth map (set as a Control Layer with "depth" type) to guide the 3D spatial arrangement of the scene. Particularly effective when the control image defines volume clearly, even without fine visual details.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Ignores.

    • Control Layer Usage: Uses the app's designated Depth Control Layer for guidance if one exists; if it doesn't, the model generates an estimated map on the fly.

    • Reference Image Input: No.

    • Processed Area: Processes the entire Generation Area.

  • Key Parameters (from UI - DepthControlPanel):

    • Prompt: Your text description.

    • Steps: 15-50. Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Excellent for creating scenes with strong 3D perspective and accurate spatial relationships based on your Depth map.

  • Limitations: Output realism is tied to the quality and accuracy of the provided Depth Control Layer.


Relight [Generate/Special]

  • Internal ID: iclight

  • Category: Generate

  • SubCategory: Special

  • Core Purpose / When to Use: For advanced use cases involving relighting. This model can rework individual image layers, redrawing them with different lighting conditions.

  • Key Characteristics:

    • Prompt Input: May use prompts to describe lighting.

    • Negative Prompt Input: Not specified.

    • Mask Usage: Not typically used with the main canvas mask; operates on specified layers.

    • Control Layer Usage: No.

    • Reference Image Input: Not specified.

    • Processed Area: Operates on the content of specified layers.

  • Key Parameters (from UI): (UI Panel not provided for this model)

  • Strengths: Unique capability to relight elements non-destructively on their own layers.

  • Limitations: Very niche; understanding how it interprets layers and lighting prompts is key. UI parameters are currently undefined.


Reimagine [Generate/Special]

  • Internal ID: gpt-image-1-edit

  • Category: Generate

  • SubCategory: Special

  • Core Purpose / When to Use: An image-to-image model that subtly redraws the entire image (within the Generation Area) based on the original and a prompt. It can incorporate multiple additional reference images. While categorized as "Generate," its parameters suggest strong editing capabilities.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Takes the mask into account as a suggestion or area of focus (controlled by "Destroy Fill Area Behind Mask" and "Mask Alpha Radius"), but may alter unmasked areas as it processes the entire Generation Area.

    • Control Layer Usage: No.

    • Reference Image Input: Yes (supports 0 to 9 additional reference images).

    • Processed Area: Processes the entire Generation Area, with mask influencing the focus of edits.

  • Key Parameters (from UI):

    • Prompt: Your text description guiding the reimagining/editing.

    • Destroy Fill Area Behind Mask: Yes/No. If Yes, ignores original pixels under the mask. (Default: No)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders if a mask is used. (Default: 16)

    • Quality: 'auto', 'high', 'medium', 'low'. Selects the quality level; lower is faster. (Default: 'auto')

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Good for overall subtle changes, style adjustments, or incorporating elements from reference images across the scene. Supports multiple references.

  • Limitations: Less precise for targeted inpainting compared to dedicated "Edit" models, as it tends to affect the whole image. The "Generate" classification might be confusing given its edit-focused parameters.


Edit Models

These models are designed for modifying or enhancing existing images, often within a masked area (inpainting/outpainting) or based on specific instructions.


Lite [Edit/Basic]

  • Internal ID: flux-fill-pro

  • Category: Edit

  • SubCategory: Basic

  • Core Purpose / When to Use: A reliable inpainting model for editing images where a precise mask defines the area to be modified by the prompt. Supports a specific list of LoRAs.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Required (precise mask).

    • Control Layer Usage: No.

    • Reference Image Input: No (but supports LoRA selection as "Custom Dataset").

    • Processed Area: Operates within the masked area, using surrounding context.

  • Key Parameters (from UI):

    • Prompt: Your text description for the masked area.

    • Steps: 1-50. Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Custom Dataset (LoRA Model): 'None', 'Printemps', 'Aigle', 'Wfr-2', 'table dion-sade'. (Default: 'None')

    • LoRA Strength: -1 to 3. Controls influence of the selected LoRA. (Appears if LoRA is selected). (Default: 1)

    • Destroy Fill Area Behind Mask: Yes/No. If Yes, ignores original pixels under the mask. (Default: Yes)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 16)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Solid performance for standard inpainting tasks with specific LoRA support.

  • Limitations: Effectiveness depends on mask quality and prompt clarity. LoRA list is fixed for this model.
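Two parameters that recur across the Edit models — Destroy Fill Area Behind Mask and Mask Alpha Radius — can be pictured as a pre-step and a post-step around the model call. The numpy sketch below is a conceptual illustration under that assumption, not Fater's actual pipeline:

```python
import numpy as np

def prepare_input(image: np.ndarray, mask: np.ndarray,
                  destroy_fill: bool = True) -> np.ndarray:
    """Pre-step: with Destroy Fill on, pixels under the mask are zeroed,
    so the original content there cannot influence the result."""
    if destroy_fill:
        return image * (1 - mask)
    return image

def feather_alpha(mask: np.ndarray, radius: int = 16) -> np.ndarray:
    """Post-step: soften the mask edge so the generated result blends into
    its surroundings; repeated 3-tap box blurs approximate the feathering
    controlled by Mask Alpha Radius."""
    alpha = mask.astype(float)
    kernel = np.array([0.25, 0.5, 0.25])
    blur = lambda row: np.convolve(row, kernel, mode="same")
    for _ in range(radius):
        alpha = np.apply_along_axis(blur, 0, alpha)  # vertical pass
        alpha = np.apply_along_axis(blur, 1, alpha)  # horizontal pass
    return alpha

# Demo: a 4x4 image with a 2x2 mask in the middle.
img = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
hole = prepare_input(img, mask)       # original pixels removed under mask
soft = feather_alpha(mask, radius=2)  # soft-edged blend alpha
```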


Medium [Edit/Basic]

  • Internal ID: flux-fill-dev

  • Category: Edit

  • SubCategory: Basic

  • Core Purpose / When to Use: Inpainting model similar to Lite [Edit/Basic], but with support for a dynamic list of LoRA (Low-Rank Adaptation) finetunes (type 'FluxDev'), allowing for more specialized stylistic outputs.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Required (precise mask).

    • Control Layer Usage: No.

    • Reference Image Input: No (but supports LoRA selection as "Custom Dataset").

    • Processed Area: Operates within the masked area.

    • Special Feature: Supports a dynamic list of LoRAs (type 'FluxDev') for finetuned styles.

  • Key Parameters (from UI):

    • Prompt: Your text description for the masked area.

    • Steps: 1-50. Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Custom Dataset (LoRA Model): 'None' or selected 'FluxDev' type LoRA. (Default: 'None')

    • LoRA Strength: -1 to 3. Controls influence of the selected LoRA. (Appears if LoRA is selected). (Default: 1)

    • Destroy Fill Area Behind Mask: Yes/No. If Yes, ignores original pixels under the mask. (Default: Yes)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 16)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Flexible inpainting with potential for highly specific styles via dynamic LoRAs.

  • Limitations: LoRA availability and selection process are key.


High [Edit/Basic]

  • Internal ID: ideogram-edit

  • Category: Edit

  • SubCategory: Basic

  • Core Purpose / When to Use: A good standard model for general inpainting tasks using Ideogram's 'V_2' engine. Reliable for filling or modifying masked areas, with style selection.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in this specific panel).

    • Mask Usage: Required.

    • Control Layer Usage: No.

    • Reference Image Input: No.

    • Processed Area: Operates within the masked area (plus Mask Extra Radius); can sometimes draw a few pixels beyond the mask boundary for better blending.

  • Key Parameters (from UI):

    • Prompt: Your text description for the masked area.

    • Style Type: 'AUTO', 'GENERAL', 'REALISTIC', 'DESIGN', 'RENDER_3D', 'ANIME'. Selects the artistic style for the inpainting. (Default: 'REALISTIC')

    • Destroy Fill Area Behind Mask: Yes/No. If Yes, ignores original pixels under the mask. (Default: Yes)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 6)

    • Mask Extra Radius: 0-24. Inflates the mask before cutting the result, helping with blending. (Default: 16)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

    • (Model Version): Fixed to 'V_2'.

    • (Magic Prompt Option): Internal parameter, not user-configurable. (Default: 'OFF')

  • Strengths: Dependable for common editing needs with Ideogram 'V_2' model and style controls.

  • Limitations: Does not support different Ideogram model versions or Magic Prompt through its UI. No negative prompt input in this panel. May be superseded by newer models (Ultra [Edit/Basic]) for features like reference images or more advanced Ideogram engine versions.


Ultra [Edit/Basic]

  • Internal ID: ideogram3-edit

  • Category: Edit

  • SubCategory: Basic

  • Core Purpose / When to Use: The top-tier model for achieving ultra-realistic edits within a semi-precisely masked area, especially when you want to guide the generation with an external reference image. Supports various style outputs.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Required (semi-precise mask is often sufficient).

    • Control Layer Usage: No.

    • Reference Image Input: Yes (supports 0 to 4 external reference images for visual guidance).

    • Processed Area: Operates within the masked area (plus Mask Extra Radius); can draw a few pixels around the mask for seamless integration.

  • Key Parameters (from UI):

    • Prompt: Your text description for the masked area.

    • Style Type: 'AUTO', 'GENERAL', 'REALISTIC', 'DESIGN', 'RENDER_3D', 'ANIME'. Selects the artistic style. (Default: 'REALISTIC')

    • Destroy Fill Area Behind Mask: Yes/No. If Yes, ignores original pixels under the mask. (Default: Yes)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 6)

    • Mask Extra Radius: 0-24. Inflates the mask before cutting the result, helping with blending. (Default: 16)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

    • (Magic Prompt Option): Internal parameter, not user-configurable in this panel. (Default: 'OFF')

  • Strengths: Produces ultra-realistic results; excellent with reference images; offers style selection.

  • Limitations: Demands good quality inputs (image, mask, reference) for best results. Its "Basic" subcategory might be misleading given its reference image and style capabilities.


High [Edit/Controlled/Contour]

  • Internal ID: flux-fill-canny

  • Category: Edit

  • SubCategory: Controlled

  • Control Type: Contour

  • Core Purpose / When to Use: For inpainting tasks where the generated content within the mask needs to adhere to a specific structure defined by an Edges Control Layer. Only use if edge guidance is critical for the edit.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Required (precise mask).

    • Control Layer Usage: Requires and uses the app's designated Edges Control Layer (Contour) to influence the infilled content.

    • Reference Image Input: No (but supports LoRA selection as "Custom Dataset").

    • Processed Area: Operates within the masked area, guided by the Control Layer.

  • Key Parameters (from UI):

    • Prompt: Your text description for the masked area.

    • Steps: 1-100. Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Custom Dataset (LoRA Model): 'None' or selected LoRA. Allows use of finetuned models for specific styles. (Default: 'None')

    • Masked Area Noise Level (Strength): 0.01-1. Controls how much original masked content influences the result. 1 = original content ignored. (Default: 1)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 10)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Allows structural control over inpainted content using Edges maps and supports stylistic LoRAs.

  • Limitations: Requires a well-defined mask and a relevant Edges Control Layer.


High [Edit/Controlled/Depth]

  • Internal ID: flux-fill-depth

  • Category: Edit

  • SubCategory: Controlled

  • Control Type: Depth

  • Core Purpose / When to Use: For inpainting tasks where the generated content within the mask needs to respect spatial relationships defined by a Depth Control Layer. Use if 3D form is important for the inpainted element.

  • Key Characteristics:

    • Prompt Input: Yes.

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Required (precise mask).

    • Control Layer Usage: Requires and uses the app's designated Depth Control Layer to influence the infilled content.

    • Reference Image Input: No.

    • Processed Area: Operates within the masked area, guided by the Control Layer.

  • Key Parameters (from UI):

    • Prompt: Your text description for the masked area.

    • Steps: 1-50. Controls detail. (Default: 40)

    • Guidance: 0-100. Controls prompt adherence. (Default: 30)

    • Strength (ControlNet Strength): 0-1. Controls how strongly the Depth Control Layer influences the generation. (Default: 1.0)

    • Destroy Fill Area Behind Mask: Yes/No. If Yes, ignores original pixels under the mask. (Default: No)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 10)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Ensures inpainted elements align with the scene's depth using a Depth Control Layer.

  • Limitations: Needs an accurate mask and a meaningful Depth Control Layer. The Destroy Fill Area Behind Mask and Strength parameters interact to control how original pixels and the depth map influence the result.


Reference [Edit/Special]

  • Internal ID: fx-fill-ref

  • Category: Edit

  • SubCategory: Special

  • Core Purpose / When to Use: Specifically designed to inpaint a product or a distinct object from a reference image directly into the masked area of your scene. Generation is driven entirely by the reference image.

  • Key Characteristics:

    • Prompt Input: No (not user-configurable; model uses an empty prompt internally).

    • Negative Prompt Input: No.

    • Mask Usage: Required (precise mask defining placement and general shape).

    • Control Layer Usage: No.

    • Reference Image Input: Yes (requires exactly 1 external image; this is the primary input for the content to be filled). The model attempts to fit and transform the entire reference image into the mask area.

    • Processed Area: The masked area.

  • Key Parameters (from UI):

    • Steps: 1-50. Controls detail. (Default: 25)

    • Guidance: 0-100. May influence adherence to reference image features. (Default: 30)

    • Strength (Image-to-Image Strength): 0-1. Primary control for how much the reference image transforms the masked area. (Default: 1.0)

    • Masked Area Noise Level: 0.01-1. Controls how much original masked content (if any) influences the result. 1 = original content ignored. (Default: 1.0)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 5)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

    • (Destroy Fill Area Behind Mask): Defaults to false (original masked pixels can influence if Noise Level < 1).

  • Strengths: Excellent for inserting specific objects from well-defined reference images without prompt engineering.

  • Limitations: Requires a clean, well-focused reference image of the subject and a precise mask. Success depends heavily on the compatibility of the reference object's shape with the mask and the Strength setting.


Reference 2 [Edit/Special]

  • Internal ID: flux-fill-ace

  • Category: Edit

  • SubCategory: Special

  • Core Purpose / When to Use: A high-quality inpainting model that can take an external image reference. It also offers options for on-the-fly structural guidance (Canny/Depth) derived from within the masked area context. Use for inpainting specific products or elements when other models fail, but note it can be very slow.

  • Key Characteristics:

    • Prompt Input: Yes (supports direct instructions).

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Required (precise mask).

    • Control Layer Usage: No (uses internal on-the-fly preprocessing selected via "Preprocessed Type").

    • Reference Image Input: Yes (supports 0 to 1 external reference image).

    • Processed Area: Operates within the masked area.

  • Key Parameters (from UI):

    • Target Prompt: Text description or instructions for the output. (Default: 'Restore a partial image from {image} that aligns with this ')

    • Preprocessed Type: 'None', 'Canny', 'Depth', 'All'. Selects structural guidance type applied within the mask. (Default: 'Canny')

    • Steps: 1-100. Controls detail. (Default: 50)

    • Guidance: 0-100. Controls prompt adherence. (Default: 50)

    • Mask Alpha Radius: 0-40. Controls transparency feathering at result borders. (Default: 5)

    • Reference size (Keep Pixels Rate): 0-1. Percentage of reference image height to fit in the target masked area. (Default: 0.7)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

    • (Destroy Fill Area Behind Mask): Defaults to true (original masked pixels ignored).

  • Strengths: Can produce good results for specific product inpainting using references, with added internal structural guidance options.

  • Limitations: Very slow generation time. Success depends on the interplay of prompt, reference image, and selected preprocessing type.
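The Reference size (Keep Pixels Rate) parameter can be read as a simple proportion: what fraction of the masked area's height the reference image should occupy. The arithmetic sketch below assumes aspect-preserving scaling — the exact resizing logic is an assumption, not documented behavior:

```python
def reference_target_size(ref_w: int, ref_h: int,
                          mask_h: int, keep_pixels_rate: float = 0.7):
    """Scale the reference so its height fills keep_pixels_rate of the
    mask height, preserving aspect ratio. Illustrative only."""
    target_h = round(mask_h * keep_pixels_rate)
    scale = target_h / ref_h
    return round(ref_w * scale), target_h

# A 400x800 reference placed into a 1000px-tall mask at the 0.7 default:
print(reference_target_size(400, 800, 1000))  # (350, 700)
```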


Instructions [Edit/Special]

  • Internal ID: hidream-E1

  • Category: Edit

  • SubCategory: Special

  • Core Purpose / When to Use: For editing an image or adding new elements using natural language instructions in the prompt, especially when creating a precise manual mask is difficult or tedious. This model generates its own internal mask based on your prompt and the image content. Efficient for changing the color of something without altering anything else.

  • Key Characteristics:

    • Prompt Input: Yes (crucial for providing direct instructions on what to change and where).

    • Negative Prompt Input: No (not exposed in its panel).

    • Mask Usage: Ignores any user-drawn mask on the canvas; it generates its own mask on-the-fly based on the prompt.

    • Control Layer Usage: No.

    • Reference Image Input: No.

    • Processed Area: Processes the Generation Area and intelligently determines where to apply edits based on its interpretation of the prompt and the internal mask it generates.

    • Aspect Ratio: Performs noticeably better with a square Generation Area.

  • Key Parameters (from UI):

    • Prompt: Your direct instructions on what to change in the image.

    • Steps: 4-80. Controls the coherence of the generated image. (Default: 35)

    • Guidance (Prompt Guidance): 1.1-10. Controls how closely the result matches your prompt instructions. (Default: 2.0)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

    • (Image Guidance): Internal parameter, not user-configurable. (Default: 1)

  • Strengths: Very convenient for edits where precise manual masking is challenging; allows for more intuitive instruction-based editing.

  • Limitations: Less direct control over the exact area of modification compared to mask-based inpainting. Success depends on the model's ability to accurately interpret the prompt and segment the image correctly.


Upscale Models

These models are used to increase the resolution of your images, enhance details, and improve overall quality.


Medium [Upscale]

  • Internal ID: clarity-upscaler

  • Category: Upscale

  • Core Purpose / When to Use: A good general-purpose upscaler for enhancing image resolution and overall quality. It can also make further enhancements or modifications based on a text prompt.

  • Key Characteristics:

    • Prompt Input: Yes (optional, can be used to guide enhancements or style during upscaling).

    • Negative Prompt Input: Yes.

    • Mask Usage: Supports Mask Alpha Radius for blending if a mask is present, though typically used for full image upscaling.

    • Control Layer Usage: No.

    • Reference Image Input: No.

    • Processed Area: Processes the content within the Generation Area.

  • Key Parameters (from UI):

    • Prompt: Your text description for desired qualities. (Default: 'masterpiece, best quality, highres')

    • Negative Prompt: Text description for qualities to avoid. (Default: '(worst quality, low quality, normal quality:2)')

    • Steps: 4-100. Controls coherence of details. (Default: 18)

    • Creativity: 0-1. How much new detail can be invented. (Default: 0.3)

    • Resemblance: 0-3. How much original shapes are preserved. (Default: 1.5)

    • Scale: 1-10. Upscaling factor. (Default: 2)

    • Output Mega Pixels: 0.1-64. Target output resolution in millions of pixels. (Default: 3.0)

    • Sharpen: 0-10. Additional sharpening. (Default: 0)

    • Dynamic (HDR): 1-50. Influences High Dynamic Range. (Default: 4)

    • Tile Size: 16-256. Internal processing tile size. (Default: 256)

    • Checkpoint: 'Juggernaut' or 'Epic Realism'. Selects the base upscaling model. (Default: 'Juggernaut')

    • More details: 0-1. Higher values attempt to add more fine details. (Default: 0.5)

    • Hand fix: Enabled/Disabled. Activates a module for improving hand rendering. (Default: Disabled)

    • Mask Alpha Radius: 0-64. Controls transparency feathering at result borders if a mask is used. (Default: 24)

    • Seed: Controls randomness (typically -1 for random). (Default: -1)

  • Strengths: Versatile upscaling with optional prompt-based refinement and detailed controls.

  • Limitations: Can degrade or distort text within images; may struggle with accurately rendering people at very low source resolutions.
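Scale and Output Mega Pixels interact: the scale factor sets the requested size, while the megapixel value caps the final resolution. A hedged sketch of that interaction — the actual precedence between the two settings in Fater is an assumption:

```python
import math

def upscale_dimensions(w: int, h: int, scale: float = 2.0,
                       max_megapixels: float = 3.0):
    """Apply the upscale factor, then shrink uniformly if the result
    would exceed the megapixel budget. Illustrative only."""
    out_w, out_h = w * scale, h * scale
    budget = max_megapixels * 1_000_000
    if out_w * out_h > budget:
        shrink = math.sqrt(budget / (out_w * out_h))
        out_w, out_h = out_w * shrink, out_h * shrink
    return round(out_w), round(out_h)

# A 500x500 source at the 2x default is 1 MP, well under the 3 MP cap:
print(upscale_dimensions(500, 500))  # (1000, 1000)
# A 1024x1024 source at 2x would be ~4.2 MP, so it gets capped near 3 MP.
```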


High [Upscale]

  • Internal ID: topaz

  • Category: Upscale

  • Core Purpose / When to Use: The preferred model for high-fidelity upscaling when the goal is to enhance resolution and correct image issues without altering the original content or style. Offers detailed control over enhancement types, upscaling factors, and face processing.

  • Key Characteristics:

    • Prompt Input: No (does not use text prompts).

    • Negative Prompt Input: No.

    • Mask Usage: Not typically used for full image upscaling.

    • Control Layer Usage: No.

    • Reference Image Input: No.

    • Processed Area: Processes the content within the Generation Area.

  • Key Parameters (from UI):

    • Enhance Model: 'Standard V2', 'Low Resolution V2', 'CGI', 'High Fidelity V2', 'Text Refine'. Selects the core enhancement algorithm. (Default: 'Standard V2')

    • Upscale Factor: 'None', '2x', '4x', '6x'. Determines the upscaling multiplier. (Default: '4x')

    • Subject Detection: 'None', 'All', 'Foreground', 'Background'. Mode for detecting subjects for differential processing. (Default: 'None')

    • Output Mega Pixels: 0.1-64. Target output resolution in millions of pixels. (Default: 32.0)

    • Face Enhancement: On/Off. Enables specialized face processing. (Default: Off)

    • Face Enhancement Strength: 0-1. (Appears if Face Enhancement is On). Controls sharpness of enhanced faces. (Default: 0.8)

    • Face Enhancement Creativity: 0-1. (Appears if Face Enhancement is On). Level of creative change for face enhancement. (Default: 0.0)

    • Seed: Controls randomness (typically -1 for random, though less impactful for deterministic upscalers). (Default: -1)

  • Strengths: Excellent for clean, artifact-free upscaling that preserves original image integrity, with granular control over the enhancement process.

  • Limitations: No creative control via prompting; purely focuses on enhancement and resolution increase based on selected parameters.


This directory will be updated as new models are added or existing ones are refined. Always refer to the Left Sidebar in the Fater Editor for the most current list of available models and their specific parameters.
