Specialized Models

Beyond standard text-to-image, inpainting, controlled generation, and upscaling, Fater's Image Editor includes a selection of Specialized AI Models. These models often serve unique purposes, have specific input requirements (such as multiple reference images or particular layer setups), or produce distinct artistic effects.

You can typically find these models under the "Generate" or "Edit" categories in the Model Selector, often within a "Special" subcategory.


Examples of Specialized Model Capabilities

(Note: The exact models and their capabilities are detailed in the AI Model Directory. Below are conceptual examples of what specialized models might achieve.)

  1. Relighting Scenes:

    • Purpose: To change the lighting conditions of an existing image or add new light sources realistically.

    • How it works:

      • You provide your base image layers.

      • The specialized relighting model might take additional inputs like a layer defining light positions/colors, or a text prompt describing the desired lighting ("golden hour sunset," "studio softbox lighting").

      • The model generates a new version of your image with the updated lighting.

    • Key Parameters: Might include controls for light intensity, color, direction, or type of light source (a conceptual parameter sketch follows this list).

  2. Reference-Based Editing / Style Transfer:

    • Purpose: To apply the artistic style of one or more reference images to your target image, or to heavily influence the content generation based on visual examples.

    • How it works:

      • You provide your base image/layers.

      • You upload one or more External Reference Images using the image input in the Left Sidebar.

      • A text prompt might further guide the content or the degree of stylization.

      • The AI model (e.g., a model from the "Edit/Special" category) attempts to blend the style/content of the reference images with your base image and prompt.

    • Key Parameters: Could include "Reference Strength," "Style Weight," or controls for how different aspects of the reference (color, texture, form) are applied (see the style-transfer sketch after this list).

  3. Instruction-Guided Image Manipulation:

    • Purpose: To perform complex image edits based on natural language instructions rather than just a descriptive prompt.

    • How it works:

      • You provide your base image.

      • In the prompt, you give an instruction like "make the door red," "add a hat to the person on the left," or "change the background to a beach scene."

      • The specialized model (e.g., a model from "Generate/Special" or "Edit/Special") interprets the instruction and attempts to apply the edit.

    • Key Parameters: Often simpler, relying heavily on the quality of the instruction. These models ignore the mask but respect the generation area. A conceptual sketch of this technique follows this list.

  4. Other Unique Effects:

    • This category can include models for specific artistic effects, abstract generations, or other novel image manipulations that don't fit neatly into standard categories.
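To make the relighting example (1) more concrete, here is a purely illustrative sketch of the kind of request such a model consumes. The field names (light direction, color, intensity) are hypothetical and do not correspond to Fater's actual settings; in the editor you adjust the equivalent controls in the Left Sidebar.

```python
# Illustrative only: these field names are hypothetical, not Fater's API.
# They mirror the kinds of controls a relighting model typically exposes.
relight_job = {
    "base_layers": ["background.png", "subject.png"],   # your prepared layers
    "prompt": "golden hour sunset, warm rim light from the left",
    "light": {
        "direction": "left",    # hypothetical: key-light direction
        "color": "#FFC58A",     # hypothetical: warm light colour
        "intensity": 0.75,      # hypothetical: 0.0 (subtle) to 1.0 (strong)
    },
    "seed": 42,                 # fixing the seed makes lighting variants comparable
}
print(relight_job)
```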
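Reference-based editing (2) works much like open-source image-prompt adapters. Purely as an illustration of the underlying idea, and not of Fater's implementation (the checkpoints named below are just examples), here is a sketch using the diffusers library's IP-Adapter, where the adapter scale plays roughly the role of a "Reference Strength" control:

```python
# Illustration only: this is the open-source diffusers IP-Adapter workflow,
# not Fater's implementation; model names below are examples.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
pipe.set_ip_adapter_scale(0.6)  # roughly analogous to a "Reference Strength" slider

style_reference = load_image("reference_style.png")  # your external reference image
result = pipe(
    prompt="a lighthouse on a cliff at dusk",
    ip_adapter_image=style_reference,  # the reference steers colour, texture, form
    num_inference_steps=30,
).images[0]
result.save("stylized.png")
```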
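Instruction-guided manipulation (3) belongs to the same family of techniques popularized by open-source instruction-editing models. As a conceptual sketch only (Fater's own models are selected and run through the UI and may differ), here is how the idea looks with the public InstructPix2Pix pipeline from diffusers:

```python
# Conceptual sketch using the open-source InstructPix2Pix pipeline; Fater's
# instruction-guided models are accessed through the editor, not this code.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

base = load_image("house.png")      # your base image
edited = pipe(
    "make the door red",            # the natural-language instruction
    image=base,
    num_inference_steps=20,
    image_guidance_scale=1.5,       # how closely the result sticks to the original
    guidance_scale=7.5,             # how strongly to follow the instruction
).images[0]
edited.save("house_red_door.png")
```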


General Workflow for Specialized Models

  1. Prepare Your Inputs: This is crucial. Ensure you have the correct base image layers, reference images, masks, or control layers as required by the specific specialized model.

  2. Select the Specialized Model: In the Model Selector, navigate to the "Generate" or "Edit" category and then, typically, the "Special" subcategory to find the model.

  3. Configure Model-Specific Parameters: The Left Sidebar will display parameters unique to this specialized model. Carefully review and adjust them.

  4. Write Your Prompt / Instructions: Provide the necessary text input. For instruction-guided models, phrase your request clearly.

  5. Set Generation Area & Seed.

  6. Initiate Generation: Click the Generate button (✨) in the Floating Prompt Area.

  7. Review Results: The output will appear as a new layer and in the Generation Task List.


Always consult the AI Model Directory for detailed information on each specialized model, its specific input requirements, parameters, and expected output. Experimentation is key to understanding their unique capabilities.
