Image-to-Image & Inpainting

Among the most powerful capabilities of Fater's Image Editor are Image-to-Image transformation and Inpainting. These let you take an existing image (or parts of it), define an area with a mask, and then use AI with a text prompt to modify, replace, or fill in that specific area.

This is ideal for tasks like object removal, adding new elements seamlessly, changing textures or materials, or retouching specific parts of an image.


Core Workflow for Inpainting & Image Editing

  1. Prepare Your Base Image & Mask:

    • Load/Create Base Image and create a precise Mask over the area to change.

    • Adjust the Generation Area Bounding Box to tightly fit your mask and relevant surrounding context (including light sources if possible).

  2. Select an "Edit" Category AI Model:

    • Use the ModelSelector to choose an appropriate "Edit" model.

  3. Configure Model-Specific Parameters:

    • Adjust Prompt, Steps, Guidance, Seed, Style Type, Destroy Fill Area Behind Mask, Mask Alpha Radius, Mask Extra Radius, etc.

  4. Write Your Editing Prompt:

    • In the Floating Prompt Area, describe the desired content for the masked region.

  5. Initiate Generation:

    • Click the Generate button (✨).

  6. Review and Integrate Results:

    • The first result is added as a new layer. Use the Generation Task List to swap results.
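
Conceptually, each generation follows the same pattern: the canvas is cropped to the Generation Area Bounding Box, the masked region inside that crop is regenerated from your prompt and parameters, and the result comes back as a new layer. The sketch below illustrates that flow with the open-source diffusers and Pillow libraries; the file names, bounding box, model ID, and parameter values are placeholder assumptions, not Fater's internal implementation.

```python
# Illustrative sketch of the crop -> inpaint -> new-layer flow (not Fater's internal code).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder inputs: the full canvas, a white-on-black mask, and the Generation Area Bounding Box.
canvas = Image.open("canvas.png").convert("RGB")
mask = Image.open("mask.png").convert("L")   # white = area to regenerate
bbox = (256, 128, 768, 640)                  # (left, top, right, bottom)

# 1. Crop image and mask to the bounding box so the model only "sees" the relevant context.
crop = canvas.crop(bbox).resize((512, 512))
mask_crop = mask.crop(bbox).resize((512, 512))

# 2. Run an "Edit"-style inpainting model with the text prompt.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(
    prompt="a weathered wooden door, soft afternoon light",
    image=crop,
    mask_image=mask_crop,
    num_inference_steps=30,                              # "Steps"
    guidance_scale=7.5,                                  # "Guidance"
    generator=torch.Generator("cuda").manual_seed(42),   # "Seed"
).images[0]

# 3. Paste the result back at the bounding box as a new layer, leaving the original untouched.
new_layer = canvas.copy()
new_layer.paste(result.resize((bbox[2] - bbox[0], bbox[3] - bbox[1])), bbox[:2])
new_layer.save("generated_layer.png")
```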


Advanced Inpainting Techniques & Tricks

Beyond the basic workflow, you can use these creative techniques to achieve more complex or controlled results:

1. Off-Mask Visual Referencing:

  • Concept: Provide a visual example for the AI without directly including it in the generation area or as a formal "Reference Image" parameter.

  • How:

    1. Add an image layer that looks similar to what you want to generate (e.g., a specific type of texture, an object style).

    2. Position this "visual cue" layer on the canvas outside your masked area but, if possible, inside the Generation Area Bounding Box, or at least visible nearby on the canvas if your Generation Area is small.

    3. In your text prompt, explicitly tell the AI to take inspiration from it. For example: "In the masked area, generate a stone texture similar to the sample image on the left."

  • Benefit: Can help guide the AI towards a specific style or visual characteristic when direct parameter control isn't enough, relying on the AI's ability to "see" nearby context within its processing window. The effectiveness varies by model.
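
The cue only helps if it ends up in the context the model actually receives, i.e. inside the crop defined by the Generation Area Bounding Box. Below is a minimal sketch of that placement check, assuming a hypothetical cue image and placeholder coordinates:

```python
# Illustrative sketch: the off-mask cue must fall inside the crop the model receives.
from PIL import Image

canvas = Image.open("canvas.png").convert("RGBA")
cue = Image.open("stone_texture_sample.png").convert("RGBA")   # hypothetical visual cue layer

bbox = (256, 128, 768, 640)   # Generation Area Bounding Box (left, top, right, bottom)
cue_pos = (280, 150)          # inside the bounding box, but away from the masked area

# Composite the cue onto the canvas before generating so it is part of the cropped context.
staged = canvas.copy()
staged.alpha_composite(cue, dest=cue_pos)

# Verify the cue sits fully inside the generation area; otherwise the model never sees it.
cue_box = (cue_pos[0], cue_pos[1], cue_pos[0] + cue.width, cue_pos[1] + cue.height)
inside = (cue_box[0] >= bbox[0] and cue_box[1] >= bbox[1]
          and cue_box[2] <= bbox[2] and cue_box[3] <= bbox[3])
print("cue visible to the model:", inside)
```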

2. Masked Seeding with Partial Strength:

  • Concept: Place an image that resembles your desired outcome directly under your mask, and then use a partial "Denoising Strength" (or equivalent setting) to have the AI modify it rather than completely replace it.

  • How:

    1. Add an image layer (your "seed image") that has the general form, color, or texture you want.

    2. Position this seed image layer so it's underneath the area you intend to mask and inpaint.

    3. Create your mask over this area.

    4. In the selected "Edit" model's parameters:

      • Set "Destroy Fill Area Behind Mask" to "No" (or if a "Denoising Strength" parameter is available, set it to a value less than 1.0, e.g., 0.5 - 0.75).

    5. Write a prompt that guides the modification (e.g., "refine the dog's fur, make it fluffier").

  • Benefit: Allows you to heavily influence the core structure or color palette of the inpainted area, giving the AI a strong starting point to build upon. Great for stylizing, subtle changes, guided refinements, or simply integrating an element with the correct lighting levels and shadow orientation.
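
Under the hood, a partial denoising strength means the seed image is only partly noised before being denoised again, so its overall shape, colors, and lighting carry through into the result; higher values give the AI more freedom, lower values keep the seed more intact. A minimal sketch using the strength parameter of the open-source diffusers inpainting pipeline (the model ID, file names, and values are placeholders, not Fater's Edit-model internals):

```python
# Illustrative sketch of partial-strength inpainting over a "seed image" (not Fater's internal code).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder crops: the area under the mask already contains the seed image layer.
seed_crop = Image.open("seed_crop.png").convert("RGB").resize((512, 512))
mask_crop = Image.open("mask_crop.png").convert("L").resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

# strength < 1.0 only partially noises the seed image before denoising,
# so its structure and palette survive into the generated result.
result = pipe(
    prompt="refine the dog's fur, make it fluffier",
    image=seed_crop,
    mask_image=mask_crop,
    strength=0.6,             # roughly the 0.5 - 0.75 range suggested above
    num_inference_steps=40,
    guidance_scale=7.0,
).images[0]
result.save("refined_crop.png")
```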

3. Segmented Generation (Piece-by-Piece):

  • Concept: For complex objects like characters, break down the generation into multiple steps, focusing on one part at a time to achieve higher detail or combine different elements.

  • How (Example: Generating a Character):

    1. Upper Body:

      • Have a source image layer (e.g., a character reference or a previously generated body).

      • Mask only the upper body area.

      • Prompt for the desired upper body (e.g., "wearing a futuristic silver jacket").

      • Generate. Let's call this new layer "UpperBody_V1".

    2. Lower Body:

      • Make "UpperBody_V1" visible. Hide or delete the original source layer if it's no longer needed for leg reference.

      • Now, mask the area where the legs should be, potentially overlapping slightly with the bottom of "UpperBody_V1" for a good blend.

      • Ensure the Generation Area Bounding Box covers this new mask and the relevant part of "UpperBody_V1" for context.

      • Prompt for the desired lower body (e.g., "matching futuristic silver pants, combat boots").

      • Generate. This creates "Legs_V1".

    3. Combine & Refine: You now have separate layers for the upper and lower body, which you can refine, mask further, or merge.

  • Benefit: This "split technique" allows you to:

    • Focus the AI's attention (and pixel budget) on smaller, more manageable parts, often leading to better detail.

    • Iterate on individual components without regenerating the entire object.

    • Combine elements from different generation attempts or styles.
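
The same split can be expressed as a simple loop: each segment gets its own mask and prompt, and every pass runs on the canvas produced by the previous one, so later pieces inherit the earlier results as context. A rough sketch, using a hypothetical inpaint() helper (for example, a wrapper around the pipeline shown earlier) and placeholder mask files:

```python
# Illustrative sketch of piece-by-piece generation (hypothetical helper, not Fater's API).
from PIL import Image

def inpaint(canvas: Image.Image, mask: Image.Image, prompt: str) -> Image.Image:
    """Placeholder for one inpainting pass (e.g. wrapping the pipeline shown earlier),
    returning the full canvas with the masked region regenerated."""
    raise NotImplementedError

canvas = Image.open("character_reference.png").convert("RGB")

# Each segment has its own mask and prompt; masks that overlap slightly help the pieces blend.
segments = [
    ("mask_upper_body.png", "wearing a futuristic silver jacket"),
    ("mask_lower_body.png", "matching futuristic silver pants, combat boots"),
]

layers = []
for mask_path, prompt in segments:
    mask = Image.open(mask_path).convert("L")
    canvas = inpaint(canvas, mask, prompt)   # e.g. "UpperBody_V1", then "Legs_V1"
    layers.append(canvas.copy())             # keep each pass as its own layer for later refinement
```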


Tips for All Inpainting/Editing:

  • Precise Masks are Key. Clean, well-fitted mask edges keep changes from bleeding outside the target area and reduce visible seams.

  • Context is Important (Generation Area & Surrounding Pixels). The model only sees what falls inside the Generation Area Bounding Box, so include enough surrounding pixels for it to match lighting, perspective, and texture.

  • Iterate on Prompts and Parameters. Small changes to wording, Steps, Guidance, or Seed can noticeably change the result, so expect a few attempts before you settle on one.

Experimenting with these advanced techniques can unlock even more creative possibilities with Fater's image editing and inpainting capabilities.
