
Exploring ComfyUI Workflows with AI Expert Bora Özkut

  • Writer: GridMarkets.AI
  • Aug 19
  • 6 min read

Updated: Aug 26

Factors & Inspiration to Become an AI Expert


My journey into generative AI began as a natural evolution of my passion for technology, storytelling, and automation. With over a decade of experience in digital marketing and technical direction, I was constantly exploring new tools to enhance creative workflows. Around 2022, I started seeing generative AI not just as a novelty, but as a transformative force — one that could reshape how we think about content, design, and communication.


The spark came during my time at Alaaddin Adworks, where I led the integration of AI tools like GPT-4 into content pipelines. What began as an experiment in automation quickly evolved into a full creative revolution. My background in communication design from Bilkent University gave me a solid aesthetic foundation, while my technical experience gave me the means to build systems around it. Key influences include open-source AI communities and trailblazing artists experimenting with neural networks. Platforms like Hugging Face, RunwayML, and Stability AI opened the floodgates for independent creators like me.






Bora's profile, created using ComfyUI
LoRA model trained: Boraozkut.Safetensors

What were your first steps into AI?

The early days were chaotic, exciting, and full of trial and error. I started with tools like Midjourney and Automatic1111, but it was ComfyUI that unlocked the full potential of visual programming and AI creativity. My first generative project was building prompt-based marketing visuals, but soon I was training LoRA models and fine-tuning Stable Diffusion for custom client needs. One of the main challenges was bridging the gap between technical complexity and creative output. ComfyUI’s node-based workflow came with a learning curve, but once I understood its modularity, I felt completely in control of my creative process.


What's your story?

My career has been a blend of strategy, design, and innovation. After earning a degree in Communication and Design from Bilkent University, I transitioned into leadership roles in digital agencies, eventually becoming Senior Technical Lead Director at Alaaddin Adworks. There, I helped deploy AI-enhanced SEO and automated content systems using LLMs and image generation tools. I also work as an AI instructor with Datalent (Azerbaijan), where I teach professionals how to use large language models, prompt engineering, and visual AI tools like ComfyUI. This dual role — both as a practitioner and educator — keeps me on the edge of innovation while staying grounded in real-world applications.


Give me a big win & a totally flopped moment!

Amazing Experience: One highlight was leading a team to develop AI-generated visuals for a multinational campaign — blending LoRA-trained models with real-time asset generation. It wasn’t just the visuals that impressed, but how we reduced turnaround time by over 60%. Seeing traditional creatives embrace AI workflows was incredibly rewarding.

Flop: Early in my AI journey, I attempted to generate a series of stylized visuals for a brand using a poorly fine-tuned model. The outputs were inconsistent and unusable. It forced me to dive deeper into dataset curation, parameter tuning, and ethical AI practices. That failure was essential — it taught me the importance of patience, iteration, and respecting model boundaries.


When and Why Did You Move to AI? Why ComfyUI?

My transition into AI was gradual, starting with GPT and Stable Diffusion. I tested tools like Midjourney and Automatic1111, but I found them limiting for advanced workflows. ComfyUI gave me unmatched flexibility — its node-based logic empowered me to build precise, modular systems for visual generation. I adopted ComfyUI in early 2023, impressed by its open architecture, customizability, and the active community behind it.


Why do you customize ComfyUI workflows?

Customizing ComfyUI was a natural step as my projects grew more complex. I use fine-tuning techniques like LoRA training to create domain-specific models, especially for architecture, portraiture, and campaign visuals. My goal is always clarity, efficiency, and creative control. These customizations allow me to deliver unique results while maintaining brand consistency, whether it's for a commercial client or a personal art project.
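To make this concrete, here is a minimal sketch of how a LoRA-trained model slots into a generation pipeline using Hugging Face's diffusers library. The base model ID, LoRA filename, and prompt are illustrative placeholders, not the exact assets used in these projects.

```python
# Minimal sketch: loading a domain-specific LoRA on top of a Stable Diffusion
# base model with Hugging Face diffusers. The checkpoint ID, LoRA path, and
# prompt are placeholders, not the actual assets behind these projects.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base checkpoint (placeholder)
    torch_dtype=torch.float16,
).to("cuda")

# Attach LoRA weights fine-tuned for a specific domain (e.g. architecture).
pipe.load_lora_weights("path/to/architecture_lora.safetensors")

image = pipe(
    "modern glass storefront at dusk, editorial photography",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("lora_preview.png")
```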



Porsche – Real to Synthetic Background Integration

Compositing Multiple Realistic Cars into a Generated Scene.

In this project, I combined two static images of Porsche models into a synthetically generated urban storefront environment. The goal was to simulate a luxury photoshoot without relying on location-based photography or expensive set design.


Original images

Workflow Summary:

  • Generated a realistic storefront background using AI.

  • Isolated Porsche images and composited them into the scene.

  • Added and adjusted shadows to match environmental light direction (sketched in code after this list).

  • Fine-tuned scale, reflections, and blending for realism.
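To illustrate the compositing and shadow steps named above, here is a hedged Pillow sketch: it pastes a cutout car over a generated background and fakes a soft contact shadow derived from the cutout's alpha channel. File names, offsets, and opacity are assumptions; the actual work was done with ComfyUI nodes.

```python
# Hypothetical compositing sketch with Pillow: paste an RGBA car cutout onto
# a generated background and add a soft, offset shadow built from its alpha.
# File names and offsets are illustrative; the original used ComfyUI nodes.
from PIL import Image, ImageFilter

background = Image.open("generated_storefront.png").convert("RGBA")
car = Image.open("porsche_cutout.png").convert("RGBA")

# Build a shadow from the car's alpha channel: a semi-transparent black
# silhouette, blurred to soften its edges.
alpha = car.split()[3]
shadow = Image.new("RGBA", car.size, (0, 0, 0, 0))
black = Image.new("RGBA", car.size, (0, 0, 0, 160))
shadow.paste(black, mask=alpha)
shadow = shadow.filter(ImageFilter.GaussianBlur(12))

# Offset the shadow away from the light source, then composite the car on top.
car_pos = (400, 620)
shadow_pos = (car_pos[0] + 25, car_pos[1] + 15)
background.alpha_composite(shadow, dest=shadow_pos)
background.alpha_composite(car, dest=car_pos)
background.save("composite.png")
```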


AI-generated scenes

Challenges & Fixes:

Early versions had mismatched perspectives and unrealistic shadows. I corrected these by adjusting vanishing points and applying depth-aware shadow gradients.

Outcome:

ComfyUI maintained visual integrity and commercial quality throughout the composite.


AI-generated scenes can reduce costs, increase flexibility, and allow brands to iterate and localize campaigns quickly.


Hug – Emotional Composition Using Flux Kontext

Combining Photos and Simulating Movement.

This test focused on Flux Kontext, an advanced generative AI model that understands spatial and emotional relationships in multi-subject scenes. I merged two photos of different people to create a cohesive "hugging" interaction and applied motion elements for a dynamic result.

Flux Kontext represents a milestone in generative AI by contextualizing spatial relationships—allowing believable, emotionally intelligent compositions. It reduces visual dissonance in scenes involving human interaction, which is a challenge in multi-subject synthesis.

Separate portraits

Workflow Summary:

  • Input separate portraits of two characters.

  • Used pose-guided generation to align body positioning (a stand-in sketch follows this list).

  • Applied subtle animation (head tilt, hand motion).

  • Enhanced facial expression consistency.
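Flux Kontext's pose-guidance internals aren't detailed here, so the sketch below uses a generic OpenPose ControlNet in diffusers as a stand-in for the pose-alignment step. It illustrates the idea of conditioning generation on a target pose; it is not Flux Kontext itself, and paths and prompts are placeholders.

```python
# Stand-in sketch for pose-guided alignment using an OpenPose ControlNet with
# diffusers. This is NOT Flux Kontext; it only shows the general idea of
# conditioning generation on a target pose. Paths and prompts are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A pre-rendered pose map showing two figures in a hugging posture.
pose_map = Image.open("hug_pose_skeleton.png")

image = pipe(
    "two friends embracing, warm light, photorealistic",
    image=pose_map,
    num_inference_steps=30,
).images[0]
image.save("hug_composition.png")
```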


Technical Strengths:

Flux Kontext enables more believable multi-person composition, useful in advertising, storytelling, or creating emotionally driven short-form content.

Outcome:

The result was a visually cohesive and emotionally resonant scene—made possible by Flux Kontext’s ability to generate meaningful interaction between separate sources.


Marketing – AI Character Interacting with Real Products

Simulating Product Engagement in a Static Scene.

In this experiment, I composed a real product image with an AI-generated character that appears to interact with the product (via gaze, posture, and expression).

Interacting with real-world products using AI characters is becoming a key technique in AI-powered marketing. It enables scalable, dynamic brand storytelling without reshooting content. Virtual ambassadors can now "engage" with physical products in unique ways for various regions, personas, or social platforms.

AI Character Interacting with Real Products

Workflow Summary:

  • Added product into a base canvas.

  • Generated character using prompts and pose control to simulate interaction.

  • Adjusted gaze direction, hand pose, and facial expression for engagement.

  • Blended lighting to unify both elements (see the color-matching sketch below).
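As a rough stand-in for the lighting-blend step, the sketch below matches the character render's color distribution to the product photo using histogram matching from scikit-image. File names are hypothetical; the original adjustment was done inside ComfyUI.

```python
# Hedged sketch of the lighting-unification step: shift the AI character's
# color distribution toward the product photo via histogram matching.
# An illustrative stand-in; the original blending used ComfyUI nodes.
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

product = np.asarray(Image.open("product_photo.png").convert("RGB"))
character = np.asarray(Image.open("ai_character.png").convert("RGB"))

# Match the character's per-channel tones to the product photo's lighting.
matched = match_histograms(character, product, channel_axis=-1)
Image.fromarray(matched.astype(np.uint8)).save("character_relit.png")
```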

AI Character Interacting with Real Products

Outcome:

The result was an engaging static image in which the character and product appear naturally connected, a crucial strategy in consumer-focused AI marketing.


Empowering creators, marketers, and technologists to iterate faster



You recently collaborated with GridMarkets and created a ComfyUI Flux GGUF + ControlNet Canny workflow.

Can you walk us through it?

  • Loading a photorealistic base model with Flux GGUF.

  • Using ControlNet with Canny edge detection to guide structural outputs (the edge-extraction step is sketched after this list).

  • Iteratively refining parameters for lighting, texture, and realism.

  • Exporting high-resolution sequences for integration into video or web media.
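As a hedged illustration of the Canny step, the snippet below produces the kind of edge map that ControlNet consumes as structural guidance. Thresholds and file names are placeholders rather than the workflow's exact values.

```python
# Illustrative sketch of the Canny preprocessing step: extract an edge map
# from a reference image for ControlNet to use as structural guidance.
# Thresholds and file names are placeholders, not the workflow's exact values.
import cv2

reference = cv2.imread("reference_architecture.jpg", cv2.IMREAD_GRAYSCALE)

# The low/high hysteresis thresholds control how much structure survives.
edges = cv2.Canny(reference, threshold1=100, threshold2=200)
cv2.imwrite("canny_control_map.png", edges)
```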


This process shows how technical precision and creative intent can align beautifully in AI-generated architecture.





What keeps you inspired?

I’m continually inspired by pioneers blending code with creativity — content creators like Pixaroma, Olivio Sarikas, PurzBeats, and Mickmumpitz, and the broader AI artist community pushing generative design. I also draw heavily from architecture, cinematography, and surrealism, which often influence the tone and mood of my visuals.



About GridMarkets

I discovered GridMarkets through the AI artist community on LinkedIn. After sharing a showreel of my work with ComfyUI, Rose Altimiras reached out to me directly, which led to a deeper conversation about creative rendering workflows. GridMarkets’ rendering infrastructure has been a game-changer, especially for high-resolution outputs and batch processing. It’s allowed me to scale my experimentation, test more complex workflows, and deliver client projects with much greater efficiency.


Any final thoughts?

We’re at the beginning of a creative renaissance powered by AI. My advice to new artists: don’t be afraid of the tech. Learn the tools, embrace experimentation, and keep your creative intuition at the centre.

Looking forward, I aim to publish educational content around AI workflows, contribute to open-source ComfyUI tools, and collaborate globally on generative design projects that merge art, architecture, and human-computer interaction.

Live Portrait workflow in ComfyUI

Do you have questions? Do you need a workflow?


Please contact Bora: linkedin.com/in/boraozkut or send us an email: ai@gridmarkets.com


