• 8 games have pushed publishing dates in response to Silksong

    At least eight developers have decided to push their projects' publishing dates after Team Cherry announced that Hollow Knight: Silksong is releasing on September 4. Exactly a week ago, the studio shared the news via its own YouTube channel. During the past seven days, multiple developers announced delays for their game and demo releases, directly mentioning Silksong's popularity.
    At the time of writing, this includes Aeterna Lucis by Aeternum Game Studios, Stomp and the Sword of Miracles by Frogteam Games, CloverPit by Panik Arcade, Demonschool by Necrosoft Games, Little Witch in the Woods by Sunny Side Up, Faeland by Talegames, Megabonk by Vedinad, and Baby Steps, which is being published by Devolver Digital.
    The release of Little Witch in the Woods initially matched Silksong's. In the delay announcement, Sunny Side Up said it's moving the launch to September 15, given the "immense influence" of Team Cherry's game. "We fear that launching Little Witch in the Woods on the same day would not only dishearten our dedicated team but also disappoint our devoted audience," the studio wrote.
    Both CloverPit and Demonschool, now releasing on September 26 and November 19, respectively, were initially locked in for a launch on September 3, the day before Silksong's release.
    "Silksong is the most anticipated and wishlisted game on all of Steam and we think people will love this game and play it right at launch (including us) but that also means it will overshadow all games launching close to it," Panik Arcade wrote. According to GamesIndustry.biz, almost 5 million people have wishlisted Silksong. "So if we stick to our original date we would risk the launch of CloverPit a fair bit."
    Ysbryd Games, the publisher of Demonschool, called the process behind the decision an "anguished consideration," saying that it's "reasonably qualified to say that at any point of 2025 on balance, has been or will be as brutal as market conditions can get" when it comes to picking a release date. "Crueler still, that we should find out with such short notice that Hollow Knight: Silksong will launch just one day after our planned release for Demonschool," the publisher added.
    Via Bluesky, Necrosoft Games said that the delay "was not our choice," but that the studio "understands why the choice was made." Necrosoft said that Ysbryd is paying for the delay. "Dropping the GTA of indies with 2 weeks notice makes everyone freak [out]," the developer added.
    "We have to remind ourselves that gaining visibility for Demonschool is our main goal," the publisher's statement continues. "Thus, the Ysbryd team strongly believes we would not be doing our game any favors by wading into waters we can clearly see are blood red."
    The other four games were all slated to release sometime in September, but decided to move either later in the month or directly into 2026, as is the case for Aeterna Lucis. Among these, Stomp and the Sword of Miracles and Faeland are notable examples, considering both games also fall into the metroidvania genre (similar to Silksong).
    Frogteam Games planned to release a demo for Stomp and the Sword of Miracles on August 29, with a Kickstarter campaign launching on September 12. Now, the developer is unsure about when it'll resume these plans.
    "Trying to market an indie game is already really, really hard," Frogteam wrote. "It's the task of trying to get attention in a deep sea of other amazing games. In the case of Silksong, however, I feel like a little krill trying not to get eaten by a blue whale. Tiny devs like me rely on word of mouth and streamers to bring in visibility, and everyone's gonna be busy with Silksong for quite a while."
    www.gamedeveloper.com
  • Pro-Style GameCube Controller For Switch 1/2 & PC Is Only $40 This Weekend

    NYXI Warrior Lite Wireless Controller for Switch 2: $40 (was $50). See at Amazon.
    NYXI Warrior Wireless Controller for Switch 2: $55 (was $69) | Adds support for original GameCube hardware. See at Amazon.
    A few of the best GameCube-inspired controllers for Nintendo Switch, Switch 2, and PC are on sale for their best prices of the year at Amazon. NYXI's Warrior and Wizard wireless controllers combine the GameCube form factor with modern features like Hall Effect sticks, remappable back buttons, microswitch triggers, and other customization options. The Warrior Lite Bluetooth Controller released earlier this year with the GameCube's iconic purple color scheme. Normally $50, you can grab this versatile gamepad for $40, which is the best price yet.
    If you also want 2.4GHz wireless support for original GameCube and Wii hardware (and PC with the included adapter), you can step up to the Warrior for $55 (was $69).
    Continue Reading at GameSpot
    www.gamespot.com
  • Drop Into the Battle: ‘Gears of War: Reloaded’ Launches on GeForce NOW

    Brace yourself, COGs — the Locusts aren’t the only thing rising up. The Coalition’s legendary shooter Gears of War: Reloaded is launching day one on GeForce NOW.
    But that’s just the start. This GFN Thursday, seven games join the GeForce NOW library, including Ubisoft’s The Rogue Prince of Persia, the electrifying 2D roguelike action-platformer.
    More Grit, More Gears
    Never skip leg day.
    Chainsaws — check. Grizzled one-liners — absolutely. Gears of War: Reloaded is back, buffed and primed, remastered from the ground up in Unreal Engine 5. It’s the classic curb-stomping action gamers remember, now with visuals sharp enough to make the Locust run for cover. Form up and get loud.
    Dive into battle with Marcus, Dom and the rest of Delta Squad to fight tooth and chainsaw to save humanity from the subterranean Locust Horde. Carve through the epic campaign solo or tag in friends for online co-op mode. The remastered version packs every blast, chainsaw duel and bro fist from the original — plus bonus campaign missions, multiplayer maps and more. Tackle battles with modern controls for franchise newcomers or classic controls for veterans — no grunt left behind.
    Stream Gears of War: Reloaded on GeForce NOW and witness Unreal Engine’s best visuals without upgrading hardware. Run multiplayer with the lowest latency with an Ultimate membership, cross-play with the squad and see every crumbling wall and flying chunk — all from the cloud, effortlessly.
    Greatest Leap Yet
    Kick first, ask questions later.
    The Rogue Prince of Persia 1.0 marks the game’s full release after months of early access, bringing refined parkour, polished combat, fresh content and the complete story of the rogue heir racing to reclaim his kingdom. Sprint, vault and wall-run through a reimagined Persia as the prince battles to undo a deadly curse and stop the invading Huns.
    Each run is a new fight for survival, blending fluid platforming with swift, acrobatic combat. Leap over traps, chain stylish moves and wield an ever-expanding arsenal while unlocking medallions, upgrading gear and uncovering the truth behind the prince’s fall — and his shot at redemption.
    On GeForce NOW, the adventure shines at its best with up to 4K 120 frames-per-second streaming. Land every parkour move with perfect timing thanks to ultralow latency and take the prince’s fight anywhere, instantly, on nearly any device.
    Let’s Play Today
    Catbots > Brainblobs.
    Make sure to check out Chip ‘n Clawz vs. The Brainioids, a quirky action-strategy hybrid from X-COM creator Julian Gollop, where players control a clever inventor and his robo-cat to fight off an invasion of bizarre Brainioid aliens. Mix third-person action with real-time strategy while building bases, commanding bot armies and squishing rogue brains in solo and co-op modes, all wrapped in a colorful, comic-book world. Couch and online multiplayer, player vs. player battles and a humorous campaign make this a fresh, approachable take on the strategy genre.
    In addition, members can look for the following:

    Gears of War: Reloaded (New release on Steam and Xbox, available on PC Game Pass, Aug. 26)
    Chip ‘n Clawz vs. The Brainioids (New release on Steam, Aug. 26)
    Make Way (Free, new release on Epic Games Store, Aug. 28)
    Among Us 3D (Steam)
    Gatekeeper (Steam)
    Knightica (Steam)
    No Sleep for Kaname Date – From AI: THE SOMNIUM FILES (Steam)
    What are you planning to play this weekend? Let us know on X or in the comments below.

    This is a duo appreciation post.
    Who are you locking in with? (tag them) — NVIDIA GeForce NOW (@NVIDIAGFN), August 27, 2025
    blogs.nvidia.com
  • 25 Years Later, If You Can Only Play One Spider-Man Game, Let It Be Neversoft’s Spider-Man

    Over a dozen Spider-Man games had been released before it, but 25 years later, Neversoft and Activision's Spider-Man stands out as one of the most formative and enduring, thanks to its boundless charm and lasting impact. Indeed, this Spider-Man, one of many games to stick solely with the character's name, ushered in a new age for Marvel video games and enjoyed a momentous heyday with immediate sequels, as well as a playable cameo for this iteration of the wall-crawler in X-Men: Mutant Academy 2.
    gamerant.com
  • Every Sakamoto Days Main Character's Ages, Heights, And Birthdays

    Yuto Suzuki's Sakamoto Days is one of the best shonen manga of the 2020s, and it is only going from strength to strength. Along with great art and creative fight sequences, the series is also known for fantastic characters, be they awesome (and funny) heroes or intimidating (and funny) villains.
    gamerant.com
  • Designing Atmospheric WWI Plane Crash Scene In Abandoned German Asylum

    Introduction
    Hi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year.
    Planning
    Under the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects. So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it.
    Blockout & Composition
    I started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement. After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props. I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.
    Asset Production Workflow
    Once the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules.
    Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.
    As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.
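    Since the railing pieces share tiling textures, offsetting their UVs by whole-tile amounts keeps the texture seamless while each piece samples a different part of it. Below is a minimal Blender Python sketch of that idea, under my own assumptions rather than the author's exact script: it shifts an entire object's active UV map by a random integer offset (rather than per island), which is enough when each railing piece is its own object.

```python
import bpy
import random

def offset_uvs_randomly(obj, max_tiles=4):
    """Shift the object's active UV map by a random whole-tile amount.
    Integer offsets keep tileable textures seamless while changing
    which part of the texture this piece samples."""
    uv_layer = obj.data.uv_layers.active
    du = random.randint(0, max_tiles)
    dv = random.randint(0, max_tiles)
    for loop_uv in uv_layer.data:
        loop_uv.uv.x += du
        loop_uv.uv.y += dv

# Example usage: apply to every selected mesh (e.g. the separated railing pieces).
for obj in bpy.context.selected_objects:
    if obj.type == 'MESH':
        offset_uvs_randomly(obj)
```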
    For the vegetation, I used assets from Quixel Megascans. Since the pack didn't include vertical vegetation, I sourced a different ivy asset that contained vertical elements. I removed the leaves and kept only the branches. Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left.
    Regarding the assets, I didn't use high-to-low poly baking in this project. Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.
    One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.
    Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible. Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs.
    All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask, the second for tileable textures, to maintain high quality during the texturing phase, and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.
    Texturing
    For the texturing, I wanted to experiment with a workflow I hadn't tried before. The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures. Tileable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.
    Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node.
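    As a rough illustration of what that vertex-color blend does, here is a generic height-aware lerp sketched in Python. This is my own approximation of the idea, not the exact math of Unreal's HeightLerp node: the painted vertex weight drives how far the transition has progressed, and the top layer's height map sharpens the border so the second material appears first in cracks and raised detail.

```python
def saturate(x: float) -> float:
    """Clamp to the 0-1 range, like HLSL's saturate()."""
    return max(0.0, min(1.0, x))

def height_aware_blend(base, top, height, vertex_weight, contrast=4.0):
    """Blend two tileable layers per pixel.
    base, top: channel tuples (e.g. RGB) from the two tiling materials.
    height: 0-1 height map sample of the top layer.
    vertex_weight: painted 0-1 vertex color controlling blend progress.
    contrast: higher values give a crisper, height-driven transition edge.
    """
    alpha = saturate((vertex_weight - (1.0 - height)) * contrast + 0.5)
    return tuple(b + (t - b) * alpha for b, t in zip(base, top))

# Example: with a half-painted weight, high-height areas of the top layer
# blend fully to the top material, while low-height areas stay on the base.
print(height_aware_blend((0.2, 0.2, 0.2), (0.6, 0.5, 0.4), height=0.8, vertex_weight=0.5))
print(height_aware_blend((0.2, 0.2, 0.2), (0.6, 0.5, 0.4), height=0.2, vertex_weight=0.5))
```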
    All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.
    I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.
    For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop. I only used Photoshop to combine all the baked corners into a single normal texture, as shown below.
    Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled, and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.
    Composition
    When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.
    Lighting
    For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest.
    For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked. Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default.
    For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.
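    Those values map onto Movie Render Queue settings, and a preset like this can also be set up through Unreal's Python scripting. The sketch below is a hedged approximation, not the author's setup: the class and property names reflect the Movie Render Queue bindings as I understand them and are version-dependent, so they should be verified against your engine (for example, older builds use MoviePipelineMasterConfig instead of MoviePipelinePrimaryConfig), and the surrounding queue/job setup is assumed to exist already.

```python
import unreal

def configure_high_quality_render(config):
    """Approximate the final-render settings described above on an existing
    Movie Render Queue config: EXR output with PIZ compression, 1 spatial /
    64 temporal samples, and a 120-frame render and engine warm-up."""
    # Accumulation / anti-aliasing samples (property names assumed from the MRQ bindings).
    aa = config.find_or_add_setting_by_class(unreal.MoviePipelineAntiAliasingSetting)
    aa.spatial_sample_count = 1
    aa.temporal_sample_count = 64
    aa.render_warm_up_count = 120
    aa.engine_warm_up_count = 120

    # EXR image sequence output with PIZ compression (multilayer property name assumed).
    exr = config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_EXR)
    exr.compression = unreal.EXRCompressionFormat.PIZ
    exr.multilayer = True
```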
    Conclusion
    And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete.
    Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn't sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way. One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice; instead, focus on the overall composition and visual impact, especially from the player's point of view.
    My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there's a real risk of dragging the project out much longer than necessary. As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.
    Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!
    Leandro Grasso, 3D Environment Artist
    Interview conducted by Emma Collins
    80.lv
    IntroductionHi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year. PlanningUnder the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects.So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it. Blockout & CompositionI started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement.After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props. I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.Asset Production WorkflowOnce the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules.Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.For the vegetation, I used assets from Quixel Megascans. Since the pack didn’t include vertical vegetation, I sourced a different ivy asset that contained vertical elements. 
I removed the leaves and kept only the branches.Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left.Regarding the assets, I didn't use high-to-low poly baking in this project. Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible.Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs. All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask, the second for tileable textures, to maintain high quality during the texturing phase, and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.TexturingFor the texturing, I wanted to experiment with a workflow I hadn't tried before. The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures.Tilable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node. All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it.Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. 
    All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.

    I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.

    For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop. I only used Photoshop to combine all the baked corners into a single normal texture, as shown below.

    Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled, and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.

    Composition

    When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.

    Lighting

    For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest.

    For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked. Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default.
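    As a side note on that texture: a neutral 256x16 LUT strip of this kind is commonly laid out as 16 side-by-side 16x16 tiles, with red running across each tile, green running down, and blue stepping once per tile. Below is a small sketch that generates such a neutral strip (assuming Pillow is installed; the file name is a placeholder), which could then be graded externally, for example in DaVinci Resolve, and brought back in as the color-grading texture:

```python
# Illustrative sketch: write a neutral 256x16 color-grading LUT strip.
from PIL import Image

STEPS = 16  # 16 values per channel -> 16 tiles of 16x16 pixels
lut = Image.new("RGB", (STEPS * STEPS, STEPS))

def level(i):
    # Map a 0..15 step to a 0..255 channel value.
    return round(i * 255 / (STEPS - 1))

for b in range(STEPS):          # one tile per blue step
    for g in range(STEPS):      # rows within a tile: green
        for r in range(STEPS):  # columns within a tile: red
            lut.putpixel((b * STEPS + r, g), (level(r), level(g), level(b)))

lut.save("neutral_lut_256x16.png")  # placeholder file name
```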
    For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.

    Conclusion

    And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete.

    Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn't sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way. One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay.

    I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice; instead, focus on the overall composition and visual impact, especially from the player's point of view.

    My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there's a real risk of dragging the project out much longer than necessary. As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.

    Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!

    Leandro Grasso, 3D Environment Artist
    Interview conducted by Emma Collins
  • Romancing SaGa 2 Gets Major Price Cut For PS5 And Switch

    Romancing SaGa 2: Revenge of the Seven for PS5 -- $30 (was $50) | See at Amazon / See at Best Buy
    Romancing SaGa 2: Revenge of the Seven for Nintendo Switch -- $30 (was $50) | See at Amazon / See at Best Buy

    Romancing SaGa 2: Revenge of the Seven is on sale for only $30 for Nintendo Switch and PS5 at Amazon this week. Though not technically part of Amazon's Labor Day Sale, the retailer is matching deals offered in competitor promotions, and the fully rebuilt remake of this Square Enix classic is one of them.

    Nintendo Switch 2 owners can buy the Switch physical edition for $30 and purchase the official upgrade from the eShop for $10. The Switch 2 edition is $50 on the eShop, so you're still saving 10 bucks overall with the physical edition.

    Continue Reading at GameSpot
  • John Wick: Ballerina 4K Steelbook Edition Preorders Restocked For $30 At Walmart

    Ballerina: World of John Wick - Limited Edition Steelbook (4K Blu-ray) -- $30 | Exclusive to Walmart / Restocked August 27 | Preorder at Walmart

    The retailer-exclusive Steelbook Editions of Ballerina: From the World of John Wick have been difficult to find in stock at Walmart and Amazon since preorders first opened in early June. But with only two weeks until the John Wick spin-off pirouettes onto 4K Blu-ray, Walmart has restocked its exclusive collectible edition. Walmart's Ballerina Limited Edition Steelbook is available to preorder for only $30 ahead of its September 9 release.

    Ballerina on 4K Blu-ray:
    Walmart's Steelbook Edition -- $30 | In stock
    Amazon's Steelbook Edition -- $34.35 | Sold out
    Standard Edition -- $28 ($43)

    The Amazon-exclusive Ballerina Steelbook was actually back in stock for a very brief window while we were writing this story, but it sold out again. We'd still recommend checking the store page to see if it returns. Tastes differ, but the design of Walmart's exclusive looks cooler, in part because it has a picture frame-inspired slipcover.

    Continue Reading at GameSpot
  • Romeo is a Dead Man: A sneak peek of what to expect

    What’s up, everyone? I’m gonna assume you’ve already seen the announcement trailer for Grasshopper Manufacture’s all-new title, Romeo Is A Dead Man. If not, then do yourself a favor and go watch it now. It’s cool – I’ll wait two and a half minutes.

    Play Video

    OK, so you get that there’s gonna be a whole lot of extremely bloody battle action and exploring some weird places, but I think a lot of people may be confused by the sheer amount of information packed into two and a half minutes… Today, we’ll give you a teensy little glimpse of how Romeo Stargazer – aka “DeadMan”, a special agent in the FBI division known as the Space-Time Police – goes about his “investigations”.

    Romeo Is A Dead Man, abbreviated as… I don’t know, RiaDM? or maybe RoDeMa, if you’re nasty? Anyway, one of the most notable features of the game is the rich variety of graphic styles used to depict the game world. Seriously, it’s all over the place – but like, in a good way. The meticulously-tweaked action parts are done in stunning, almost photorealistic 3D, and we’ve thrown everything but the kitchen sink into the more story-based parts.

    And don’t worry, GhM fans – we promise: for as much work as we’ve put into making the game look cool and unique, the story itself is also ridiculously bonkers, as is tradition here at Grasshopper Manufacture. We think longtime fans will enjoy it, and newcomers will have their heads exploding. Either way, you’re guaranteed to see some stuff you’ve never seen before.

    As for the actual battles, our hero Romeo is heavily armed with both katana-style melee weapons and gun-style ranged weapons alike, which the player can switch between while dispensing beatdowns. However, even the weaker, goombah-type enemies are pretty hardcore. You’re gonna have to think up combinations of melee, ranged, heavy, and light attacks to get by. But the stupidly gratuitous amount of blood splatter and catharsis you’re rewarded with when landing a real nuclear power move of a combo is awe-inspiring, if that’s your thing. On top of the kinda-humanoid creatures you’ve already seen, known as “Rotters”, we’ve got all kinds of other ultra-creepy, unique enemies waiting to bite your face off!

    Now, let’s look at one of the main centerpieces of any GhM game: the boss battles. This particular boss is, well, hella big. His name is “Everyday Is Like Monday”, because of course it is. It’s on you to make sure Romeo can dodge the mess of attacks launched by this big-ass tyrant and take him down to Chinatown. It’s one of the most feelgood beatdowns of the year!

    Also, being a member of something called the “Space-Time Police” means that obviously Romeo is gonna be visiting all sorts of weird, “…what?”-type places. And awaiting him at these weird, “…what?”-type places are a range of weird, “…what?”-type puzzles that only the highest double-digit IQ players will be able to solve! This thing looks like a simple sphere that someone just kinda dropped and busted, but once you really wrap your dome around it and get it solved, damn it feels good. There are a slew of other puzzles and gimmicks strategically or possibly just randomly strewn throughout the game, so keep your eyeballs peeled for them and try not to break any controllers as you encounter them along your mission.

    That’s all for now, but obviously there are still a whole bunch of important game elements we have yet to discuss, so stay tuned for next time!
  • Lumines Arise launches Nov 11, PS5 demo available now

    Since we first announced Lumines Arise during the State of Play in June, we’ve been inundated with the same question from fans: When will the demo be available?! And the answer is…right now! You can play the limited-time Lumines Arise Demo on PlayStation 5 now through September 3 and try out three single-player stages and help us network test the all-new multiplayer Burst Battle mode.

    We also have a release date for the full game—November 11, 2025. Pre-orders start today (and include a 10% discount for PS Plus subscribers!)—go to the PS Store page for that and to download the demo.

    Lumines for all

    Never played a Lumines game before? Or forgot how it works? Or never “got it” in the first place? Good news: Arise is incredibly easy for anyone to get into, thanks to an excellent interactive tutorial that walks you through everything, step-by-step. (And even old pros won’t wanna miss the intro to new mechanics like Burst!) The Demo only features one difficulty (Easy – the final game will have four different levels), but you’ll also find robust options to fit every play style under “Accessibility” in the Options menu. Want to just groove to the music and not worry about time pressure, or a “Game Over” when you top out? Try the “No Stress Lumines” options for that! Want to strip away the visual flourishes to focus more on the gameplay? There’s options for that! Or playing on your PlayStation Portal and want to zoom in to get the most out of your portable screen real estate? There’s options for that, too!

    Play Video

    An all-new multiplayer experience

    Burst Battle represents a complete reinvention of multiplayer Lumines, borrowing from the competitive-puzzle-game greats, but adding a twist all its own.

    Now, both players have an entire playfield to themselves and can send garbage blocks to attack their opponent. You generate these attacks by clearing 2×2 (or larger) Squares, or by triggering the all-new Burst mechanic (where you have a few Timeline passes to build a single color match as large as possible). The bigger the Burst, the larger the deluge your opponent will face! Meanwhile, garbage blocks can pile up on the sides, shrinking the available playfield—only matching blocks adjacent to garbage will clear it out. This ebb and flow can get super tense and really fun, I hope you try it out!

    The Demo features a taste of Burst Battle via matchmaking, but the full version of the game will offer friend / CPU matches, custom matches, and local play. And you’ll get to select your favorite stage music / block-visuals that you unlocked in the single-player Journey mode to use in multiplayer; it’s kind of like having your own theme song as you head into battle!

    Everyone’s here—including Astro Bot?

    Starting today, you can pre-order the Standard or Digital Deluxe Edition of Lumines Arise on PlayStation Store. And as mentioned above, PS Plus members get a 10% discount on the pre-order.

    The Digital Deluxe Edition (also available as an upgrade to the Standard Edition) includes the full game and four exclusive Loomii in-game avatars. You can customize your Loomii in-game to match your personality, and the set in the Digital Deluxe Edition includes skins based on Tetris Effect: Connected, Rez Infinite, Humanity, and, what’s this—Astro Bot is appearing as a guest as well! A big thank you to our friends at Team Asobi for making this crossover possible. The image above is just a preview—the final look of these avatars will be revealed soon.

    Also, because it wouldn’t be Lumines Arise news without some new music, a new single from the soundtrack has been released. Hydelic’s hypnotically thumping anthem “Dreamland” is the sonic backdrop of the Chameleon Groove stage from the Demo, and is available now on Bandcamp with a release soon on your favorite streaming services. We know that after you play the demo, you’ll want to add this to your favorite daily playlist.

    A quick note for PS VR2 owners: unfortunately VR mode couldn’t make it in time for this demo, but we can confirm it will be available at launch on November 11! Thank you for all your passion and excitement for VR, and in this case, for your patience. (And maybe you’ll get a glimpse of Arise in VR somewhere sometime before launch after all…?) We hope you’ll check out the Demo, tell us what you think, and get ready for the launch of the full game on November 11.
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc. (ADI), e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
    #nvidia #jetson #thor #unlocks #realtime
    NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry. Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device. This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics. Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments. “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.” Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device. Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models. A Giant Leap for Real-Time Robot Reasoning Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency. Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally. NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization. With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases. 
Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing. With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams. Jetson Thor Set to Advance Research Innovation  Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications. At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue. “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.” Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to Jetson AGX Thor developer kit, they’ll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets. Wield the Strength of Jetson Thor The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply. NVIDIA Jetson AGX Thor Developer Kit The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors. Sensor and Actuator companies including Analog Devices, Inc., e-con Systems,  Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency. Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio. More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough. To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face. The NVIDIA Jetson AGX Thor developer kit is available now starting at NVIDIA Jetson T5000 modules are available starting at for 1,000 units. 
Buy now from authorized NVIDIA partners. NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September. #nvidia #jetson #thor #unlocks #realtime
    NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
    blogs.nvidia.com
    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry. Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device. This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics. Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments. “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.” Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device. Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models. A Giant Leap for Real-Time Robot Reasoning Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency. Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally. NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization. With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases. 
Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing. With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams. Jetson Thor Set to Advance Research Innovation  Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications. At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue. “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.” Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to Jetson AGX Thor developer kit, they’ll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets. Wield the Strength of Jetson Thor The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply. NVIDIA Jetson AGX Thor Developer Kit The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors. Sensor and Actuator companies including Analog Devices, Inc. (ADI), e-con Systems,  Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency. Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio. More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough. To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face. The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. 
NVIDIA Jetson T5000 modules are available starting at $2,999 in 1,000-unit quantities. Buy now from authorized NVIDIA partners.

NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
  • Fur Grooming Techniques For Realistic Stitch In Blender

Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses; and the word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow.
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless.
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (for the claws, I used overlapping UVs to preserve texel density for the other parts).

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.
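The ear swap described above was done by hand in Blender's viewport, but the same steps can be scripted with the bpy API. Here is a rough, hypothetical sketch; the object names and the "LeftEar" vertex group are placeholders, not names from the actual project files.

```python
# Rough sketch of the asymmetric-ear merge described above, via Blender's bpy API.
# "Stitch_LeftEarVersion", "Stitch_RightEarVersion", and the "LeftEar" vertex group
# are hypothetical placeholder names.
import bpy

left_src = bpy.data.objects["Stitch_LeftEarVersion"]  # symmetrical mesh that has the correct left ear
body = bpy.data.objects["Stitch_RightEarVersion"]     # symmetrical mesh that has the correct right ear

# 1) Separate the left ear from its source mesh, using a vertex group to select it.
bpy.ops.object.select_all(action='DESELECT')
left_src.select_set(True)
bpy.context.view_layer.objects.active = left_src
left_src.vertex_groups.active_index = left_src.vertex_groups["LeftEar"].index
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.object.vertex_group_select()
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')

# 2) Join the newly separated ear onto the main body (the active object receives the join).
ear = next(obj for obj in bpy.context.selected_objects if obj is not left_src)
bpy.ops.object.select_all(action='DESELECT')
ear.select_set(True)
body.select_set(True)
bpy.context.view_layer.objects.active = body
bpy.ops.object.join()
```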
Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, zones that demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail below), I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.
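In the article this AO pass lives as a Multiply layer at roughly 35% opacity in Substance 3D Painter. Purely as an illustration of what that blend does numerically, here is a small standalone Python sketch that applies the same operation to baked maps; the file names are placeholders and color management is ignored for simplicity.

```python
# What "AO multiplied at ~35% opacity" means numerically, applied to baked maps.
# The artist does this as a Substance 3D Painter layer; this standalone version
# only illustrates the math. File names are placeholders.
import numpy as np
from PIL import Image

OPACITY = 0.35

base = np.asarray(Image.open("stitch_basecolor_4k.png").convert("RGB"), dtype=np.float32) / 255.0
ao = np.asarray(Image.open("stitch_ao_4k.png").convert("L"), dtype=np.float32) / 255.0
ao = ao[..., None]  # broadcast the single AO channel across RGB

# Multiply blend at partial opacity: mix the untouched base with base * AO.
out = base * (1.0 - OPACITY) + (base * ao) * OPACITY

Image.fromarray((np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)).save("stitch_basecolor_ao.png")
```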
Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main Particle System and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis (a minimal script version of this setup is sketched below).

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.
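Blender's hair particle systems can also be set up from Python, which makes the per-region split described above easy to prototype or rebuild. Below is a minimal sketch under the assumption that the body mesh already has a painted vertex group per region; the object name, region names, and counts are placeholders rather than the project's actual values.

```python
# Minimal sketch of a per-region fur split: one hair particle system per body area,
# each limited by a painted vertex group. Names and values are placeholders.
import bpy

body = bpy.data.objects["Stitch_Body"]
regions = ["Head", "EarLeft", "EarRight", "TorsoFront", "TorsoBack", "Arms", "Hands"]

for region in regions:
    mod = body.modifiers.new(name=f"Fur_{region}", type='PARTICLE_SYSTEM')
    psys = mod.particle_system
    settings = psys.settings

    settings.type = 'HAIR'
    settings.count = 2000            # guide hairs; children fill in the density
    settings.hair_length = 0.04
    settings.hair_step = 2           # 2 segments for blocking; raise to 3-5 when detailing
    settings.child_type = 'INTERPOLATED'
    settings.rendered_child_count = 60

    # Limit growth to the painted region so each system can be groomed independently.
    psys.vertex_group_density = region
```

Splitting the groom this way mirrors the reasoning in the interview: each system can get its own length, clump, and child settings without touching the others.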
The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.
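The interview doesn't show the node graph, but the two ideas it names, darkening near the roots with brightening toward the tips plus subtle per-strand variation, map directly onto Blender's Hair Info node. A rough sketch of that wiring in bpy, with placeholder colors and values, might look like this:

```python
# Rough sketch of the fur shader ideas described above: root-to-tip color gradient
# via Hair Info > Intercept, plus per-strand variation via Hair Info > Random.
# Colors and values are placeholders, not the project's actual settings.
# (The Principled Hair BSDF renders in Cycles.)
import bpy

mat = bpy.data.materials.new("StitchFur")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

output = nodes.new("ShaderNodeOutputMaterial")
hair_bsdf = nodes.new("ShaderNodeBsdfHairPrincipled")
hair_info = nodes.new("ShaderNodeHairInfo")

# Root-to-tip gradient: Intercept is 0 at the root and 1 at the tip.
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (0.02, 0.05, 0.20, 1.0)  # darker root
ramp.color_ramp.elements[1].color = (0.10, 0.22, 0.65, 1.0)  # brighter tip
links.new(hair_info.outputs["Intercept"], ramp.inputs["Fac"])

# Per-strand variation: remap each strand's random value to a 0.9-1.1 brightness factor.
vary = nodes.new("ShaderNodeMath")
vary.operation = 'MULTIPLY_ADD'
vary.inputs[1].default_value = 0.2   # scale
vary.inputs[2].default_value = 0.9   # offset
links.new(hair_info.outputs["Random"], vary.inputs[0])

hsv = nodes.new("ShaderNodeHueSaturation")
links.new(ramp.outputs["Color"], hsv.inputs["Color"])
links.new(vary.outputs["Value"], hsv.inputs["Value"])

links.new(hsv.outputs["Color"], hair_bsdf.inputs["Color"])
hair_bsdf.inputs["Roughness"].default_value = 0.25
links.new(hair_bsdf.outputs["BSDF"], output.inputs["Surface"])
```

Intercept gives the position along each strand and Random gives a stable per-strand value, which is why they are the usual drivers for root-to-tip gradients and strand-to-strand variation.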
Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig from Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses: Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
    #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. 
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open.While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. 
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.When it came to UV mapping, I divided Stitch into two UDIM tiles:The first UDIM includes the head with ears, torso, arms, and legs.The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and noseSince the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters.This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. 
As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.TexturingWhen planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the frontand a darker tone on the back and napeThe nose and ears, these zones, demanded separate focusAt the initial texturing/blocking stage, the ears looked too cartoony, which didn’t fit the style I wanted. So, I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base.For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:Base detail: Baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.Lighter layer: Applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.Organic detail: In animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.Softness: To make the nose visually softer, like in references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I add an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.That covers the texturing of Stitch’s body. I also created a separate texture for the fur. This was simpler, I disabled unnecessary layers like ears and eyelids, and left only the base ones corresponding to the body’s color tones.During grooming, I also created textures for the fur's clamps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.FurAnd finally, I moved on to the part that was most important to me, the very reason I started this project in the first place. Fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. 
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts. I duplicated the main Particle System and created individual hair systems for each area where needed.In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach.Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes.I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.As part of the detailing stage, I also increased the number of segments in the Hair Guides.While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. 
This gave me much more control over fur shape and flow.The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done.I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result, this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.Rigging, Posing & SceneOnce I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.I divided the rigging process into three main parts:Body rig, for posing and positioning the characterFacial rig, for expressions and emotionsEar rig, for dynamic ear controlRigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. 
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. 
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.When it came to UV mapping, I divided Stitch into two UDIM tiles:The first UDIM includes the head with ears, torso, arms, and legs.The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (For the claws, I used overlapping UVs to preserve texel density for the other parts)Since the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters.This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. 
When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose. (For the claws, I used overlapping UVs to preserve texel density for the other parts.)

Since the nose is one of the most important details, I allocated the largest UV space to it, which helped me capture its intricate detail. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I knew that the main body texture would be fairly simple, with much of the visual detail coming from the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading such as a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, which demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted, so I decided to push them toward a more realistic look. This involved removing bright colors, adding more variation to the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed, and divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, as in the references, I added a fill layer with only the height channel enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing the opacity to about 35%. This adds volume and greatly improves the overall perception of the model.
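The same AO-multiply trick can be reproduced outside Substance 3D Painter, directly in a Blender material, which is handy for look-dev. Here's a rough node sketch of it; the texture paths are placeholders, and the 35% opacity is expressed by remapping the AO before the multiply.

```python
import bpy

mat = bpy.data.materials.new("StitchBody")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

base = nodes.new("ShaderNodeTexImage")
base.image = bpy.data.images.load("//textures/stitch_basecolor.png")  # placeholder path

ao = nodes.new("ShaderNodeTexImage")
ao.image = bpy.data.images.load("//textures/stitch_ao.png")           # placeholder path
ao.image.colorspace_settings.name = "Non-Color"  # AO is data, not color

# A 35%-opacity multiply is the same as remapping AO into 0.65..1.0
# and multiplying it over the base color.
soften = nodes.new("ShaderNodeMapRange")
soften.inputs["To Min"].default_value = 0.65
soften.inputs["To Max"].default_value = 1.0
links.new(ao.outputs["Color"], soften.inputs["Value"])

darken = nodes.new("ShaderNodeVectorMath")
darken.operation = 'MULTIPLY'
links.new(base.outputs["Color"], darken.inputs[0])
links.new(soften.outputs["Result"], darken.inputs[1])
links.new(darken.outputs["Vector"], bsdf.inputs["Base Color"])
```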
That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers, such as the ears and eyelids, and left only the base layers corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness, and in Substance 3D Painter I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed. In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.

To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.
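To make the setup more concrete, here's a simplified Blender Python sketch of how one of these per-section hair systems could be configured. The vertex group, texture path, and numbers are placeholder values; in practice, I built and tuned everything through the UI.

```python
import bpy

obj = bpy.data.objects["Stitch"]                 # placeholder object name
bpy.context.view_layer.objects.active = obj
bpy.ops.object.particle_system_add()

psys = obj.particle_systems[-1]
psys.name = "fur_head"
psys.vertex_group_density = "fur_head"           # Weight Paint group controls growth

st = psys.settings
st.type = 'HAIR'
st.count = 2000                                  # guide hairs for this section
st.hair_length = 0.02
st.hair_step = 3                                 # guide segments (2 at blocking, up to 5 later)
st.child_type = 'INTERPOLATED'
st.child_nbr = 50                                # children per guide in the viewport
st.clump_factor = 0.4                            # base clumping for this section
st.roughness_2 = 0.05                            # random roughness

# Drive clump strength with a painted grayscale map (brighter = stronger)
tex = bpy.data.textures.new("ClumpMask", type='IMAGE')
tex.image = bpy.data.images.load("//textures/fur_clump_mask.png")
slot = st.texture_slots.add()
slot.texture = tex
slot.texture_coords = 'UV'
slot.use_map_time = False                        # turn off the default influence
slot.use_map_clump = True
```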
The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This added visual depth and made the fur look significantly more natural and lifelike.
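Below is a stripped-down sketch of that idea: a root-to-tip gradient driven by the Hair Info node's Intercept output, with per-strand brightness variation from its Random output. The colors and ranges are placeholders rather than the exact values from my scene.

```python
import bpy

mat = bpy.data.materials.new("StitchFur")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.remove(nodes["Principled BSDF"])
out = nodes["Material Output"]

info = nodes.new("ShaderNodeHairInfo")

# Root-to-tip gradient: Intercept runs from 0 at the root to 1 at the tip
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (0.02, 0.04, 0.12, 1.0)  # dark root
ramp.color_ramp.elements[1].color = (0.10, 0.22, 0.50, 1.0)  # brighter tip
links.new(info.outputs["Intercept"], ramp.inputs["Fac"])

# Per-strand variation: remap Random into a narrow range and use it to
# scale the gradient's brightness so each strand differs slightly
remap = nodes.new("ShaderNodeMapRange")
remap.inputs["To Min"].default_value = 0.85
remap.inputs["To Max"].default_value = 1.05
links.new(info.outputs["Random"], remap.inputs["Value"])

vary = nodes.new("ShaderNodeHueSaturation")
links.new(ramp.outputs["Color"], vary.inputs["Color"])
links.new(remap.outputs["Result"], vary.inputs["Value"])

hair = nodes.new("ShaderNodeBsdfHairPrincipled")
hair.parametrization = 'COLOR'                   # drive the BSDF directly by color
links.new(vary.outputs["Color"], hair.inputs["Color"])
links.new(hair.outputs["BSDF"], out.inputs["Surface"])
```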
Working on the fur took up nearly half of the total time I spent on the entire model, and I'm genuinely happy with the result. This stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many of its technical aspects. For the ears, I set up a relatively simple system of several bones connected with inverse kinematics (IK). This gave me flexible, intuitive control during posing and allowed for dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying them directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect, and to capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust the lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character, so I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at a very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.
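In script form, that lighting rig looks roughly like the sketch below. The positions, energies, colors, and HDRI path are placeholder values to be tuned per shot, and the scene is assumed to already have a World.

```python
import bpy
from math import radians

def add_light(name, kind, energy, location, rotation, color=(1.0, 1.0, 1.0)):
    """Create a light object and link it into the current scene."""
    data = bpy.data.lights.new(name, type=kind)
    data.energy = energy
    data.color = color
    obj = bpy.data.objects.new(name, data)
    obj.location = location
    obj.rotation_euler = [radians(a) for a in rotation]
    bpy.context.scene.collection.objects.link(obj)
    return obj

add_light("Key",  'AREA', 800, (2.5, -2.5, 2.5), (55, 0, 45))    # main shaping light
add_light("Fill", 'AREA', 250, (-2.5, -2.0, 1.5), (70, 0, -50),
          color=(0.9, 0.95, 1.0))                                # softer, cooler fill
add_light("Rim",  'SPOT', 1200, (0.0, 3.0, 2.0), (-60, 0, 180))  # separates the fur from the backdrop

# HDRI at low strength (~0.3), just enough to enrich the ambient light
world = bpy.context.scene.world
world.use_nodes = True
env = world.node_tree.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/studio_small.hdr")      # placeholder path
bg = world.node_tree.nodes["Background"]
bg.inputs["Strength"].default_value = 0.3
world.node_tree.links.new(env.outputs["Color"], bg.inputs["Color"])
```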
Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur: rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: a slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed; the goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself, to reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me; it's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!).

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine