• Genshin Impact Version Luna I, Song of the Welkin Moon: Segue launches September 10

    Greetings, Travelers! We’re excited to announce that Genshin Impact Version Luna I, Song of the Welkin Moon: Segue, will be released on September 10. This update, which marks the beginning of the year-long Song of the Welkin Moon version series, will also introduce a major new region, Nod-Krai.

    The new version series will tie together past story clues and characters, answer some long-standing questions, and reveal more about the secrets of the moon. The stage for these events is Nod-Krai, a new self-governing region at the edge of Teyvat. Blessed by pure moonlight, Nod-Krai is a land rich with a unique and powerful energy which has attracted various individuals and factions, all with their own goals and ambitions which often lead them to conflict. However, it is also this very same energy that will be crucial to your journey, guiding you as you explore the region and solve its mysteries.


    We invite all Travelers who have finished the Mondstadt Archon Quest “Prologue: Act III – Song of the Dragon and Freedom” to enter Nod-Krai. You can access the region via a Teleport Waypoint at its entrance. If you have also completed “Chapter I Act III – A New Star Approaches” in Liyue, you will be able to start the new Archon Quests in Nod-Krai immediately.

    Gather under the moonlight

    Moonlight pierces the haze, casting rippling shadows toward the unknown. To uncover a new Fatui scheme related to the “Ancient Moon’s Remnants,” Travelers must set sail from Natlan, heading north to the self-governing region of Nod-Krai located at the southernmost part of Snezhnaya. Nod-Krai is a land where an ancient elemental energy called Kuuvahki can be found. This unique power influences the lives of its people, but it also draws factions with competing goals, leading to constant conflict. Walk the paths of this land, Traveler, and see with your own eyes how these factions seek and harness this potent energy.

    In Version Luna I, Travelers can explore three distinct island areas within Nod-Krai. Your journey begins in Nasha Town, where the residents have woven the ancient energy of Kuuvahki into their everyday lives. Here, street signs hum with a gentle energy, and clever devices retain the traces of ancient power. Head west, and you will find the Clink-Clank Krumkake Craftshop, home to the whimsical inventors Aino and Ineffa. Continue westward and you will find yourself at Hiisi Island, where you will enter the lands of the Frostmoon Scions. The devotees here, who brook no blasphemy towards the moon, worship the moon goddess Kuutar while fiercely guarding the purity of their faith. Yet, in the shadows, unseen forces are stirring up a crisis.

    Further north along the coast, the Fatui’s Kuuvahki Experimental Design Bureau stands over ancient ruins. The Fatui have come here to build war engines fueled by Kuuvahki. You can choose to face them head-on or infiltrate quietly to uncover their secrets.

    As you explore Nod-Krai, you may also encounter the Wild Hunt, a dangerous force emerging from the Abyss that leaves a trail of destruction in its wake. Standing against this darkness are the Lightkeepers, an ancient order that defends the freedom and peace of the land. They carry special lanterns that change color to warn of the kind of danger that is coming.


    Meanwhile, Columbina, who once served as the Fatui’s Damselette, has left them and is now lingering in her homeland of Nod-Krai. Here, she is revered as the Moon Maiden. Interestingly, the statues of the Idol of the New Moon found throughout Nod-Krai bear a striking resemblance to Columbina’s appearance before she joined the Fatui, hinting at her past identity. Now, she has her very own haven, the Sanctuary of the Silver Moon Court, located near Hiisi Island. Walk there by the moonlight, fulfill her requests, and you may earn her assistance on your journey.

    In the new Archon Quest, the Traveler will assist Lauma, the Moonchanter of the Frostmoon Scions. Join her to confront the Fatui and uncover the secrets behind their faith in the moon goddess. You will also see for yourself the unwavering resolve of the Lightkeepers as you fight alongside Flins, a warrior of their order, to defend the land. This adventure will introduce you to new allies, such as the merchants of Nasha Town, and present the menacing debut of the Fatui Harbinger Sandrone, the Marionette.

    Fight alongside new allies and unleash the power of the Moon

    In Nod-Krai, certain characters blessed by the moon can channel the ancient energy of Kuuvahki in combat through their Moon Wheels. This unique ability triggers special Lunar reactions. Both Lauma and Flins are among the characters who have this ability.

    As a Moonchanter of the Frostmoon Scions, Lauma listens to her people, soothes their worries, and safeguards their covenant with the land. Gentle, wise, and deeply connected to nature, she has earned the trust of her companions. Even the small creatures influenced by Kuuvahki seem to share this trust, approaching and interacting with her whenever she is near. Lauma can even transform into a cervitaur, allowing her to dash swiftly across the land.

    In battle, this 5-star Dendro Catalyst user can give her team a decisive edge with a new Elemental Reaction: Lunar-Bloom. Once the Bloom reaction is triggered with Lauma on the team, she can convert it into a Lunar-Bloom reaction. This creates a valuable combat resource called Verdant Dew, which Lauma can then use to deal significant Lunar-Bloom damage with the potential for CRIT Hits. In combat, Lauma can periodically deal AoE Dendro DMG and further support the team by lowering enemies’ Dendro and Hydro RES. Her combat style is focused on collecting Verdant Dew from Lunar-Bloom reactions and converting it into more Lunar-Bloom damage.

    Flins is a 5-star Electro Polearm user known for his noble and aristocratic bearing. This veteran Lightkeeper, who has long dedicated himself to defending Nod-Krai’s freedom and peace, can even hear the whispers of the Wild Hunt while exploring.

    In battle, Flins deals powerful Lunar-Charged DMG and can adjust his strategy on the fly. For situations that require multiple instances of Lunar-Charged DMG in a short period, he can use his standard Elemental Burst. For fast-paced combat, Flins can use his Elemental Skill to enter another state, which lets him unleash a special Elemental Burst that costs less Energy and can be used more frequently.

    In addition, Aino, the Whiz-Kid Mechanic, is a 4-star Hydro Claymore user who can be invited to the Traveler’s party for free via the new Archon Quest. In combat, Aino attacks enemies with Hydro DMG.

    Beyond their roles in battle, these three new friends have their own day-to-day lives to live, and small, everyday joys to savor. With a new feature called Meeting Points, you can chat with them, help build these points throughout Nod-Krai, and unlock more of their stories by progressing through certain events. Visit the Clink-Clank Krumkake Craftshop where Aino and Ineffa are always hard at work, or step into the Frostmoon Enclave to read the legends among the Frostmoon Scions together with Lauma. Or, stop by Flins’s home to listen to the long-guarded tales of the Lightkeepers.

    In the first half of Version Luna I, Lauma will make her debut alongside a rerun for Nahida. The Chronicled Wish will also feature the return of several Sumeru characters. In the second half of the update, Flins, Yelan, and Aino will join the Event Wishes.

    Wander through the Moon-blessed land

    As you gaze at the sky, watching its blue deepen to black as the pale moon rises, you may wonder why this ancient energy attracts so many to its power. Traveler, when you enter Nod-Krai, you will find traces of Kuuvahki everywhere throughout the land.

    As you walk through Nod-Krai, you will notice strange objects shimmering with the power of Kuuvahki, typically glowing with either a red or blue unipolar field. These fields behave in a very curious way: two of the same color will push each other away, while different colors are drawn together. Step close enough to one of these fields, and you may enter an empowered state. In this state, you can solve puzzles, collect items, and even gain an advantage in combat. Sometimes, standing near a special field will even allow you to perform a special jump, revealing paths that would otherwise be out of reach.

    Nod-Krai is home to a special elemental creature with various forms known as the Kuuhenki. When you are near certain plants, like the Moonshine Violet, you can gain Kuuvahki energy, which allows you to harness the power of the Kuuhenki and move freely. In this state, you can drift near the mist patches in the area. Patches of a different color will draw you in, while patches of the same color will push you away.

    As the Kuuhenki glides through the air, it leaves behind bright trails called Moonlanes. Hitch a ride on these lanes to move quickly, switch paths, and interact with nearby objects.

    Deal wisely with the enemy

    The dangers and challenges of Nod-Krai are a constant reminder to the Traveler to act with caution, yet one should also be brave enough to face danger head-on. Across the region, you will encounter enemies that demand careful handling, as many are influenced by the ancient energy of Kuuvahki. For example, when you face the cave-dwelling boss, the Radiant Moonfly, you can temporarily boost your abilities by restoring your HP to its maximum.

    Not all challenges involve fighting belligerent foes. You can hone your battle skills using Knuckle Duckle, a duck-shaped combat machine built by Aino. Effects like Electro-Charged or Lunar-Charged attacks can provide you with a combat efficiency boost. Interestingly, when Aino or Ineffa interacts with the machine, extra dialogue options can be triggered. You might even obtain special items that change the machine’s appearance.

    What’s more, there will be more seasonal events and optimizations to bring you bountiful rewards and make your adventure easier. In celebration of the fifth anniversary of Genshin Impact, we have prepared various gifts to thank our Travelers, including 10 Intertwined Fates, 1,600 Primogems, two exclusive gadgets, and a free 5-star standard banner character of your choice.

    Know that when you see the signs of the new moon, the story of Version Luna I is about to be unveiled. Let us set out for the new region, Nod-Krai, as we turn a new page in the story, and bask in its cold moonlight.
    #genshin #impact #version #luna #song
    Genshin Impact Version Luna I, Song of the Welkin Moon: Segue launches September 10
    Greetings, Travelers! We’re excited to announce that Genshin Impact Version Luna I, Song of the Welkin Moon: Segue, will be released on September 10. This update, which marks the beginning of the year-long Song of the Welkin Moon version series, will also introduce a major new region, Nod-Krai. The new version series will tie together past story clues and characters, answer some long-standing questions, and reveal more about the secrets of the moon. The stage for these events is Nod-Krai, a new self-governing region at the edge of Teyvat. Blessed by pure moonlight, Nod-Krai is a land rich with a unique and powerful energy which has attracted various individuals and factions, all with their own goals and ambitions which often lead them to conflict. However, it is also this very same energy that will be crucial to your journey, guiding you as you explore the region and solve its mysteries. View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image We invite all Travelers who have finished the Mondstadt Archon Quest “Prologue: Act III – Song of the Dragon and Freedom” to enter Nod-Krai. You can access the region via a Teleport Waypoint at its entrance. If you have also completed “Chapter I Act III – A New Star Approaches” in Liyue, you will be able to start the new Archon Quests in Nod-Krai immediately. Gather under the moonlight Moonlight pierces the haze, casting rippling shadows toward the unknown. To uncover a new Fatui scheme related to the “Ancient Moon’s Remnants,” Travelers must set sail from Natlan, heading north to the self-governing region of Nod-Krai located at the southernmost part of Snezhnaya. Nod-Krai is a land where an ancient elemental energy called Kuuvahki can be found. 
This unique power influences the lives of its people, but it also draws factions with competing goals, leading to constant conflict. Walk the paths of this land, Traveler, and see with your own eyes how these factions seek and harness this potent energy. In Version Luna I, Travelers can explore three distinct island areas within Nod-Krai. Your journey begins in Nasha Town, where the residents have woven the ancient energy of Kuuvahki into their everyday lives. Here, street signs hum with a gentle energy, and clever devices retain the traces of ancient power. Head west, and you will find the Clink-Clank Krumkake Craftshop, home to the whimsical inventors Aino and Ineffa. Continue westward and you will find yourself at Hiisi Island, where you will enter the lands of the Frostmoon Scions. The devotees here, who brook no blasphemy towards the moon, worship the moon goddess Kuutar while fiercely guarding the purity of their faith. Yet, in the shadows, unseen forces are stirring up a crisis. Further north along the coast, the Fatui’s Kuuvahki Experimental Design Bureau stands over ancient ruins. The Fatui have come here to build war engines fueled by Kuuvahki. You can choose to face them head-on or infiltrate quietly to uncover their secrets. As you explore Nod-Krai, you may also encounter the Wild Hunt, a dangerous force emerging from the Abyss that leaves a trail of destruction in its wake. Standing against this darkness are the Lightkeepers, an ancient order that defends the freedom and peace of the land. They carry special lanterns that change color to warn of the kind of danger that is coming. 
View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image Meanwhile, Columbina, who once served as the Fatui’s Damselette, has left them and is now lingering in her homeland of Nod-Krai. Here, she is revered as the Moon Maiden. Interestingly, the statues of the Idol of the New Moon found throughout Nod-Krai bear a striking resemblance to Columbina’s appearance before she joined the Fatui, hinting at her past identity. Now, she has her very own haven, the Sanctuary of the Silver Moon Court, located near Hiisi Island. Walk there by the moonlight, fulfill her requests, and you may earn her assistance on your journey. In the new Archon Quest, the Traveler will assist Lauma, the Moonchanter of the Frostmoon Scions. Join her to confront the Fatui and uncover the secrets behind their faith in the moon goddess. You will also see for yourself the unwavering resolve of the Lightkeepers as you fight alongside Flins, a warrior of their order, to defend the land. This adventure will also introduce you to new allies, such as the merchants of Nasha Town, while also presenting a menacing debut of the Fatui Harbinger Sandrone, the Marionette. Fight alongside new allies and unleash the power of the Moon In Nod-Krai, certain characters blessed by the moon can channel the ancient energy of Kuuvahki in combat through their Moon Wheels. This unique ability triggers special Lunar reactions. Both Lauma and Flins are among the characters who have this ability. As a Moonchanter of the Frostmoon Scions, Lauma listens to her people, soothes their worries, and safeguards their covenant with the land. Gentle, wise, and deeply connected to nature, she has earned the trust of her companions. 
Even the small creatures influenced by Kuuvahki seem to share this trust, approaching and interacting with her whenever she is near. Lauma can even transform into a cervitaur, allowing her to dash swiftly across the land. In battle, this 5-star Dendro Catalyst user can give her team a decisive edge with a new Elemental Reaction: Lunar-Bloom. Once the Bloom reaction is triggered with Lauma on the team, she can convert it into a Lunar-Bloom reaction. This creates a valuable combat resource called Verdant Dew, which Lauma can then use to deal significant Lunar-Bloom damage with the potential for CRIT Hits. In combat, Lauma can periodically deal AoE Dendro DMG and further support the team by lowering enemies’ Dendro and Hydro RES. Her combat style is focused on collecting Verdant Dew from Lunar-Bloom reactions and converting it into more Lunar-Bloom damage. Flins is a 5-star Electro Polearm user known for his noble and aristocratic bearing. This veteran Lightkeeper who has long dedicated himself to defending Nod-Krai’s freedom and peace can even hear the whispers of the Wild Hunt while on exploration. In battle, Flins deals powerful Lunar-Charged DMG and can adjust his strategy on the fly. For situations that require multiple instances of Lunar-Charged DMG in a short period, he can use his standard Elemental Burst. In fast-paced, high-tempo combat, Flins can enter another state with his Elemental Skill which allows him to unleash a special Elemental Burst that costs less energy on a more frequent basis. In addition, Aino, the Whiz-Kid Mechanic, is a 4-star Hydro Claymore user who can be invited to the Traveler’s party for free via the new Archon Quest. In combat, Aino can also attack enemies with Hydro DMG. Beyond their roles in battle, these three new friends have their own day-to-day lives to live, and small, everyday joys to savor. 
With a new feature called Meeting Points, you can chat with them, help build these points throughout Nod-Krai, and unlock more of their stories by progressing through certain events. Visit the Clink-Clank Krumkake Craftshop where Aino and Ineffa are always hard at work, or step into the Frostmoon Enclave to read the legends among the Frostmoon Scions together with Lauma. Or, stop by Flins’s home to listen to the long-guarded tales of the Lightkeepers. In the first half of Version Luna I, Lauma will make her debut alongside a rerun for Nahida. The Chronicled Wish will also feature the return of several Sumeru characters. In the second half of the update, Flins, Yelan, and Aino will join the Event Wishes. Wander through the Moon-blessed land As you gaze at the sky as blue turns to black and the pale moon rises, you may wonder why this ancient energy attracts so many to its power. Traveler, when you enter Nod-Krai, you will find traces of Kuuvahki everywhere throughout the land. As you walk through Nod-Krai, you will notice strange objects shimmering with the power of Kuuvahki, typically glowing with either a red or blue unipolar field. These fields behave in a very curious way: two of the same color will push each other away, while different colors are drawn together. Step close enough to one of these fields, and you may enter an empowered state. In this state, you can solve puzzles, collect items, and even gain an advantage in combat. Sometimes, standing near a special field will even allow you to perform a special jump, revealing paths that would otherwise be out of reach. Nod-Krai is home to a special elemental creature with various forms known as the Kuuhenki. When you are near certain plants, like the Moonshine Violet, you can gain Kuuvahki energy, which allows you to harness the power of the Kuuhenki and move freely. In this state, you can drift near the mist patches in the area. 
Patches of a different color will draw you in, while patches of the same color will push you away. As the Kuuhenki glides through the air, it leaves behind bright trails called Moonlanes. Hitch a ride on these lanes to move quickly, switch paths, and interact with nearby objects. Deal wisely with the enemy The dangers and challenges of Nod-Krai are a constant reminder to the Traveler to act with caution. However, one should also be brave enough to face danger head-on at the same time. In the region’s other areas, you will encounter enemies that demand careful handling, as many are influenced by the ancient energy of Kuuvahki. For example, when you face the cave-dwelling boss, the Radiant Moonfly, you can temporarily boost your abilities by healing your HP to its maximum. Not all challenges involve fighting belligerent foes. You can hone your battle skills using Knuckle Duckle, a duck-shaped combat machine built by Aino. Effects like Electro-Charged or Lunar-Charged attacks can provide you with a combat efficiency boost. Interestingly, when Aino or Ineffa interacts with the machine, it can trigger extra dialogue options. You might even obtain special items that change their appearance. What’s more, there will also be more seasonal events and optimizations which will bring you bountiful rewards and make your adventure an easier one. In celebration of the fifth anniversary of Genshin Impact, we have prepared various gifts to thank our Travelers, including 10 Intertwined Fates, 1,600 Primogems, two exclusive gadgets, and a free 5-star standard banner character of your choice. Know that when you see the signs of the new moon, the story of Version Luna I is about to be unveiled. Let us set out for the new region, Nod-Krai, as we turn a new page in the story, and bask in its cold moonlight. #genshin #impact #version #luna #song
    Genshin Impact Version Luna I, Song of the Welkin Moon: Segue launches September 10
    blog.playstation.com
    Greetings, Travelers! We’re excited to announce that Genshin Impact Version Luna I, Song of the Welkin Moon: Segue, will be released on September 10. This update, which marks the beginning of the year-long Song of the Welkin Moon version series, will also introduce a major new region, Nod-Krai. The new version series will tie together past story clues and characters, answer some long-standing questions, and reveal more about the secrets of the moon. The stage for these events is Nod-Krai, a new self-governing region at the edge of Teyvat. Blessed by pure moonlight, Nod-Krai is a land rich with a unique and powerful energy which has attracted various individuals and factions, all with their own goals and ambitions which often lead them to conflict. However, it is also this very same energy that will be crucial to your journey, guiding you as you explore the region and solve its mysteries. View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image We invite all Travelers who have finished the Mondstadt Archon Quest “Prologue: Act III – Song of the Dragon and Freedom” to enter Nod-Krai. You can access the region via a Teleport Waypoint at its entrance. If you have also completed “Chapter I Act III – A New Star Approaches” in Liyue, you will be able to start the new Archon Quests in Nod-Krai immediately. Gather under the moonlight Moonlight pierces the haze, casting rippling shadows toward the unknown. To uncover a new Fatui scheme related to the “Ancient Moon’s Remnants,” Travelers must set sail from Natlan, heading north to the self-governing region of Nod-Krai located at the southernmost part of Snezhnaya. Nod-Krai is a land where an ancient elemental energy called Kuuvahki can be found. 
This unique power influences the lives of its people, but it also draws factions with competing goals, leading to constant conflict. Walk the paths of this land, Traveler, and see with your own eyes how these factions seek and harness this potent energy. In Version Luna I, Travelers can explore three distinct island areas within Nod-Krai. Your journey begins in Nasha Town, where the residents have woven the ancient energy of Kuuvahki into their everyday lives. Here, street signs hum with a gentle energy, and clever devices retain the traces of ancient power. Head west, and you will find the Clink-Clank Krumkake Craftshop, home to the whimsical inventors Aino and Ineffa. Continue westward and you will find yourself at Hiisi Island, where you will enter the lands of the Frostmoon Scions. The devotees here, who brook no blasphemy towards the moon, worship the moon goddess Kuutar while fiercely guarding the purity of their faith. Yet, in the shadows, unseen forces are stirring up a crisis. Further north along the coast, the Fatui’s Kuuvahki Experimental Design Bureau stands over ancient ruins. The Fatui have come here to build war engines fueled by Kuuvahki. You can choose to face them head-on or infiltrate quietly to uncover their secrets. As you explore Nod-Krai, you may also encounter the Wild Hunt, a dangerous force emerging from the Abyss that leaves a trail of destruction in its wake. Standing against this darkness are the Lightkeepers, an ancient order that defends the freedom and peace of the land. They carry special lanterns that change color to warn of the kind of danger that is coming. 
View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image View and download image Download the image close Close Download this image Meanwhile, Columbina, who once served as the Fatui’s Damselette, has left them and is now lingering in her homeland of Nod-Krai. Here, she is revered as the Moon Maiden. Interestingly, the statues of the Idol of the New Moon found throughout Nod-Krai bear a striking resemblance to Columbina’s appearance before she joined the Fatui, hinting at her past identity. Now, she has her very own haven, the Sanctuary of the Silver Moon Court, located near Hiisi Island. Walk there by the moonlight, fulfill her requests, and you may earn her assistance on your journey. In the new Archon Quest, the Traveler will assist Lauma, the Moonchanter of the Frostmoon Scions. Join her to confront the Fatui and uncover the secrets behind their faith in the moon goddess. You will also see for yourself the unwavering resolve of the Lightkeepers as you fight alongside Flins, a warrior of their order, to defend the land. This adventure will also introduce you to new allies, such as the merchants of Nasha Town, while also presenting a menacing debut of the Fatui Harbinger Sandrone, the Marionette. Fight alongside new allies and unleash the power of the Moon In Nod-Krai, certain characters blessed by the moon can channel the ancient energy of Kuuvahki in combat through their Moon Wheels. This unique ability triggers special Lunar reactions. Both Lauma and Flins are among the characters who have this ability. As a Moonchanter of the Frostmoon Scions, Lauma listens to her people, soothes their worries, and safeguards their covenant with the land. Gentle, wise, and deeply connected to nature, she has earned the trust of her companions. 
Even the small creatures influenced by Kuuvahki seem to share this trust, approaching and interacting with her whenever she is near. Lauma can even transform into a cervitaur, allowing her to dash swiftly across the land. In battle, this 5-star Dendro Catalyst user can give her team a decisive edge with a new Elemental Reaction: Lunar-Bloom. Once the Bloom reaction is triggered with Lauma on the team, she can convert it into a Lunar-Bloom reaction. This creates a valuable combat resource called Verdant Dew, which Lauma can then spend to deal significant Lunar-Bloom DMG with the potential for CRIT Hits. In combat, Lauma can periodically deal AoE Dendro DMG and further support the team by lowering enemies’ Dendro and Hydro RES. Her playstyle centers on collecting Verdant Dew from Lunar-Bloom reactions and converting it into even more Lunar-Bloom damage.

Flins is a 5-star Electro Polearm user known for his noble, aristocratic bearing. A veteran Lightkeeper who has long dedicated himself to defending Nod-Krai’s freedom and peace, he can even hear the whispers of the Wild Hunt while exploring. In battle, Flins deals powerful Lunar-Charged DMG and can adjust his strategy on the fly. For situations that call for multiple instances of Lunar-Charged DMG in a short period, he can use his standard Elemental Burst. In fast-paced, high-tempo combat, Flins can use his Elemental Skill to enter another state, which lets him unleash a special, lower-cost Elemental Burst on a more frequent basis.

In addition, Aino, the Whiz-Kid Mechanic, is a 4-star Hydro Claymore user who can be invited to the Traveler’s party for free via the new Archon Quest. In combat, Aino can also attack enemies with Hydro DMG.

Beyond their roles in battle, these three new friends have their own day-to-day lives to live, and small, everyday joys to savor. With a new feature called Meeting Points, you can chat with them, help build these points throughout Nod-Krai, and unlock more of their stories by progressing through certain events. Visit the Clink-Clank Krumkake Craftshop, where Aino and Ineffa are always hard at work; step into the Frostmoon Enclave to read the legends among the Frostmoon Scions together with Lauma; or stop by Flins’s home to listen to the long-guarded tales of the Lightkeepers.

In the first half of Version Luna I, Lauma will make her debut alongside a rerun for Nahida, and the Chronicled Wish will feature the return of several Sumeru characters. In the second half of the update, Flins, Yelan, and Aino will join the Event Wishes.

Wander through the Moon-blessed land

As you watch the sky turn from blue to black and the pale moon rise, you may wonder why this ancient energy draws so many to its power. Traveler, when you enter Nod-Krai, you will find traces of Kuuvahki everywhere throughout the land. As you walk through the region, you will notice strange objects shimmering with the power of Kuuvahki, typically glowing with either a red or blue unipolar field. These fields behave in a very curious way: two of the same color will push each other away, while different colors are drawn together. Step close enough to one of these fields, and you may enter an empowered state in which you can solve puzzles, collect items, and even gain an advantage in combat. Sometimes, standing near a special field will even allow you to perform a special jump, revealing paths that would otherwise be out of reach.

Nod-Krai is also home to a special elemental creature with various forms known as the Kuuhenki. When you are near certain plants, like the Moonshine Violet, you can gain Kuuvahki energy, which allows you to harness the power of the Kuuhenki and move freely. In this state, you can drift near the mist patches in the area: patches of a different color will draw you in, while patches of the same color will push you away. As the Kuuhenki glides through the air, it leaves behind bright trails called Moonlanes. Hitch a ride on these lanes to move quickly, switch paths, and interact with nearby objects.

Deal wisely with the enemy

The dangers and challenges of Nod-Krai are a constant reminder to the Traveler to act with caution; at the same time, one must also be brave enough to face danger head-on. Throughout the region, you will encounter enemies that demand careful handling, as many are influenced by the ancient energy of Kuuvahki. For example, when you face the cave-dwelling boss, the Radiant Moonfly, you can temporarily boost your abilities by healing your HP to its maximum.

Not all challenges involve fighting belligerent foes. You can hone your battle skills against Knuckle Duckle, a duck-shaped combat machine built by Aino. Effects like Electro-Charged or Lunar-Charged attacks can provide you with a combat efficiency boost. Interestingly, when Aino or Ineffa interacts with the machine, it can trigger extra dialogue options, and you might even obtain special items that change its appearance.

What’s more, there will also be more seasonal events and optimizations, bringing you bountiful rewards and making your adventure an easier one. In celebration of the fifth anniversary of Genshin Impact, we have prepared various gifts to thank our Travelers, including 10 Intertwined Fates, 1,600 Primogems, two exclusive gadgets, and a free 5-star standard banner character of your choice.

Know that when you see the signs of the new moon, the story of Version Luna I is about to be unveiled. Let us set out for the new region, Nod-Krai, as we turn a new page in the story and bask in its cold moonlight.
  • "Equating AI with cheating does a disservice to teaching and traps everyone in a hopeless role-play"

    How can the education system still hold the wheel against the tidal wave of generative natural-language artificial intelligences (AI), or LLMs, like Le Chat or ChatGPT? When the GPT-5.0 version, unveiled on August 7, sports a conversation mode called « Prof pédago » that smells like a euphemism for "let me do your homework"? The priority, it seems to me, is to free these LLMs from the regime of transgression that our pupils and students associate with them, and which our alarmist rhetoric unfortunately reinforces. It stems from the pirate imaginary still latent in the (neurologically) thrilling offer of what the Internet has become (pornography on demand, unmoderated social networks and videos, and the dark net as an uncontrolled netherworld). In adolescence, this is obviously an irresistible offer of impulse-driven defiance aimed at the regulated society of adults. Equating generative AI with cheating thus does a disservice to teaching and traps everyone in a hopeless role-play.

    So yes, the dazzling simplicity of this AI-enabled cheating is staggering to those who once relied on paper crib sheets or formulas hidden in a calculator: shown a mere photo of an assignment, these LLMs respond within a second with a complete model answer! From middle school to the entrance exams of the grandes écoles, a growing number of confiscations reveals, as with drugs, how far the practice has spread. ChatGPT, for its part, shamelessly keeps the exchange going with a dealer's patter: "Would you like a complete PDF document to show your teacher?"

    Read also | Article reserved for subscribers: "AI must not become a pretext for abandoning writing"

    This normalization can only come through official integration, whose terms, along with a clear usage charter, remain to be defined. We will thus have to rethink assessment methods (disqualifying nearly all take-home assignments), place even greater value on oral work in class, and encourage a return to genuine, hands-on tutorial sessions. Above all, it seems to me, and despite our own reservations, we must embed this tool in an academic approach, that is, in our courses. Not in the shameful mode of a shared cigarette, and not solely through a critical analysis that would look too much like bad-faith denigration of the tool (its "hallucinations," the machine's inventions of fake texts for instance, are admittedly an easy and gratifying target), but in order to strip it of this transgressive dimension. Which in no way implies bowing to a pedagogical deity that AI is not.

    48.51% of this article remains to be read. The rest is reserved for subscribers.
    www.lemonde.fr
  • "AI must not become a pretext for abandoning writing"

    Like many of my university colleagues, I ended the year with mixed feelings. On the one hand, the joy of transmitting knowledge, of watching students take ownership of demanding readings and catch fire in discussions that reach beyond the scope of the course. On the other, the growing difficulty of grading written assignments when I can no longer tell whether they are the fruit of personal work or of an artificial intelligence (AI). In May I received, in unprecedented proportions, papers whose smooth, impersonal style and surprising command of certain specialized references betrayed the use of AI. I also heard, in interviews, students unable to summarize in their own words what they had just written for me. But at the same time I saw more promising uses: some had used the software to generate a counter-argument or to test the solidity of their own ideas. That is why an outright ban on AI seems to me neither possible nor desirable.

    Every major technical innovation has triggered a kind of moral panic. In antiquity, Plato saw writing as a threat to memorization. In the 1930s, radio was accused of stupefying the masses. In the 1960s, television became the symbol of cultural decline. The Internet was painted as a space of disinformation, and Wikipedia as the end of scholarly expertise. Each time, these fears proved exaggerated: these technologies did not bring about the predicted decadence. AI belongs to this lineage.

    Partner rather than substitute

    The problem is not that students are "cheating" but that our assessment methods are thereby undermined. If a standard essay can be produced in a few seconds, that is perhaps a sign that this format is no longer adequate for measuring the intellectual effort expected. In my courses I have seen this ambivalence. When analyzing a political speech or a scientific text, some students turn to AI to produce a summary. Taken on its own, that summary is always impeccable, yet often flat and featureless. By contrast, when students use it as a starting point for discussing the text and identifying its limits, the result is stimulating. AI then becomes a

    59.14% of this article remains to be read. The rest is reserved for subscribers.
    www.lemonde.fr
  • Nintendo's stinginess on Switch 2 dev kits, layoffs at Crystal Dynamics, and Diablo developers unionize - Patch Notes #20

    Welcome back to another edition of Patch Notes, here on my second (and final) sub-in for its usual author, senior news editor Chris Kerr. This week we had more layoff news, unfortunately, this time hitting studios like Embracer-owned Crystal Dynamics and Rec Room. Crystal Dynamics, in particular, is suffering its third round of layoffs since being snatched up by Embracer in 2022. Elsewhere, we have some news on Switch 2 development kits, union efforts for many folks working on the Diablo franchise, Atari snatching up a couple of interesting older Ubisoft properties, and we ponder the creative core of weirdo indie social games.

    Report: Nintendo reportedly withholds Switch 2 dev kits, directs devs toward OG Switch development instead
    Via Game Developer/Digital Foundry // Freshly returned from Gamescom, Digital Foundry senior staff writer and video editor John Linneman spoke with "a lot of developers" who said they're unable to get Switch 2 development kits. The demand is there: plenty of teams want to be crafting games for the Switch 2, but they are reportedly being told to ship for the original Switch and rely on the new console's backward compatibility features.

    Atari acquires Ubisoft IP, including Child of Eden, Grow Home, Cold Fear
    Via This Week in Video Games // Atari snatched up a number of intriguing Ubisoft properties this week, including the how-on-earth-was-this-a-whole-decade-ago climber Grow Home and its sequel Grow Up, Child of Eden, Cold Fear, and I Am Alive. In the post, Atari VP of new business Deborah Papiernik said: “Atari has a rich gaming legacy and deep appreciation for these classic titles... We’re excited to see how they’ll evolve and connect with players in fresh, meaningful ways.”

    Embracer studio Crystal Dynamics is laying off more staff
    Via Game Developer // As noted above, news of yet another round of layoffs at Tomb Raider studio Crystal Dynamics came in hot this week. Game Developer contributing news editor Diego Arguello writes: "This is the third round of layoffs since Embracer Group acquired the studio in 2022." As Embracer issued an absurd number of mass layoffs, project cancellations, and divestments across its portfolio, Crystal Dynamics conducted job cuts in 2023 and early this year.

    450 Diablo developers vote to unionize under CWA
    Via Game Developer // Senior editor Bryant Francis covered the news today with one of my favorite opening lines in Game Developer news history: "The forces of hell are unionizing. Today the Communication Workers of America has announced that over 450 game developers at Blizzard Entertainment who work on the Diablo series have voted to unionize under the CWA."

    What developers can learn from the indie social co-op games topping the Steam charts
    Via Game Developer // In this interview piece from news columnist Nicole Carpenter, the core appeal and breakout success of "weirdo social games" (or, if you please, "friendslop") is examined in great detail by some of the folks working in the genre today.
    www.gamedeveloper.com
  • NetEase shuts down Rich Vogel-led T-Minus Zero Entertainment

    Game Developer can confirm that Chinese publisher, developer, and studio investor NetEase has shut down T-Minus Zero Entertainment, the game studio founded by BioWare alumnus Rich Vogel in 2023.

    Vogel initially posted on the company's LinkedIn page about NetEase's decision to cease its partnership with T-Minus Zero. "We deeply appreciate NetEase for providing us with both ample runway and support - from helping us find potential investors to giving us the time and budget to develop our game into a fully playable hands-on demo. It has generated a lot of interest." Days later, he told Game Developer that NetEase has shut down T-Minus Zero.

    T-Minus Zero had been working on a "third-person online multiplayer action game set in a sci-fi universe," according to the company's founding announcement. The project appears to have been well liked by high-level NetEase employees. Former NetEase president of global investments and partnerships Simon Zhu commented on the company's post, stating that the game "delivers [the] great fantasy of fighting against 15th floor kaiju to protect the city you care about." Meanwhile, NetEase head of brand/publishing for North America & Europe Cisco Maldonado called it a "super great concept and [in my opinion] a solid market fit" in a post on Vogel's page.

    A spokesperson for NetEase initially told Game Developer that the company is "actively working with the studio [to] find a new publishing home." They added that the company "cannot confirm any layoffs," and that it was "working with the full studio in terms of this transition and future publishing plans." The spokesperson offered a follow-up comment after we queried the company about Vogel's statement that NetEase is shutting down T-Minus Zero. They stated that NetEase has made the "difficult decision" to discontinue funding for the company. "This decision was made with careful consideration, as we have been inspired by our partnership with the studio and their bold vision," they said. "However, we have had to reassess our business priorities and are now working closely with the studio to provide support and explore next steps."

    NetEase is reversing course on millions of dollars worth of studio investments
    NetEase has spent 2025 unwinding a number of international investments in studios like SkyBox Labs, Ouka Studios, and Jar of Sparks, while also laying off US-based developers working on live-service megahit Marvel Rivals. This hasn't been a complete withdrawal: Rebel Wolves and Anchor Point made statements saying they were not affected by a business pivot reported on by Bloomberg News.

    Update 8/29: This story has been updated with additional comment from NetEase.
    www.gamedeveloper.com
  • EA SPORTS FC™ 26 Expands Global Footprint With New Partners, Leagues, and Stadiums In-Game

    August 28, 2025

    Powered by authenticity and community feedback, EA SPORTS FC 26 delivers the world’s game like never before — The Club is Yours.

    PRE-ORDER EA SPORTS FC 26 TODAY
    REDWOOD CITY, Calif.--(BUSINESS WIRE)-- Today, Electronic Arts Inc. (NASDAQ: EA) confirmed a series of new and exclusive multi-year partnerships and in-game integrations for EA SPORTS FC™ 26, delivering 20,000+ athletes across 750+ clubs and national teams, 120+ stadiums, and 35+ leagues — powered by more than 300 global football partners. Spanning both men’s and women’s football, highlights include a new marketing partnership with the English FA, featuring a renewed license with the reigning UEFA Women's EURO 2025 champions and the men’s national team, as well as a new partnership with one of the world’s most iconic clubs, FC Bayern.

    Allianz Arena in EA SPORTS FC 26

    “Everything we’ve added to FC 26 has been shaped by the voices of our community,” said James Taylor, Director of Football Partnerships at EA SPORTS FC. “From the return of iconic stadiums and clubs fans have called for, to expanding representation in women’s football at the highest level — every new addition reflects our commitment to authenticity and to celebrating the incredible diversity of the world’s game.”

    EA SPORTS FC 26 is powered by authenticity and shaped by the voices of the community, with the return of several iconic stadiums fans have called for, including the Stadio Diego Armando Maradona, giving Neapolitans and fans around the world a long-awaited addition. Also joining the lineup are two more fan favourites: Allianz Arena, home of FC Bayern, and Tüpraş Stadyumu, the stadium of Beşiktaş.

    Michael Diederich, Executive Vice Chairman of FC Bayern, said: "Passion, commitment and emotion are the hallmarks of football, both on the pitch and in the virtual world - and the gaming community is growing with every move. We at FC Bayern are all excited to share these attributes with our new partner EA SPORTS FC. Together, we want to further expand our interaction with the next generation of football fans: on the pitch and on the game console."

    These updates reflect EA SPORTS FC’s ongoing commitment to elevating both men’s and women’s football. Renewed licensing agreements include the Barclays Women’s Super League, Arkema Première Ligue, UEFA Women’s Champions League, and UEFA Women’s EURO — alongside continued partnerships with top clubs such as OL Lyonnes. New additions, including Chelsea Women and FC Bayern Women, further strengthen EA SPORTS FC’s global representation of elite women’s football.

    “The power of EA SPORTS FC and its utopian world of football has brought the BWSL, our clubs and our players to a whole new audience,” said Zarah Al Kudcy, Chief Revenue Officer. “Our league and our clubs are already among the most played in women’s football within the game and we are excited to keep building together."

    Other league and club partnership and license renewals include Atletico Madrid, River Plate, Racing Club, the Scottish Premier League, and the Belgian Pro League, among others. Additional new stadiums in FC 26 include the brand-new Hill Dickinson Stadium (Everton), Holstein-Stadion (Holstein Kiel), Stade de la Beaujoire (FC Nantes), Son Moix Stadium (RCD Mallorca), Red Bull Arena (RB Salzburg), Wankdorf Stadium (BSC Young Boys), and St. Jakob Park (FC Basel), among others.

    Pre-orders are now available for EA SPORTS FC 26, which will launch on PlayStation®5, PlayStation®4, Xbox Series X|S, Xbox One, PC, Amazon Luna, Nintendo Switch, and Nintendo Switch 2. EA SPORTS FC 26 will be available worldwide on September 26, 2025, with early access through the Ultimate Edition beginning September 19, 2025*.

    EA Play** members, the Club is Yours in EA SPORTS FC™ 26 with the EA Play 10-hour early access trial, starting September 19, 2025. Members also score member rewards including seasonal Ultimate Team™ Draft Tokens and club rewards, as well as 10% off EA digital content including pre-orders, game downloads, FC Points, and DLC. For more information on EA Play, please visit ea.com/ea-play.

    For more information on EA SPORTS FC 26, please visit ea.com/fc26 and follow our global social channels for all the latest news and announcements for EA SPORTS FC.

    *Conditions and restrictions apply. Offers may not be available on all platforms and/or in all territories. See ea.com/games/ea-sports-fc/fc-26/game-disclaimers for details.
    **Conditions, limitations and exclusions apply. See EA Play Terms for details.

    About Electronic Arts
    Electronic Arts (NASDAQ: EA) is a global leader in digital interactive entertainment. The Company develops and delivers games, content and online services for Internet-connected consoles, mobile devices and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately $7.5 billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™ and EA SPORTS F1®. More information about EA is available at www.ea.com/news.

    EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission.

    Category: EA Sports

    EA SPORTS FC Newsroom. Source: Electronic Arts Inc.

    EA SPORTS FC™ 26 Expands Global Footprint With New Partners, Leagues, and Stadiums In-Game
    news.ea.com
  • Hey everyone, have you seen the new piece on Forbes? "Inside The Richest Presidential Cabinet Ever"!

    The article explains how the world's wealthiest people ended up filling out this presidential team. It reads like fiction, but each of them really does have an enormous fortune and an impressive résumé. It discusses how wealth can play a role in decision-making, and how it can influence policy!

    Personally, I'd love to learn more about this. A lot of people think money is everything, but I've found that experience and good intentions can matter more than money.

    I'd like you all to think about this topic, and about how we might apply lessons from these stories in our daily lives.

    https://forbesmiddleeast.com/leadership/leaders/inside-the-richest-presidential-cabinet-ever

    #Wealth #Leadership #Politics #Richest
    forbesmiddleeast.com
    Inside The Richest Presidential Cabinet Ever
  • Romeo is a Dead Man: A sneak peek of what to expect

    What’s up, everyone? I’m gonna assume you’ve already seen the announcement trailer for Grasshopper Manufacture’s all-new title, Romeo Is A Dead Man. If not, then do yourself a favor and go watch it now. It’s cool – I’ll wait two and a half minutes.

    Play Video

    OK, so you get that there’s gonna be a whole lot of extremely bloody battle action and exploring some weird places, but I think a lot of people may be confused by the sheer amount of information packed into two and a half minutes… Today, we’ll give you a teensy little glimpse of how Romeo Stargazer – aka “DeadMan”, a special agent in the FBI division known as the Space-Time Police – goes about his “investigations”.

    Romeo Is A Dead Man, abbreviated as… I don’t know, RiaDM? or maybe RoDeMa, if you’re nasty? Anyway, one of the most notable features of the game is the rich variety of graphic styles used to depict the game world. Seriously, it’s all over the place – but like, in a good way. The meticulously-tweaked action parts are done in stunning, almost photorealistic 3D, and we’ve thrown everything but the kitchen sink into the more story-based parts.

    And don’t worry, GhM fans – we promise: for as much work as we’ve put into making the game look cool and unique, the story itself is also ridiculously bonkers, as is tradition here at Grasshopper Manufacture. We think longtime fans will enjoy it, and newcomers will have their heads exploding. Either way, you’re guaranteed to see some stuff you’ve never seen before.

    As for the actual battles, our hero Romeo is heavily armed with katana-style melee weapons and gun-style ranged weapons alike, which the player can switch between while dispensing beatdowns. However, even the weaker, goombah-type enemies are pretty hardcore. You’re gonna have to think up combinations of melee, ranged, heavy, and light attacks to get by. But the stupidly gratuitous amount of blood splatter and catharsis you’re rewarded with when landing a real nuclear power move of a combo is awe-inspiring, if that’s your thing. On top of the kinda-humanoid creatures you’ve already seen, known as “Rotters”, we’ve got all kinds of other ultra-creepy, unique enemies waiting to bite your face off!

    Now, let’s look at one of the main centerpieces of any GhM game: the boss battles. This particular boss is, well, hella big. His name is “Everyday Is Like Monday”, because of course it is. It’s on you to make sure Romeo can dodge the mess of attacks launched by this big-ass tyrant and take him down to Chinatown. It’s one of the most feelgood beatdowns of the year!

    Also, being a member of something called the “Space-Time Police” means that obviously Romeo is gonna be visiting all sorts of weird, “…what?”-type places. And awaiting him at these weird, “…what?”-type places are a range of weird, “…what?”-type puzzles that only the highest double-digit IQ players will be able to solve! This thing looks like a simple sphere that someone just kinda dropped and busted, but once you really wrap your dome around it and get it solved, damn it feels good. There are a slew of other puzzles and gimmicks strategically or possibly just randomly strewn throughout the game, so keep your eyeballs peeled for them and try not to break any controllers as you encounter them along your mission.

    That’s all for now, but obviously there are still a whole bunch of important game elements we have yet to discuss, so stay tuned for next time!
    Romeo is a Dead Man: A sneak peek of what to expect
    blog.playstation.com
  • XPPen Quiz — Winners Revealed!

    80 Level Community · Published 26 August 2025

    We’re thrilled to announce the results of our quiz in collaboration with XPPen! All participants who submitted the correct answers were entered into a random prize draw.

    The lucky winners: SunAngel, Mrzskoi.arkestr, Kayaes, ACKLEY, Elinn_or.

    They will get Deco 01 V3 tablets offering broader compatibility, enhanced performance, richer colors, and even more brilliance!

    A big congratulations to our winners! Stay tuned, more exciting 80 Level contests and events are on the way.
    #xppen #quiz #winners #revealed
  • Hey everyone, have you seen how the rich just keep getting richer no matter what? After Powell’s speech at Jackson Hole, the world’s 10 richest people grew their wealth by $33 billion!

    Can you believe it? It makes you wonder: do they have some superhuman investing instinct, or is there something going on that the rest of us don’t know about? The article explains how the power of words can change everything in the world of money and business, and how a single speech can send fortunes soaring.

    Personally, I like following economic news, and I feel these things affect all of us, even those of us far removed from the business world.

    It’s worth thinking about how global events can change people’s fortunes and shake the economy.

    https://forbesmiddleeast.com/billionaires/world-billionaires/the-worlds-10-wealthiest-people-became-$33-billion-richer-after-powells-jackson-hole-speech
    #economy #wealth #business
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
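    To see why a lower-precision format like FP4 matters for edge deployment, here is a back-of-the-envelope sketch of weight-storage footprint versus quantization level. This is an illustration only: the 70-billion-parameter count is an arbitrary example, and real deployments also need memory for activations and KV cache on top of the weights.

    ```python
    def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
        """Approximate memory needed just to store the model weights."""
        total_bytes = params_billions * 1e9 * bits_per_weight / 8
        return total_bytes / 1e9

    # Example: a 70-billion-parameter transformer at different precisions.
    for bits in (16, 8, 4):
        print(f"{bits}-bit weights: {weight_footprint_gb(70, bits):.0f} GB")
    # Quantizing from 16-bit down to FP4 cuts the weight footprint 4x,
    # which is what makes models of this scale plausible candidates for
    # an embedded module's unified memory.
    ```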
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models, including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc., e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
    #nvidia #jetson #thor #unlocks #realtime
  • Fur Grooming Techniques For Realistic Stitch In Blender

    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. 
And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. 
Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. 
When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:

- The body: arms, legs, head, and ears
- The nose
- The eyes and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both a body and a facial rig, so I started sculpting with his mouth open.

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on the hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.

Second, it's great anatomy practice, and practice is never a waste.
So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constant switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.

With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean, optimized mesh that was perfect for UV unwrapping.

Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features.

Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
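The ear-swap trick can be sketched in plain Python. This is a toy illustration with made-up vertex data, not the actual Blender workflow; the part names and all coordinates are hypothetical.

```python
def mirror_x(verts):
    """Mirror vertex positions across the X axis (like a Blender Mirror)."""
    return [(-x, y, z) for (x, y, z) in verts]

def symmetrize(right_half):
    """Duplicate the right half across X to get a fully symmetrical mesh."""
    return right_half + mirror_x(right_half)

# Hypothetical placeholder vertices for each part.
body_right   = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
ear_top_scar = [(0.5, 0.2, 2.0)]   # right-ear variant (scar on top)
ear_bot_scar = [(0.5, 0.2, 1.8)]   # left-ear variant (scar on bottom)

# Model A is symmetrical with the top-scar ear on both sides;
# Model B is symmetrical with the bottom-scar ear on both sides.
model_a = symmetrize(body_right + ear_top_scar)
model_b = symmetrize(body_right + ear_bot_scar)

# Detach A's left (mirrored) ear and attach the left ear from B instead:
left_ear_from_b = mirror_x(ear_bot_scar)
model_final = [v for v in model_a if v not in mirror_x(ear_top_scar)] + left_ear_from_b
```

The result keeps the clean, symmetrical base while the two ears carry different scar detail, which is exactly the property the real mesh needed.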
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. For this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front and a darker tone on the back and nape
- The nose and ears, which demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them toward a more realistic look.
This involved removing bright colors, adding more variation to the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears: slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base.

For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement, so I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail: in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only Height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body.
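That final AO pass (a Multiply blend at roughly 35% opacity) boils down to simple per-pixel math. Here is a minimal sketch, assuming single-channel values in the 0..1 range; Substance 3D Painter does the equivalent internally:

```python
def multiply_blend(base, ao, opacity=0.35):
    """Blend an AO value over a base channel: Multiply, faded by layer opacity."""
    blended = base * ao                                # full-strength multiply
    return base * (1.0 - opacity) + blended * opacity  # mix by layer opacity

# A mid-gray pixel under strong occlusion darkens only moderately at 35%:
darkened = multiply_blend(0.5, ao=0.2)    # ~0.36 instead of a crushed 0.1
# Fully unoccluded areas are left untouched:
untouched = multiply_blend(0.8, ao=1.0)   # stays at 0.8
```

Keeping the opacity low is what adds volume without crushing the crevices into pure black.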
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.

To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
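Conceptually, that Weight Paint control acts as a per-vertex multiplier on each particle system's density. A plain-Python sketch of the idea; the section names and numbers here are hypothetical, not values from the actual scene:

```python
def effective_density(base_density, weight):
    """Scale a hair system's density by a painted vertex weight, clamped to 0..1."""
    return base_density * max(0.0, min(1.0, weight))

# Hypothetical (base density, painted weight) pairs per body section:
sections = {
    "chest":   (900, 1.0),  # full weight: thick, clumped fur
    "stomach": (900, 0.5),  # mid weight: gradient toward softer, shorter fur
    "eyelids": (900, 0.0),  # zero weight: no strands creeping into the eyes
}

densities = {name: effective_density(b, w) for name, (b, w) in sections.items()}
```

Splitting the groom into many systems means each of these weights, and every other parameter, can be tuned per section instead of globally.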
This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald even though interpolation and weight painting were set correctly, because the fur didn't render properly there.
These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase used only two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This added visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it.
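Before getting into the individual rigs, the root-to-tip strand shading from the fur material is worth a small illustration. This is plain Python, not the actual node setup; in Blender it would be built from the Hair Info node's Intercept and Random outputs, and the multipliers and base color below are hypothetical:

```python
import random

def strand_color(base_rgb, t, strand_seed, variation=0.05):
    """base_rgb: color tuple; t: 0.0 at the root .. 1.0 at the tip."""
    shade = 0.7 + 0.6 * t                # darker near the root, brighter at the tip
    rng = random.Random(strand_seed)     # stable per-strand randomness
    jitter = 1.0 + rng.uniform(-variation, variation)
    return tuple(min(1.0, c * shade * jitter) for c in base_rgb)

base_blue = (0.25, 0.45, 0.85)           # hypothetical fur base color
root = strand_color(base_blue, t=0.0, strand_seed=7)
tip  = strand_color(base_blue, t=1.0, strand_seed=7)
```

Every channel comes out darker at the root than at the tip, and two strands with different seeds pick up slightly different tints, which is what keeps a dense groom from looking flat.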
For the ears, I set up a relatively simple system of several bones connected with inverse kinematics. This gave me flexible, intuitive control during posing and allowed for dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses: Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character.

Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop.
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed; the goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself, to reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish.

The fur, the heart of this project, was especially meaningful to me; it's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
    Fur Grooming Techniques For Realistic Stitch In Blender
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube.

Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. 
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.

So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable, so I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into a few main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.

Second, it's great anatomy practice, and practice is never a waste.
So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.

With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.

Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom.

Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose. (For the claws, I used overlapping UVs to preserve texel density for the other parts.)

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and UVs split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears; these zones demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look.
This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement, so I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: Baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: Applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): In animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: To make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body.
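The finishing pass described above, an AO map in a Multiply layer at roughly 35% opacity, can be sketched as plain per-channel math. This is an editor's illustration of the blend, not the artist's Substance 3D Painter setup; the function name and values are invented for the example, with channels in the 0-1 range.

```python
def blend_ao_multiply(base, ao, opacity=0.35):
    """Blend an ambient occlusion sample over a base color channel.

    Multiply blend mode at reduced opacity: the fully multiplied
    result is mixed back toward the base by (1 - opacity),
    mirroring a ~35%-opacity Multiply layer.
    """
    multiplied = base * ao
    return base * (1.0 - opacity) + multiplied * opacity

# A mid-gray channel under strong occlusion darkens only gently:
darkened = blend_ao_multiply(0.5, 0.2)  # ~0.36 rather than a harsh 0.10
```

The low opacity is what keeps the AO from crushing the crevices to black while still adding volume.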
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures to drive the fur's clumping and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main Particle System and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers.
The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result.
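The idea of letting a grayscale texture drive groom parameters, brighter samples meaning stronger clumping or roughness, boils down to a simple remap. The sketch below is an editor's illustration of that mapping, not Blender's actual particle API; the function name and the lo/hi range are invented for the example.

```python
def mask_to_strength(mask_value, lo=0.1, hi=0.9):
    """Remap a grayscale mask sample (0-1) to a groom parameter strength.

    Brighter mask values mean stronger clumping/roughness,
    darker values a softer look; output stays within [lo, hi].
    """
    v = min(max(mask_value, 0.0), 1.0)  # guard against out-of-range samples
    return lo + (hi - lo) * v

chest_clump = mask_to_strength(0.85)  # bright mask: tight, defined clumps
belly_clump = mask_to_strength(0.15)  # dark mask: soft, loose fur
```

Painting such a mask in Substance 3D Painter and plugging it into the Clump or Roughness slot gives exactly this kind of per-region micro-control.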
I also had to revisit certain patches that looked bald even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended.
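The root-to-tip shading mentioned in the fur shader discussion, darker near the skin and gradually brighter toward the tips, is typically wired in Blender with the Hair Info node's Intercept output feeding a Color Ramp. As a language-agnostic sketch of the same gradient (the function, the multipliers, and the base color are the editor's illustrative values, not the artist's actual shader):

```python
def strand_color(base_rgb, t, root_darken=0.6, tip_brighten=1.15):
    """Shade a point along one hair strand.

    t is the normalized position along the strand (0 = root, 1 = tip),
    analogous to the Intercept output of Blender's Hair Info node.
    Roots are darkened, tips gently brightened; channels clamp at 1.0.
    """
    factor = root_darken + (tip_brighten - root_darken) * t
    return tuple(min(c * factor, 1.0) for c in base_rgb)

stitch_blue = (0.16, 0.30, 0.65)        # assumed base coat color
root = strand_color(stitch_blue, 0.0)   # darker near the skin
tip = strand_color(stitch_blue, 1.0)    # brighter at the tip
```

Adding a small per-strand random offset to the factor gives the subtle strand-to-strand color variation described above.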
When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics (IK). This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing.
Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life.
In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all.

I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it. This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves.

This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!). It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn.
Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine