• Portfolio Review Live Event Today – Don't Miss!

    80 Level Community · Published 27 August 2025

    Today at 3 PM UTC | 8 AM PT | 11 AM ET, we will host a Portfolio Review with José Vega, founder and senior artist at Worldbuilders Workshop! José will review the portfolios of our Showcase Competition winners and deliver an in-depth live critique. Expect actionable feedback on your strengths, as well as practical advice on refining your work to better align with your career goals. This is a unique opportunity to receive expert guidance that can help your portfolio shine and set you apart in the industry. Join us here!
  • EA and Full Circle Reveal September 16 Early Access Release Date for skate.

    August 26, 2025

    Get Ready to Explore San Vansterdam, an Ever-Evolving Skateboarding Sandbox, and Experience the Best Skateboarding Gameplay in the Franchise

    Watch the skate. Early Access Release Date Trailer HERE.

    REDWOOD CITY, Calif.--(BUSINESS WIRE)-- Today, Electronic Arts Inc. (NASDAQ: EA) and Full Circle announced that skate., the next chapter of the award-winning skateboarding video game franchise, will launch into Early Access on September 16 for PlayStation® 5, Xbox Series X|S, PlayStation® 4, Xbox One, and PC via Steam, Epic Games Store, and the EA app. skate. is free to download, with cross-platform play* and cross-progression,* making it as accessible as possible for longtime skate. franchise fans and newcomers alike.

    Set in the vibrant city of San Vansterdam, skate. is a multiplayer skateboarding destination offering a massive open world where players discover unique skate spots, land insane tricks, and connect - or compete - with friends online. With four unique neighborhoods - Hedgemont, Gullcrest, Market Mile, and Brickswich - each offering its own distinct vibe and challenges, the city is a huge playground for skaters. From parks and plazas to rooftops and massive ramps, every corner is packed with skateable spots, including the House of Rolling Reverence, a former church transformed into a trick haven for skaters.

    At the core of skate. is the restored and improved Flick-It System, brought back to life through the Frostbite™ Engine to deliver the best skateboarding experience the franchise has ever offered. With unparalleled precision and control, Flick-It brings the magic of the franchise back to life for a new generation of players.

    "skate. is not just a return, it's a complete evolution of the franchise that's built to last," said Mike McCartney, Executive Producer of skate. "Our goal with skate. is to capture the freedom, creative expression, and community of skateboarding, and share it with as many people as possible. From day one, our priority has been to honor the legacy of the franchise while pushing it into a bold new future - one built in partnership with our players."

    In skate., players can discover new ways to explore and get vertical with new off-board controls, giving them total freedom to roam, climb, and find epic new spots. Additional features like Quick Drop allow players to place ramps, rails, benches, and more anywhere in the world to fine-tune their perfect spot. skate. offers players the ability to find their own fun with endless activities, including rotating world map challenges - such as Line Challenges, Own The Spot, and Sessions - as well as high-energy Throwdowns with friends. The new Spectate mode lets players instantly find the action and use Spectaport to jump straight into any live session.

    Collaborative development with skate. fans and the community has been a priority throughout the development process. Player feedback has been welcomed all throughout playtesting and is something Full Circle will continue to look at during the game's Early Access period. During Early Access, players can expect new content every season, offering fresh challenges, cosmetics, music, world updates, and events.

    "Community is the heart of skateboarding, and it's the heart of skate.," said Jeff Seamster, Head of Creative on skate. "From day one, we've aimed to create an open, welcoming space shaped by players. We've built this game alongside our community - listening, learning, and evolving together. Whether you're a seasoned skater or just dropped in, skate. is a place to express yourself, connect, and grow. No gatekeepers, no barriers - just a city built for skating and a community that keeps it alive. Early Access is a huge milestone in that journey, and we're hyped to keep growing San Vansterdam with our crew around the world."

    skate. launches into Early Access on September 16 on PlayStation® 5, Xbox Series X|S, PlayStation® 4, Xbox One, and PC via Steam, Epic Games Store, and the EA app. Stay tuned - there's much more to come from skate. For more information on skate., visit https://www.ea.com/games/skate/skate.

    *Conditions & restrictions apply. See https://go.ea.com/skate-cross-play for details.

    PRESS ASSETS ARE AVAILABLE AT EAPressPortal.com

    About Electronic Arts
    Electronic Arts (NASDAQ: EA) is a global leader in digital interactive entertainment. The Company develops and delivers games, content, and online services for Internet-connected consoles, mobile devices, and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately $7.5 billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™, and EA SPORTS F1®. More information about EA is available at www.ea.com/news.

    EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission.

    Jino Talens
    Integrated Comms Director
    jtalens@ea.com
    Source: Electronic Arts Inc.

  • Fur Grooming Techniques For Realistic Stitch In Blender

    Introduction

    Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

    He asked me a simple question: "Well, what do you actually enjoy doing?"

    I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

    Then he hit me with something that really shifted my whole perspective: "Oleh, do you play games on your PlayStation?"

    I said, "Of course."

    He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

    That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

    3D completely took over my life. During lunch breaks I watched 3D videos, on the bus I scrolled through 3D TikToks, at home I took 3D courses, and the word "3D" just became a constant in my vocabulary. After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
    And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

    The Stitch Project

    I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

    Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023, and in 2025, I decided it was time to challenge myself.

    At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it. I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.

    So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch. First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
    Third, I needed to put my new skills to the test and find out whether my training had really paid off.

    Modeling

    I had a few ideas for how to approach the base mesh for this project: first, to model everything completely from scratch, starting with a sphere; second, to reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

    So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

    The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

    I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting. Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
    When blocking, I use Blender in combination with ZBrush:
    - I work with primary forms in ZBrush
    - Then check proportions in Blender
    - Fix mistakes, tweak volumes, and refine the silhouette

    Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:
    - The body: arms, legs, head, and ears
    - The nose, eyes, and mouth cavity

    While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig, so I started sculpting with his mouth open.

    While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:
    - Different proportions
    - Different shapes
    - Different textures
    - Even different fur and overall design

    This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

    At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste.
    So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt. In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

    Topology & UVs

    Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury. So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

    However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
    This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

    When it came to UV mapping, I divided Stitch into two UDIM tiles:
    - The first UDIM includes the head with ears, torso, arms, and legs.
    - The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose.

    Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

    As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.

    Texturing

    When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:
    - The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front and a darker tone on the back and nape.
    - The nose and ears, which demanded separate focus.

    At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look.
    This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

    The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement, so I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:
    - Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
    - Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
    - Organic detail: in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
    - Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

    All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model. That covers the texturing of Stitch's body.
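    For intuition, an AO layer in Multiply mode at reduced opacity boils down to a tiny bit of compositing math. The sketch below is generic layer-blending logic, not code from the project, and assumes channel values in the 0 to 1 range:

    ```python
    def multiply_blend(base, ao, opacity=0.35):
        """Blend an AO value over a base channel in Multiply mode,
        then mix the result back in by the layer's opacity."""
        multiplied = base * ao                       # straight Multiply result
        return base + (multiplied - base) * opacity  # lerp by layer opacity

    # A mid-gray base under strong occlusion darkens only gently at 35% opacity:
    print(round(multiply_blend(0.5, 0.2), 3))  # → 0.36
    ```

    At opacity 1.0 this is a full multiply that would crush the crevices to near-black; dialing it down to about 0.35 is what keeps the added volume subtle.
    
    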
    I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.

    Fur

    And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

    To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands. At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

    In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach.Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes.I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. 
These areas needed manual fixing.As part of the detailing stage, I also increased the number of segments in the Hair Guides.While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done.I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result, this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.Rigging, Posing & SceneOnce I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.I divided the rigging process into three main parts:Body rig, for posing and positioning the characterFacial rig, for expressions and emotionsEar rig, for dynamic ear controlRigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. 
For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. 
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine
Fur Grooming Techniques For Realistic Stitch In Blender
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective. "Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary. After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights, but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch. Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore.

Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself. At that point, I had just completed an intense grooming course. Grooming had always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow. After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch. First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: model everything completely from scratch, starting with a sphere, or reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable, so I basically ended up doing everything from scratch anyway.

So I went back to basics, digging through ArtStation and Pinterest and collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur. The first Stitch was sculpted in Blender, with all the limitations that come with sculpting there. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace: I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created with it.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting. Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush: I work out the primary forms in ZBrush, then check proportions in Blender, fix mistakes, tweak volumes, and refine the silhouette. Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts: the body (arms, legs, head, and ears); the nose; and the eyes and mouth cavity. While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig, so I started sculpting with his mouth open.

While studying various references, I noticed something interesting: Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models, with different proportions, different shapes, different textures, and even different fur and overall design. This presented a creative challenge: I had to develop my own take on Stitch's design. In one version I liked the way the teeth were done; in another, the eye placement; in another, the fur shape or the claw design on the hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: why sculpt muscles and skin detail if everything will be hidden under fur anyway? But eventually, I found a few solid answers. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste.
So I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt. In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constant switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too long, and honestly, I didn't have that luxury. So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean, optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

For UV mapping, I divided Stitch into two UDIM tiles: the first includes the head with ears, torso, arms, and legs; the second contains all the additional parts: teeth, tongue, gums, claws, and nose. Since the nose is one of the most important details, I allocated the largest UV space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. For this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs, one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts: the base body, which includes the primary color of his fur along with additional shading, like a lighter tone on the front and a darker tone on the back and nape; and the nose and ears, zones that demanded separate focus. At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them toward a more realistic look.
This involved removing bright colors, adding more variation to the roughness map, introducing variation in the base color, and making the ears look more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears: slightly dirty and as natural-looking as possible.

The nose was a separate story. It occupies a significant part of the face and therefore draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement, so I reworked its texturing to make it more detailed. I divided the nose texture into four main layers:

Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
Lighter layer: applied via a mask driven by the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
Organic detail: in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
Softness: to make the nose visually softer, as in the references, I added a fill layer with only the height channel enabled, used a paper texture as the grayscale input, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created at 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing the opacity to about 35%. This adds volume and greatly improves the overall perception of the model. That covers the texturing of Stitch's body.
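The AO-on-top trick described above comes down to simple per-pixel math: a Multiply blend mixed back toward the base color by the layer opacity. Here is a minimal sketch in plain Python (illustrative only; the 35% opacity is the value from the text, the color values are made up):

```python
def multiply_blend(base, ao, opacity=0.35):
    """Multiply-blend an ambient occlusion value over a base color
    at reduced layer opacity (0.35 ~= the ~35% used on the model)."""
    # ao = 1.0 (fully open surface) leaves the color untouched;
    # ao < 1.0 darkens it, scaled down by the layer opacity.
    factor = (1.0 - opacity) + opacity * ao
    return tuple(round(c * factor, 6) for c in base)

# Crevices (low AO) get gently darkened, open surfaces stay put.
print(multiply_blend((0.5, 0.4, 0.6), ao=0.2))  # darker than the base
print(multiply_blend((0.5, 0.4, 0.6), ao=1.0))  # → (0.5, 0.4, 0.6)
```

This is why the pass reads as added volume rather than overall darkening: only the occluded recesses move, while open surfaces keep their original color.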
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for finer fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality, most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands. At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where they were needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
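As a toy model of the Weight Paint control described above: fur emission scales with the painted weight, so a zero-weight region grows no strands at all. A simplified sketch in plain Python (not Blender API code; the region names and strand counts are made-up examples):

```python
def strands_per_region(weights, total_strands):
    """Toy model of a density vertex group: distribute one particle
    system's strand budget across regions in proportion to their
    painted weights (0.0 = bald, 1.0 = full density)."""
    total = sum(weights)
    if total == 0:
        return [0] * len(weights)  # nothing painted, nothing grows
    return [round(total_strands * w / total) for w in weights]

# e.g. chest at full weight, stomach at half, eye area masked to zero
print(strands_per_region([1.0, 0.5, 0.0, 0.5], total_strands=200))
# → [100, 50, 0, 50]
```

In practice, each of the 25 particle systems gets its own vertex group, which is what makes this kind of per-section density control possible.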
This separation gave me much more precision and full control over every parameter of the fur on a per-section basis. The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. I also used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: the textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I painted brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge was the area around the eyes. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald even though interpolation and weight painting were set correctly, because the fur didn't render properly there.
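The brighter-equals-stronger mask control described above is essentially a linear remap of a grayscale value into a parameter range. A small sketch in plain Python (the clamp and the example range are assumptions for illustration, not the author's actual settings):

```python
def mask_to_param(value, low=0.0, high=1.0):
    """Remap a grayscale mask value (0..1) into a fur parameter range:
    bright pixels push Clump/Roughness toward `high`, dark pixels
    toward `low`, mirroring how the painted maps drive the groom."""
    value = max(0.0, min(1.0, value))  # clamp stray paint values
    return low + (high - low) * value

# Mid-gray lands halfway through whatever range the parameter uses.
print(mask_to_param(0.5, low=0.2, high=0.8))
```

In Blender, such maps typically plug into the particle system's texture slots with the Clump and Rough influence options enabled, so the painted values modulate those settings directly.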
These areas needed manual fixing. As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase used only two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow. The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This added visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

Body rig, for posing and positioning the character
Facial rig, for expressions and emotions
Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it.
For the ears, I set up a relatively simple system of several bones connected with inverse kinematics. This gave me flexible, intuitive control during posing and allowed for dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character, and some poses I reworked to fit my style rather than copying them directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
This made it easy to return to any scene later to adjust the lighting, reposition the character, or tweak the background. In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere. For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders. I don't spend too much time on post-processing, just basic refinements in Photoshop.
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me; it's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine
    Fur Grooming Techniques For Realistic Stitch In Blender
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers and do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D: studying every single night.

3D completely took over my life. During lunch breaks I watched 3D videos; on the bus I scrolled through 3D TikToks; at home I took 3D courses. The word "3D" became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate him.

Back then, my skills only allowed me to make him in a stylized, cartoonish way: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming had always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow. After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.

So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch. First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: model everything completely from scratch, starting with a sphere, or reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable, so I basically ended up doing everything from scratch anyway.

So I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting there. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace: I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created with it.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting. Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush:

I work with primary forms in ZBrush
Then check proportions in Blender
Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:

The body: arms, legs, head, and ears
The nose
The eyes and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch with both a body and a facial rig. So I started sculpting with his mouth open, to later close it and have more flexibility when it comes to rigging and deformation.

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

Different proportions
Different shapes
Different textures
Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste.
So I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt. In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too long, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

The first UDIM includes the head with ears, torso, arms, and legs
The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (for the claws, I used overlapping UVs to preserve texel density for the other parts)

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. For this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
The nose and ears, zones that demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look.
This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model. That covers the texturing of Stitch's body.
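As a quick aside, a Multiply blend at reduced opacity, like that final AO pass, comes down to simple per-channel math. A minimal sketch (the sample values are illustrative, not taken from the actual textures):

```python
def multiply_with_opacity(base: float, ao: float, opacity: float = 0.35) -> float:
    """Multiply-blend an AO sample over a base color channel at partial opacity.

    Equivalent to lerp(base, base * ao, opacity): opacity 0 leaves the base
    untouched, opacity 1 is a full multiply.
    """
    return base * (1.0 - opacity * (1.0 - ao))

# Fully lit area (AO = 1): the base color is unchanged.
print(round(multiply_with_opacity(0.8, 1.0), 4))  # 0.8
# Deep crevice (AO = 0): darkened by the 35% opacity, not crushed to black.
print(round(multiply_with_opacity(0.8, 0.0), 4))  # 0.52
```

This is why a low opacity works well here: even the darkest occluded crevice only loses about a third of its brightness, adding volume without flattening the texture.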
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness, and in Substance 3D Painter I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands. At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers.
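Conceptually, that per-section split is one base settings block duplicated and overridden per region. A stand-alone sketch of the idea (section names follow the list above; the settings and values are purely illustrative, not the actual scene data):

```python
# Base hair settings shared by every section (illustrative values).
BASE = {"segments": 2, "hair_length": 0.05, "count": 2000}

# Per-section overrides, mirroring the duplicated particle systems.
SECTIONS = {
    "head":        {"hair_length": 0.03},
    "left_ear":    {"hair_length": 0.02, "count": 1200},
    "right_ear":   {"hair_length": 0.02, "count": 1200},
    "front_torso": {"hair_length": 0.06, "count": 3000},
    "back_torso":  {"count": 3000},
}

def build_systems(base: dict, overrides: dict) -> dict:
    """Duplicate the base settings once per section, applying each override."""
    return {name: {**base, **tweaks} for name, tweaks in overrides.items()}

systems = build_systems(BASE, SECTIONS)
print(systems["left_ear"]["count"])   # 1200 (overridden)
print(systems["head"]["segments"])    # 2 (inherited from the base)
```

The payoff of this structure is the same as in the actual groom: every region can be tuned independently while still inheriting sensible defaults.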
The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references. The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result.
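The texture-driven Clump and Roughness control described above boils down to remapping a grayscale mask into an attribute range. A small sketch of that remap (the ranges are illustrative assumptions, not the project's actual values):

```python
def attribute_from_mask(mask_value: float, low: float, high: float) -> float:
    """Remap a grayscale mask sample (0..1) into a fur attribute range:
    bright pixels yield strong clumping/roughness, dark pixels a softer look."""
    mask_value = min(max(mask_value, 0.0), 1.0)  # clamp out-of-range samples
    return low + (high - low) * mask_value

# Bright chest mask -> strong clumping; dark belly mask -> soft fur.
chest_clump = attribute_from_mask(0.9, 0.1, 0.8)
belly_clump = attribute_from_mask(0.2, 0.1, 0.8)
```

Painting the mask rather than hand-setting per-system values is what gives the micro-level control mentioned above: the gradient from chest to stomach lives in one texture instead of dozens of sliders.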
I also had to revisit certain patches that looked bald even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing. As part of the detailing stage, I also increased the number of segments in the Hair Guides: while the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow. The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended.
When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

Body rig, for posing and positioning the character
Facial rig, for expressions and emotions
Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system of several bones connected using inverse kinematics (IK). This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot. Just like in sculpting or grooming, minor details make a big difference in posing.
Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background. In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere. For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective: done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life.
In addition to the three main lights, I also use an HDRI map at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene. Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur: rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!). It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn.
Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
  • Creating a Detailed Helmet Inspired by Fallout Using Substance 3D

    Introduction

    Hi! My name is Pavel Vorobyev, and I'm a 19-year-old 3D Artist specializing in texturing and weapon creation for video games. I've been working in the industry for about 3 years now. During this time, I've had the opportunity to contribute to several exciting projects, including Arma, DayZ, Ratten Reich, and a next-gen sci-fi shooter. Here's my ArtStation portfolio.

    My journey into 3D art began in my early teens, around the age of 13 or 14. At some point, I got tired of just playing games and started wondering: "How are they actually made?" That question led me to explore game development. I tried everything – level design, programming, game design – but it was 3D art that truly captured me.

    I'm entirely self-taught. I learned everything from YouTube, tutorials, articles, and official documentation, gathering knowledge piece by piece. Breaking into the commercial side of the industry wasn't easy: there were a lot of failures, no opportunities, and no support. At one point, I even took a job at a metallurgical plant. But I kept pushing forward, kept learning and improving my skills in 3D. Eventually, I got my first industry offer – and that's when my real path began.

    Today, I continue to grow, constantly experimenting with new styles, tools, and techniques. For me, 3D isn't just a profession – it's a form of self-expression and a path toward my dream. My goal is to build a strong career in the game industry and eventually move into cinematic storytelling in the spirit of Love, Death & Robots.

    I also want to inspire younger artists and show how powerful texturing can be as a creative tool. To demonstrate that, I'd love to share my personal project PU – Part 1, which reflects my passion and approach to texture art. In this article, I'll be sharing my latest personal project – a semi-realistic sci-fi helmet that I created from scratch, experimenting with both form and style.
    It's a personal exploration where I aimed to step away from traditional hyperrealism and bring in a touch of artistic expression.

    Concept & Project Idea

    The idea behind this helmet project came from a very specific goal – to design a visually appealing asset with rich texture variation and achieve a balance between stylization and realism. I wanted to create something that looked believable, yet had an artistic flair. Since I couldn't find any fitting concepts online, I started building the design from scratch in my head. I eventually settled on creating a helmet as the main focus of the project. For visual direction, I drew inspiration from post-apocalyptic themes and the gritty aesthetics of Fallout and Warhammer 40,000.

    Software & Tools Used

    For this project, I used Blender, ZBrush, Substance 3D Painter, Marmoset Toolbag 5, Photoshop, and RizomUV. I created the low-poly mesh in Blender and developed the concept and high-poly sculpt in ZBrush. In Substance 3D Painter, I worked on the texture concept and final texturing. Baking and rendering were done in Marmoset Toolbag, and I used Photoshop for some adjustments to the bake. UV unwrapping was handled in RizomUV.

    Modeling & Retopology

    I began the development process by designing the concept based on my earlier references – Fallout and Warhammer 40,000. The initial blockout was done in ZBrush, and from there, I started refining the shapes and details to create something visually engaging and stylistically bold. After completing the high-poly model, I moved on to the long and challenging process of retopology. Since I originally came from a weapons-focused background, I applied the knowledge I gained from modeling firearms. I slightly increased the polycount to achieve a cleaner and more appealing look in the final render – reducing visible faceting. My goal was to strike a balance between visual quality and a game-ready asset.

    UV Mapping & Baking

    Next, I moved on to UV mapping.
    There's nothing too complex about this stage, but since my goal was to create a game-ready asset, I made extensive use of overlaps. I did the UVs in RizomUV. The most important part is to align the UV shells into clean strips and unwrap cylinders properly into straight lines. Once the UVs were done, I proceeded to bake the normal and ambient occlusion maps. At this stage, the key is having clean UVs and solid retopology – if those are in place, the bake goes smoothly.

    Texturing: Concept & Workflow

    Now we move on to the most challenging stage – texturing. I aimed to present the project in a hyperrealistic style with a touch of stylization. This turned out to be quite difficult, and I went through many iterations. The most important part of this phase was developing a solid texture concept: rough decals, color combinations, and overall material direction. Without that foundation, it makes no sense to move forward with the texturing. After a long process of trial and error, I finally arrived at results I was satisfied with. Then I followed my pipeline:

    1. Working on the base materials
    2. Storytelling and damage
    3. Decals
    4. Spraying, dust, and dirt

    Working on the Base Materials

    When working on the base materials, the main thing is to work with the physical properties and texture. You need to extract the maximum quality from the generators before manual processing. The idea was to create the feeling of an old, heavy helmet that had lived its life and had previously been painted a different color – to make it battered and, in a sense, rotten. It is important to pay attention to noise maps – Dirt 3, Dirt 6, White Noise, Flakes – and to add the feel of old metal with custom normal maps. I also mixed in photo textures for a special charm.

    Storytelling & Damage

    Gradients play an important role in the storytelling stage.
    They make the object artistically dynamic and beautiful, adding individual shades that bring the helmet to life. Everything else is done manually. I found a bunch of photos of old World War II helmets and turned their damage into alphas using Photoshop. I drew the damage with those alphas, trying to clearly separate the material into old paint, new paint, rust, and bare metal. I did the rust using MatFX Rust from the standard Substance 3D Painter library, drawing patterns with paint in Multiply mode – this quickly helped recreate the rust effect. Metal damage and old paint were more difficult: due to the large number of overlaps in the helmet, I had to carefully draw patterns, minimizing the visibility of overlaps.

    Decals

    I drew the decals carefully, sticking to the concept, which added richness to the texture.

    Spray Paint & Dirt

    For spray paint and dirt, I used a long-established weapon template consisting of dust particles, sand particles, and spray paint. I analyzed references and applied the effects to crevices and logical places where dirt would accumulate.

    Rendering & Post-Processing

    I rendered in Marmoset Toolbag 5 using a rendering approach I developed together with the team. The essence of the method is to simulate "RAW frames." Since Marmoset doesn't have such functions, I worked with the 32-bit EXR format, which significantly improves the quality of the render: the shadows are smooth, without artifacts or broken gradients. I assembled the scene using Quixel Megascans. After rendering, I did post-processing in Photoshop using the Camera Raw filter.

    Conclusion & Advice for Beginners

    That's all. For beginners, or those who have been unsuccessful in the industry for a long time, I advise you to follow your dream and not listen to anyone else. Success is a matter of time and skill! Talent is not something you are born with; it is something you develop.
Work on yourself and your work, put your heart into it, and you will succeed!Pavel Vorobiev, Texture ArtistInterview conducted by Gloria Levine
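As a footnote to the 32-bit EXR point in the rendering section, a few lines of Python show why low bit depth produces "broken gradients" in dark areas; the ramp values here are illustrative, not taken from the artist's scene.

```python
# Quantize a very dark 0..0.02 shadow ramp the way an 8-bit buffer would.
levels = set()
for i in range(1000):
    value = 0.02 * i / 999          # smooth floating-point gradient sample
    levels.add(round(value * 255))  # the 8-bit integer code for this sample
# The entire ramp collapses to just six distinct 8-bit codes, which reads
# as visible banding once shadows are lifted in post. A 32-bit float EXR
# keeps every sample distinct, so heavy grading has real data to pull on.
assert len(levels) == 6
```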
  • Winners of the Showcase Competition

    1st place — Feixiang Long. This work is a secondary design inspired by Toothwu's original artwork, infused with numerous personal touches.

    2nd place — BaiTong LI. Attic Studio is an interior Unreal Engine environment inspired by the Resident Evil series.

    3rd place — Ouanes Bilal. The main character of Grendizer: The Feast of the Wolves.

    Portfolio review with José Vega

    José Vega is the founder and Senior Concept Artist of Worldbuilders Workshop, working mainly in the video games and film industry. His clients include Hirez Studios, Replicas, Shy the Sun, Wizards of the Coast, Cryptozoic Entertainment, Blur Entertainment, and others. The exact date of the live event will be announced soon. Stay tuned!

    A huge congratulations to our winners and heartfelt thanks to all the amazing creators who took part! Your talent keeps our community inspired and energized. Want to explore more outstanding projects? Join us on Discord and stay tuned for the next 80 Level challenge!
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and without a rethink of how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and you gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes, where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations such as “all-reduce” and “all-to-all.”
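The gradient merging described above can be sketched as a toy ring all-reduce. This is a pure-Python simulation for clarity, not how NCCL implements the collective over NVLink or InfiniBand:

```python
def ring_all_reduce(node_grads):
    """Sum equal-length gradient vectors across n nodes, ring style."""
    n = len(node_grads)
    size = len(node_grads[0])
    data = [list(g) for g in node_grads]             # per-node working buffers
    bounds = [(i * size // n, (i + 1) * size // n)   # vector split into n chunks
              for i in range(n)]

    # Phase 1: reduce-scatter. At step s, node i passes chunk (i - s) mod n
    # to its ring neighbour, which accumulates it. After n - 1 steps, node i
    # holds the fully summed chunk (i + 1) mod n.
    for s in range(n - 1):
        for i in range(n):
            lo, hi = bounds[(i - s) % n]
            nxt = (i + 1) % n
            for k in range(lo, hi):
                data[nxt][k] += data[i][k]

    # Phase 2: all-gather. Completed chunks circulate around the ring
    # until every node holds every summed chunk.
    for s in range(n - 1):
        for i in range(n):
            lo, hi = bounds[(i + 1 - s) % n]
            nxt = (i + 1) % n
            for k in range(lo, hi):
                data[nxt][k] = data[i][k]
    return data

# Four "nodes", each holding a different 8-element gradient.
grads = [[float(i)] * 8 for i in range(4)]
result = ring_all_reduce(grads)
# Every node ends with the element-wise sum: 0 + 1 + 2 + 3 = 6.0.
assert result == [[6.0] * 8 for _ in range(4)]
```

Each node moves only 2·(n−1)/n of the message per phase pair instead of shipping everything to one place, which is why the ring variant scales to large clusters.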
    These processes are sensitive to the speed and responsiveness of the network — what engineers call latency and bandwidth — and shortfalls in either can stall training.
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
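To make the "real-time lookup" concrete, here is a deliberately tiny retrieval sketch: bag-of-words cosine scoring stands in for the learned embeddings and vector databases that production RAG systems actually use, and the document snippets are invented for illustration.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k stored documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "InfiniBand provides low-latency RDMA networking for supercomputers",
    "Spectrum-X adapts Ethernet for distributed AI workloads",
    "NVLink connects GPUs inside a single rack",
]
context = retrieve("which fabric adapts ethernet for AI workloads", docs)[0]
# The retrieved passage is prepended to the user's question for the LLM.
prompt = f"Context: {context}\nQuestion: which fabric adapts ethernet for AI workloads"
assert "Spectrum-X" in context
```

In a multi-tenant cloud, this lookup-and-answer loop runs concurrently for many customers, which is where the isolation and throughput demands come from.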
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable; now they’re a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling the effective data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, with 35% growth in just two years.
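A simplified traffic model (an assumption-level sketch, not NVIDIA's published accounting) shows where the claimed doubling for reductions comes from:

```python
# Per-node traffic for an all-reduce over n nodes, two simplified models.
def ring_traffic(n_nodes, msg_bytes):
    # Ring all-reduce: reduce-scatter plus all-gather each move
    # (n - 1) / n of the message over every node's link.
    return 2 * (n_nodes - 1) / n_nodes * msg_bytes

def in_network_traffic(msg_bytes):
    # With switch-side aggregation (SHARP-style), each node sends its
    # gradients once and receives the reduced result once.
    return msg_bytes

msg = 1 << 30  # 1 GiB of gradients
ratio = ring_traffic(1024, msg) / in_network_traffic(msg)
# For large clusters the ratio approaches 2, i.e. in-network reduction
# roughly doubles the effective bandwidth available to a reduction.
assert 1.99 < ratio < 2.0
```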
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain with an aggregate bandwidth of 130 TB/s. NVLink Switch technology extends this fabric across the rack, letting one system support 9x the GPU count of a conventional 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
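A quick back-of-envelope check on the GB300 NVL72 figures quoted above (an illustrative calculation, not an NVIDIA spec sheet):

```python
gpus = 72
aggregate_tb_s = 130.0                 # NVLink bandwidth across the domain
per_gpu_tb_s = aggregate_tb_s / gpus   # each GPU's share of the fabric
assert round(per_gpu_tb_s, 1) == 1.8   # ~1.8 TB/s of NVLink bandwidth per GPU
assert gpus // 8 == 9                  # "9x the GPU count" of an 8-GPU server
```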
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.
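A quick sanity check that the quoted port counts and per-port rate line up with the 100–400 Tb/s total-bandwidth range:

```python
def total_tb_s(ports, gb_s_per_port=800):
    # Aggregate switch bandwidth: port count times per-port rate, Gb/s -> Tb/s.
    return ports * gb_s_per_port / 1000

assert total_tb_s(128) == 102.4   # the ~100 Tb/s end of the range
assert total_tb_s(512) == 409.6   # the ~400 Tb/s end of the range
```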

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing fees and vendor lock‑in and allows extensive customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
    #gearing #gigawatt #data #center #age
    Gearing Up for the Gigawatt Data Center Age
    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations. Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game. This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction. The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and if there isn’t rethinking on how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and gain extraordinary performance. With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out. 
The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack. The Data Center Is the Computer Training the modern large language modelsbehind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation. These systems rely on distributed computing, splitting massive calculations across nodes, where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce”and “all-to-all”. These processes are susceptible to the speed and responsiveness of the network — what engineers call latencyand bandwidth— causing stalls in training. For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users. Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Tolerating jitter and inconsistent delivery were once acceptable. Now, it’s a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations. 
Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.

With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It's why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world's most powerful supercomputers, with 35% growth in just two years.

For clusters spanning dozens of racks, NVIDIA Quantum-X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co-packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.

But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum-X: a new kind of Ethernet purpose-built for distributed AI.

Spectrum-X Ethernet: Bringing AI to the Enterprise

Spectrum-X reimagines Ethernet for AI. Launched in 2023, Spectrum-X delivers lossless networking, adaptive routing and performance isolation.
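The "doubling data bandwidth for reductions" claim above follows from simple traffic accounting. The back-of-envelope model below is an illustration, not a NIC-accurate benchmark: it compares per-GPU wire traffic for a bandwidth-optimal ring all-reduce against an in-network (SHARP-style) reduction, where the switch fabric performs the summation.

```python
def allreduce_sent_bytes_per_gpu(size_bytes, n_gpus, in_network=False):
    """Approximate bytes each GPU transmits for one all-reduce of size_bytes.

    Ring all-reduce: reduce-scatter plus all-gather moves
    2 * (n - 1) / n * size_bytes per GPU, approaching 2x the data size.
    In-network reduction: each GPU sends its data once and the switch
    tree returns the reduced result, so wire traffic is roughly halved,
    i.e. effective reduction bandwidth roughly doubles.
    """
    if in_network:
        return size_bytes
    return 2 * (n_gpus - 1) / n_gpus * size_bytes

ring = allreduce_sent_bytes_per_gpu(1e9, 1024)          # ~2.0 GB on the wire
sharp = allreduce_sent_bytes_per_gpu(1e9, 1024, True)   # 1.0 GB on the wire
print(ring / sharp)                                     # ~2.0: the doubling effect
```

The ratio is independent of message size and approaches exactly 2 as the GPU count grows, which is where the headline figure comes from.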
The SN5610 switch, based on the Spectrum-4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA's congestion control to maintain 95% data throughput at scale. Spectrum-X is fully standards-based Ethernet. In addition to supporting Cumulus Linux, it supports the open-source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.

Spectrum-X brings InfiniBand's best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum-X, including the world's most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.

A Portfolio for Scale-Up and Scale-Out

No single network can serve every layer of an AI factory. NVIDIA's approach is to match the right fabric to the right tier, then tie everything together with software and silicon.

NVLink: Scale Up Inside the Rack

Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8-GPU server. With NVLink, the entire rack becomes one large GPU.

Photonics: The Next Leap

To reach million-GPU AI factories, the network must break the power and density limits of pluggable optics.
NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional pluggable optics, paving the way for gigawatt-scale AI factories.

Delivering on the Promise of Open Standards

Spectrum-X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum-X is fully standards-based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association's InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA's software stack — including the NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end-to-end integration deliver better latency and throughput. SONiC, the open-source network operating system hardened in hyperscale data centers, eliminates licensing fees and vendor lock-in and allows deep customization, but operators still choose purpose-built hardware and software bundles to meet AI's performance needs. In practice, open standards alone don't deliver deterministic performance; they need innovation layered on top.

Toward Million-GPU AI Factories

AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA-powered AI infrastructure. The next horizon is gigawatt-class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
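The headline bandwidth figures quoted in the sections above are internally consistent, as a quick arithmetic check shows (all inputs come from the text; the per-GPU NVLink number is derived, not quoted):

```python
# GB300 NVL72: 72 Blackwell Ultra GPUs share 130 TB/s of aggregate NVLink bandwidth.
per_gpu_tb_s = 130 / 72
print(round(per_gpu_tb_s, 2))   # ~1.81 TB/s of NVLink bandwidth per GPU

# Co-packaged photonics switches: 128 to 512 ports at 800 Gb/s each.
low_tb_s = 128 * 800 / 1000     # 102.4 Tb/s, quoted as ~100 Tb/s
high_tb_s = 512 * 800 / 1000    # 409.6 Tb/s, quoted as ~400 Tb/s
print(low_tb_s, high_tb_s)
```

So the quoted 100-400 Tb/s photonics range is simply the port count times the 800 Gb/s line rate, rounded down.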
The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across racks. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
    Gearing Up for the Gigawatt Data Center Age
    blogs.nvidia.com
  • Hey everyone! Do you love programming and love discovering new tools? Today I have a video for you with something really special!

    The video showcases "CodeGeeX", the most powerful free coding tool, one that outperforms GPT-5 and Claude. This tool isn't just easy to use: it can help you write complex code and lets you get creative with your projects, like a snake game or a portfolio website. Honestly, I was amazed by its capabilities!

    In my own experience, I used CodeGeeX to build a to-do app project and genuinely felt a big difference in productivity. How many ideas do you have that you'd love to bring to life? Go ahead and try this great tool.

    Check out the video and enjoy learning!

    https://www.youtube.com/watch?v=Nev61vgVkTs

    #CodeGeeX #free_coding #AI #learn_coding #GPT5
  • Hello friends!

    I have to tell you about something new and exciting: the new video where I present "What You Can't Afford Not to Know in the Age of AI - The First Experiment". In this video, we build a personal portfolio website using only the free version of ChatGPT and Visual Studio Code, even if you're a beginner! Every step is explained in detail, and you can publish the site for free.

    Personally, I've found that AI opens new doors for us and makes even crazy ideas easy to bring to life. Honestly, the experience was fun and incredibly useful!

    Watch the video and let's explore together how to develop our skills in this new era.

    https://www.youtube.com/watch?v=LsquJgEyBgg

    #AI #artificialintelligence #chatgpt #coding #webdevelopment
  • Embracer will deploy 'targeted cost initiatives' and AI tech to unlock more value

Chris Kerr, Senior Editor, News, GameDeveloper.com | August 14, 2025 | 3 Min Read
Logo via Embracer Group / Kingdom Come Deliverance screenshot via Warhorse Studios

Embracer Group — which is in the process of splitting into three standalone companies following an era of mass layoffs, project cancellations, and divestments — has confirmed it will explore "targeted cost initiatives" and look to streamline processes with the help of AI technology during what CEO Phil Rogers described as a "transition year" for the Swedish conglomerate.

Addressing investors in the company's latest fiscal report, Rogers said Embracer's performance during the first quarter of the current financial year was "quiet" and said the company must now focus on "operational and strategic execution" to position itself for long-term growth.

Consolidated net sales decreased by 31 percent to SEK 3,355 million ($350.5 million) during Q1. Breaking that total down by operating segment, PC/Console Games decreased by 38 percent to SEK 1,641 million; Mobile Games decreased by 63 percent to SEK 520 million; and Entertainment & Services increased by 41 percent to SEK 1,193 million.

"As we move forward, we are taking a conservative approach for this current year, reflecting a measured view on the timing and performance of our PC/Console release schedule in addition to potential continued softness in our catalog following Q1," said Rogers, who officially stepped up as CEO on August 1, 2025, to allow outgoing chief exec Lars Wingefors to take on the mantle of executive chair.

"This year is a transition period as we lay the foundations of Fellowship Entertainment and focus on building a business led by key IP and empowered teams, in a structure enabling focus and operational discipline. It is paramount that we concentrate on the quality and long-term value of our releases rather than chasing short-term gains."

What does that mean for Embracer employees?
According to Rogers, the company will implement "targeted cost initiatives" relating to underperforming businesses. Those initiatives could potentially result in more divestments. Game Developer has reached out to Embracer to clarify whether those plans could include layoffs.

Embracer CEO believes AI will become an "increasingly supportive force"

Rogers claims Embracer is facing a "pivotal moment" and must double down on its biggest franchises. He explained that the company has increased capital allocation to its core IPs, which include The Lord of the Rings, Tomb Raider, Kingdom Come Deliverance, Metro, Dead Island, Darksiders, and Remnant. He believes those franchises represent "one of the most exciting IP portfolios in the industry" but said Embracer must now "sharpen" its focus. The company currently has nine triple-A titles slated for release, excluding projects being financed by external partners.

"As previously noted, one or a couple of these games will most likely slip into FY 2028/29, but we do see a clear increase in release cadence as compared to our average of just over 1 AAA game per year in the past five years," said Rogers, discussing that release slate. "We expect the increased released pipeline in combination with lower fixed costs will notably improve free cashflow FY 2026/27 onwards."

As Embracer prepares to evolve into Fellowship Entertainment, Rogers said the company must significantly rewire its business to create a "powerhouse unit" within its PC and console division. According to Rogers, leveraging AI technologies will be an integral part of that process. His predecessor had already suggested that ignoring AI tools could leave the company "outrun" by its competitors.

"This comes through smarter collaboration, increased streamlining, shared services and with AI as an increasingly supportive force," Rogers continued. "These factors will be key to unlocking value and expanding margins."
As the table below shows, Embracer has already significantly reduced its workforce following a number of layoffs and key divestments. Its entire workforce totaled 7,228 people (including 5,452 game developers) as of June 2025. That's a notable decrease from the 13,712 workers (and 10,713 game developers) it employed at the end of June 2024. The company currently has 116 video games in development, down from the 127 projects it had in the pipeline this time last year, but actually up from the 108 titles it showcased in March.
    www.gamedeveloper.com
    Chris Kerr, Senior Editor, News, GameDeveloper.com | August 14, 2025 | 3 Min Read

    Logo via Embracer Group / Kingdom Come: Deliverance screenshot via Warhorse Studios

    Embracer Group, which is in the process of splitting into three standalone companies following an era of mass layoffs, project cancellations, and divestments, has confirmed it will explore "targeted cost initiatives" and look to streamline processes with the help of AI technology during what CEO Phil Rogers described as a "transition year" for the Swedish conglomerate.

    Addressing investors in the company's latest fiscal report, Rogers said Embracer's performance during the first quarter of the current financial year was "quiet" and that the company must now focus on "operational and strategic execution" to position itself for long-term growth.

    Consolidated net sales decreased by 31 percent to SEK 3,355 million ($350.5 million) during Q1. Breaking that total down by operating segment: PC/Console Games decreased by 38 percent to SEK 1,641 million; Mobile Games decreased by 63 percent to SEK 520 million; and Entertainment & Services increased by 41 percent to SEK 1,193 million.

    "As we move forward, we are taking a conservative approach for this current year, reflecting a measured view on the timing and performance of our PC/Console release schedule in addition to potential continued softness in our catalog following Q1," said Rogers, who officially stepped up as CEO on August 1, 2025, to allow outgoing chief exec Lars Wingefors to take on the mantle of executive chair.

    "This year is a transition period as we lay the foundations of Fellowship Entertainment and focus on building a business led by key IP and empowered teams, in a structure enabling focus and operational discipline. It is paramount that we concentrate on the quality and long-term value of our releases rather than chasing short-term gains."

    What does that mean for Embracer employees? According to Rogers, the company will implement "targeted cost initiatives" relating to underperforming businesses. Those initiatives could potentially result in more divestments. Game Developer has reached out to Embracer to clarify whether those plans could also include layoffs.

    Embracer CEO believes AI will become an "increasingly supportive force"

    Rogers claims Embracer is facing a "pivotal moment" and must double down on its biggest franchises. He explained that the company has increased capital allocation to its core IPs, which include The Lord of the Rings, Tomb Raider, Kingdom Come: Deliverance, Metro, Dead Island, Darksiders, and Remnant. He believes those franchises represent "one of the most exciting IP portfolios in the industry" but said Embracer must now "sharpen" its focus. The company currently has nine triple-A titles slated for release, excluding projects being financed by external partners.

    "As previously noted, one or a couple of these games will most likely slip into FY 2028/29, but we do see a clear increase in release cadence as compared to our average of just over 1 AAA game per year in the past five years," said Rogers, discussing that release slate. "We expect the increased release pipeline in combination with lower fixed costs will notably improve free cash flow FY 2026/27 onwards."

    As Embracer prepares to evolve into Fellowship Entertainment, Rogers said the company must significantly rewire its business to create a "powerhouse unit" within its PC and console division. According to Rogers, leveraging AI technologies will be an integral part of that process. His predecessor had already suggested that ignoring AI tools could lead to the company being "outrun" by its competitors.

    "This comes through smarter collaboration, increased streamlining, shared services and with AI as an increasingly supportive force," Rogers continued. "These factors will be key to unlocking value and expanding margins."

    As the table below shows, Embracer has already significantly reduced its workforce following a number of layoffs and key divestments. Its entire workforce totaled 7,228 people (including 5,452 game developers) as of June 2025. That's a notable decrease from the 13,712 workers (and 10,713 game developers) it employed at the end of June 2024. The company currently has 116 video games in development, down from the 127 projects it had in the pipeline this time last year, but actually up on the 108 titles it showcased in March.
  • EA SPORTS™ Madden NFL 26 Launches Worldwide Today—Powered by Real NFL Data, and Unleashing the Most Explosive and Immersive NFL Experience Yet

    Experience all-new QB DNA and Coach DNA, Signature Quarterback Play, Adaptive Coaching, explosive movement, and true NFL presentation as Madden NFL 26 launches for the first time on Nintendo Switch 2 and Amazon Luna.
    REDWOOD CITY, Calif.--(BUSINESS WIRE)--
    Just in time for the NFL season, Electronic Arts Inc. (NASDAQ: EA) and EA SPORTS™ have released EA SPORTS™ Madden NFL 26, the most explosive, authentic, and immersive football experience in franchise history. Built from Sundays and powered by AI-driven systems trained on thousands of real NFL plays, the game debuts all-new QB and Coach DNA for player-specific traits, signature playstyles, and adaptive strategy. Players will experience dynamic Football Weather, enhanced physics-based gameplay, and deeper customization across Franchise and Superstar modes, all on PlayStation®5, Xbox Series X|S, Nintendo Switch™ 2, Amazon Luna, and PC.

    “Madden NFL 26 is a true leap forward in authenticity and control,” said Daryl Holt, SVP and Group GM, EA SPORTS. “With smarter quarterbacks, adaptive coaching AI, and our breakthrough QB DNA and Coach DNA systems, every snap feels true to the NFL fans love. Explosive movement, dynamic weather, and authentic stadium atmospheres capture the passion and drama of the game. And with Madden NFL 26 now on Nintendo Switch 2, we’re bringing that unmatched realism and energy to more fans than ever before.”

    Through a new partnership with Nintendo announced in the spring, EA SPORTS brings the authentic Madden NFL experience to Nintendo Switch 2 for the first time. By launching on Nintendo’s console, Madden NFL 26 expands its reach to a broader, more diverse audience, offering explosive gameplay and immersive NFL action anytime, anywhere.

    Fans everywhere can now hop in and experience Madden NFL 26’s game-changing AI innovations with QB DNA and Coach DNA, delivering more immersive NFL atmospheres on game day, and expanding fan-favorite modes with new depth and strategy across every snap in its feature set:

    QB DNA: Star NFL quarterbacks move, look, and feel more like the superstars they are. Leveraging AI-powered machine learning, QB DNA introduces unique pocket behaviors, signature throwing motions, and distinct scrambling styles that mirror real-life NFL signal-callers, delivering the most lifelike quarterback gameplay in franchise history.

    Coach DNA: Coaches employ real philosophies and adaptive strategies based on nearly a decade of NFL data. Dynamic coach suggestions and multi-player counters provide smart play recommendations and strategic depth, making every matchup feel authentic and challenging.

    Powerful NFL Movement & Physics Expansion: Experience the league’s unmatched athleticism with updated player movement, physics-based interactions, and new mechanics like Custom Defensive Zones, Adaptive Coverage, and enhanced D-line stunts and twists.

    Football Weather: Extreme weather conditions such as snow, fog, and rain impact visibility, movement, stamina, and ball security, adding a new layer of realism and strategy to every game.

    True NFL Gameday Experience: From the Skol chant in Minnesota to Baltimore’s pre-game light show, authentic team traditions, dynamic halftime shows, and custom broadcast packages immerse players in the sights and sounds of the NFL.

    Franchise Mode: Introduces four new coach archetypes with evolving abilities, a deeper Weekly Strategy system featuring custom Play Sheets, and enhanced player health management with real-time status updates. Stay connected to your league through the new Approval Rating system, plus weekly recaps from Scott Hanson and commentary from Rich Eisen.

    Superstar Mode: Import your Road to Glory player and shape their NFL career with evolving storylines, draft-impacting performances, and weekly goals. Manage relationships, development, and durability through the new Sphere of Influence and Wear & Tear systems as you rise from college star to NFL legend.

    Madden Ultimate Team™: Build your dream roster with NFL legends and stars, tackle new dynamic MUT Events, and rise through 50-player Leaderboard Campaigns. NFL Team Pass delivers team-specific rewards and ever-evolving ways to play.

    The Madden NFL 26 Soundtrack features 77 songs across menus and stadiums, offering expanded control, variety, and immersion. New this season, players can customize their menus playlist with both new releases and iconic stadium anthems. The soundtrack includes music from Twenty One Pilots, Lizzo, Lil Nas X, BIA, and Luke Combs, plus over 30 stadium classics from Green Day, Rage Against The Machine, Foo Fighters, and more, all curated to amplify the energy and authenticity of the NFL experience.

    Additionally, Madden NFL 26 is now available on Amazon Luna, bringing the authentic football experience to even more players through cloud gaming. Luna lets fans play instantly on devices they already own, including Fire TV, tablets, mobile phones, smart TVs, and more, with no downloads, installs, or updates required. Wherever Luna is available, players can enjoy all that Madden NFL 26 has to offer, including modes like Franchise and Superstar.

    On mobile, Madden NFL 26 Mobile delivers the ultimate football experience on your phone, packed with more control, strategy, and customization than ever before. This season brings a fresh slate of features, including Dual Player Cards that cover multiple positions and unlock unique chemistry boosts. Fine-tune your roster with over 20 upgradeable Player Traits, and take your lineup to the next level with Player EVO, a new system that lets you absorb higher OVR players to power up your favorites. Whether you're a returning veteran or new to the game, Madden NFL 26 Mobile offers deeper gameplay, more flexibility, and a true NFL experience, right at your fingertips. Download Madden NFL 26 Mobile for free from the App Store® or Google Play™ today.

    EA Play members can live every stadium-shaking moment with the EA Play* 10-hour game trial, available now. Members also score monthly Ultimate Team™ Packs, as well as 10% off EA digital purchases, including game downloads, Madden Points, and DLC. For more information on EA Play, please visit https://www.ea.com/ea-play. Stay tuned for more Madden NFL 26 details on the official Madden NFL website and social media (Instagram, X, TikTok, and YouTube).

    *Conditions, limitations and exclusions apply. See EA Play Terms for details.

    For Madden NFL 26 assets, visit: EAPressPortal.com.

    Madden NFL 26 is developed in Orlando, Florida and Madrid, Spain by EA SPORTS and is available worldwide August 14 for Xbox Series X|S, PlayStation 5, Nintendo Switch 2, Amazon Luna, and PC via the EA app for Windows, Steam, and the Epic Games Store.

    About Electronic Arts

    Electronic Arts (NASDAQ: EA) is a global leader in digital interactive entertainment. The Company develops and delivers games, content and online services for Internet-connected consoles, mobile devices and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately $7.5 billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™ and EA SPORTS F1®. More information about EA is available at www.ea.com/news.

    EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission.

    Erin Exum
    Director, Integrated Comms
    #sports #madden #nfl #launches #worldwide
    EA SPORTS™ Madden NFL 26 Launches Worldwide Today—Powered by Real NFL Data, and Unleashing the Most Explosive and Immersive NFL Experience Yet
    news.ea.com
  • EA SPORTS™ NHL® 26 Elevates Authenticity With NHL EDGE Partnership and ICE-Q 2.0 Integration

    August 11, 2025

    Advanced League-Backed Positional Data Now Drives Hockey’s Most Lifelike Experience Off The Ice; NHL EDGE Brings Advanced Analytics to Chel
    REDWOOD CITY, Calif.--(BUSINESS WIRE)--
    Electronic Arts Inc. today announced a groundbreaking new partnership with the National Hockey League to integrate official NHL EDGE data directly into EA SPORTS™ NHL® 26, marking a new era for authenticity in sports gaming and entertainment. This collaboration brings the same advanced analytics used by NHL teams into the hands of players and fans, powering the all-new ICE-Q 2.0 gameplay system for the most realistic hockey gameplay ever.

    For the first time in franchise history, real-world Puck and Player Tracking data, captured by infrared technology and arena cameras across all 32 NHL rinks, is seamlessly woven into the fabric of NHL 26. The NHL EDGE system tracks millions of data points, from skating acceleration and top speed to shot power, shot location, and save types, all of which now inform on-ice behavior in NHL 26. ICE-Q 2.0, powered by NHL EDGE, ensures the in-game action mirrors the intensity and individuality of the NHL’s best.

    With ICE-Q 2.0, players will notice significant differentiation between superstar athletes. Real NHL data directly influences player attributes and personal tendencies like skating speed, shot power, signature playstyles, and even goalie reactions, elevating the experience with authentic intensity and strategic depth. This unique integration allows every player to live their NHL dream, moving, thinking, and playing just like their favorite athletes.

    "The energy from the NHL is electric, with its diehard fans and the talent and physicality of the athletes. It's our job to translate that energy from the ice to the screen and make it as realistic as possible," said Cam Weber, President, EA SPORTS. 
"By harnessing the very same data points the NHL uses to inform all sorts of game-time strategies, EA SPORTS is doubling down on innovations and partnerships to make the most true-to-life reflection of hockey possible."

    “The partnership with EA SPORTS and NHL EDGE is about more than just numbers, it’s about bringing the soul of our game to life for a new generation,” said Brian Jennings, NHL Chief Branding Officer and Senior Executive Vice President. “We’ve spent years developing our best-in-class Puck and Player Tracking system to help grow the game and create new fan experiences. By placing league-grade analytics in the hands of every fan, we’re deepening the connection between the real NHL and Chel, and setting a new standard for sports gaming immersion.”

    Developed by EA Vancouver and EA Bucharest, EA SPORTS NHL 26 will be available on September 12, 2025 on PlayStation®5 and Xbox Series X|S. Players who pre-order the Deluxe Edition will receive up to seven days of early access and a host of in-game items and rewards.

    EA Play members can play like superstars with EA SPORTS™ NHL 26 in the EA Play** 10-hour early access trial starting September 5, 2025. Members also score perks, including 3,000 WOC Coins and Season Pass XP Multiplier Tokens, as well as 10% off EA digital content including pre-orders, game downloads, NHL Points, and DLC. For more information on EA Play, please visit https://www.ea.com/ea-play. To keep up to date with the latest game news and information, follow our social channels.

    PRESS ASSETS ARE AVAILABLE AT EAPressPortal.com

    *Conditions and restrictions apply. See ea.com/games/nhl/nhl-26/game-disclaimers for details.

    **Conditions, limitations and exclusions apply. See EA Play Terms for details.

    About Electronic Arts

    Electronic Arts is a global leader in digital interactive entertainment. 
The Company develops and delivers games, content and online services for Internet-connected consoles, mobile devices and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately $7.5 billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™ and EA SPORTS F1®. More information about EA is available at www.ea.com/news.

    EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission. © 2024 Electronic Arts Inc. Electronic Arts, EA SPORTS, Frostbite, and the EA SPORTS and Frostbite logos are trademarks of Electronic Arts Inc.

    About the NHL

    The National Hockey League, founded in 1917, consists of 32 Member Clubs. Each team roster reflects the League’s international makeup with players from more than 20 countries represented, all vying for the most cherished and historic trophy in professional sports – the Stanley Cup®. Every year, the NHL entertains more than 670 million fans in-arena and through its partners on national television and radio; more than 191 million followers (league, team and player accounts combined) across Facebook, Twitter, Instagram, Snapchat, TikTok, and YouTube; and more than 100 million fans online at NHL.com.

    The League broadcasts games in more than 160 countries and territories through its rightsholders, including ESPN, WBD Sports and NHL Network in the U.S.; Sportsnet and TVA Sports in Canada; Viaplay in the Nordics, Baltics, Poland and the UK; MTV3 in Finland; Nova in the Czech Republic and Slovakia; Sky Sports and ProSieben in Germany; MySports in Switzerland; and CCTV5+ in China; and reaches fans worldwide with games available to stream in every country. Fans are engaged across the League’s digital assets on mobile devices via the free NHL® App; across nine social media platforms; on SiriusXM NHL Network Radio™; and on NHL.com, available in eight languages and featuring unprecedented access to player and team statistics as well as every regular-season and playoff game box score dating back to the League’s inception, powered by SAP. NHL Productions develops compelling original programming featuring unprecedented access to players, coaches and League and team personnel for distribution across the NHL’s social and digital platforms.

    The NHL is committed to building healthy and vibrant communities using the sport of hockey to celebrate fans of every race, color, religion, national origin, gender identity, age, sexual orientation, and socio-economic status. The NHL’s Hockey Is For Everyone® initiative reinforces that the official policy of the sport is one of inclusion on the ice, in locker rooms, boardrooms and stands. The NHL is expanding access and opportunity for people of all backgrounds and abilities to play hockey, fostering more inclusive environments and growing the game through a greater diversity of participants. To date, the NHL has invested more than million in youth hockey and grassroots programs, with a commitment to invest an additional million for diversity and inclusion programs over the next year.

    NHL and the NHL Shield are registered trademarks of the National Hockey League. NHL and NHL team marks are the property of the NHL and its teams. © 2025 NHL. All Rights Reserved.

    About the National Hockey League Players’ Association

    The National Hockey League Players’ Association, established in 1967, is a labour organization whose members are the players in the National Hockey League. The NHLPA works on behalf of the players in varied disciplines such as labour relations, product licensing, marketing, international hockey and community relations, all in furtherance of its efforts to promote its members and the game of hockey. In 1999, the NHLPA Goals & Dreams fund was launched as a way for the players to give something back to the game they love. Over the past 25 years, tens of thousands of deserving children in 44 countries have benefited from the players' donations of hockey equipment. NHLPA Goals & Dreams has donated more than million to grassroots hockey programs, making it the largest program of its kind. For more information on the NHLPA, please visit www.nhlpa.com.

    NHLPA, National Hockey League Players’ Association and the NHLPA logo are registered trademarks of the NHLPA and are used under license. © NHLPA. All Rights Reserved.

    Natalia Lombardi
    Global Public Relations Manager
    Source: Electronic Arts Inc.

    EA SPORTS™ NHL® 26 Elevates Authenticity With NHL EDGE Partnership and ICE-Q 2.0 Integration
    news.ea.com · August 11, 2025
  • Vibrant Painterly-Style Magical Sphere Created with Blender

    Take a moment to admire this vibrant setup created by Digital Artist and Software Engineer David Lettier. What makes this work stand out is its stunning painterly graphic style with violet and cyan reflections, combined with dynamically changing animated frames. To create this fascinating prop, the artist utilized Blender. David Lettier's portfolio features many appealing hand-painted-style works, such as this kitchen interior, a 3D lamp, Christmas decorations, and more. Also, check out the amazing works of Yuasa Yuu that look like paintings. Follow David Lettier on X/Twitter, don't forget to join our 80 Level Talent platform and our new Discord server, and follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
    80.lv