• Have you ever felt like everything in your business could go sideways at any moment, and you need to stay flexible?

    In the second episode of the second season of "Skills Podcast with Nadim Barakat" (بودكاست مهارات مع نديم بركات), we host Ibrahim Naji to talk about flexibility as a business skill. The video covers how to adapt to rapid market changes and always stay on top. This skill matters a lot, especially in tough times!

    Personally, in my experience with business, I've learned that flexibility is the secret to success. The more you can adapt to circumstances, the better you can sustain and grow your business. So how do you build flexibility into your own project?

    Don't miss this video; it will help you a lot on your journey!

    https://www.youtube.com/watch?v=gCks9GF613s
    #مهارات_البيزنس #Flexibility #Podcast #إبراهيم_ناجي #نديم_بركات
  • Fur Grooming Techniques For Realistic Stitch In Blender (80.lv)

Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers and do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective. "Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D: studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow. After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch:

- First, because I truly love the character.
- Second, I wanted to clearly see my own progress over the past two years.
- Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: first, to model everything completely from scratch, starting with a sphere; second, to reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created with it. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush.
- Then I check proportions in Blender.
- Then I fix mistakes, tweak volumes, and refine the silhouette.

Since Stitch's shape isn't overly complex, I broke him down into a few main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both body and face. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).

While studying various references, I noticed something interesting: Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is from the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constant switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean, optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose. (For the claws, I used overlapping UVs to preserve texel density for the other parts.)

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.
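
For those who prefer scripting, the detach-and-attach ear trick described above maps directly onto Blender's Python API. Below is a minimal sketch, assuming two imported meshes and a vertex group marking the left ear; every name here is an illustrative placeholder, not the project's actual naming:

```python
import bpy

body = bpy.data.objects["Stitch_RightEarVersion"]   # symmetrical mesh, correct right ear
donor = bpy.data.objects["Stitch_LeftEarVersion"]   # symmetrical mesh, correct left ear

# 1) Separate the left ear from the donor mesh by vertex group.
bpy.context.view_layer.objects.active = donor
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
donor.vertex_groups.active = donor.vertex_groups["LeftEar"]
bpy.ops.object.vertex_group_select()
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')

# The separated piece appears as a new object, typically suffixed ".001".
ear = bpy.data.objects["Stitch_LeftEarVersion.001"]

# 2) Join the detached ear onto the main body (the active object is the join target).
bpy.ops.object.select_all(action='DESELECT')
ear.select_set(True)
body.select_set(True)
bpy.context.view_layer.objects.active = body
bpy.ops.object.join()
```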

Texturing

When planning Stitch's texturing, I knew the main body texture would be fairly simple, with much of the visual detail carried by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading: a lighter tone on the front (belly) and a darker tone on the back and nape.
- The nose and ears, which demanded separate focus.

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them toward a more realistic look. This involved removing bright colors, adding more variation to the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail below), I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.
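
That AO pass is easy to reason about as per-pixel math: a Multiply blend at reduced opacity is just a linear interpolation between the base color and the base color multiplied by the AO value. Here is a tiny sketch of that blend, assuming normalized float channels (the 35% figure is the only value taken from the text above):

```python
def ao_multiply(base_rgb, ao, opacity=0.35):
    """Multiply-blend an AO value over a base color at reduced opacity.

    base_rgb: floats in [0, 1]; ao: float in [0, 1], where 1.0 means fully open.
    Equivalent to a Multiply layer at ~35% opacity in a texturing package.
    """
    return tuple(
        c * (1.0 - opacity) + (c * ao) * opacity  # lerp(base, base * ao, opacity)
        for c in base_rgb
    )

# Example: a mid-blue fur tone inside a half-occluded crevice (ao = 0.5)
print(ao_multiply((0.20, 0.35, 0.80), 0.5))  # darkens only moderately
```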

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality, most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed. In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and more visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase used only two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.
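
As a rough idea of what one of those per-section systems looks like under the hood, here is a minimal Blender Python sketch: one hair system emitted from a painted vertex group, with a texture slot driving the Clump and Rough influence, as described above. The object name, group name, and every number are illustrative placeholders, not the project's actual values:

```python
import bpy

obj = bpy.data.objects["Stitch_Body"]  # assumed object name

# One particle system per body section, gated by a painted vertex group.
mod = obj.modifiers.new(name="Fur_Chest", type='PARTICLE_SYSTEM')
st = mod.particle_system.settings

st.type = 'HAIR'
st.count = 2000                  # parent guides (placeholder)
st.hair_length = 0.04            # strand length in meters
st.hair_step = 3                 # guide segments: 2 for blocking, 3 to 5 for detail
st.child_type = 'INTERPOLATED'   # children interpolate between groomed parents
st.child_nbr = 20                # viewport children per parent
st.rendered_child_count = 80     # render-time children per parent
st.clump_factor = 0.4            # base clumping; a map can vary it per area

# Emit only where this section's weight paint allows.
mod.particle_system.vertex_group_density = "ChestDensity"

# A painted map modulating clump/roughness locally (image left unassigned here).
slot = st.texture_slots.add()
slot.texture = bpy.data.textures.new("FurClumpMap", type='IMAGE')
slot.texture_coords = 'UV'
slot.use_map_time = False        # don't let the map drive emission time
slot.use_map_clump = True
slot.use_map_rough = True
```

Repeating this per section (head, ears, front and back torso, and so on) and layering a few thin detail systems on top is roughly what the 25-system setup above amounts to.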

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This added visual depth and made the fur look significantly more natural and lifelike.
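
One way to read that description at the node level is with Blender's Hair Info node: its Intercept output (0 at the root, 1 at the tip) can drive a root-to-tip color ramp, while the per-strand Random output adds a subtle hue shift. A sketch of such a shader, using the Principled Hair BSDF and the legacy MixRGB node; the colors and factors are placeholders, not the author's values:

```python
import bpy

mat = bpy.data.materials.new("FurShader")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out = nodes.new('ShaderNodeOutputMaterial')
hair = nodes.new('ShaderNodeBsdfHairPrincipled')
hair.parametrization = 'COLOR'  # drive the BSDF with a plain color input

info = nodes.new('ShaderNodeHairInfo')

# Root-to-tip gradient: darker near the roots, brighter toward the tips.
ramp = nodes.new('ShaderNodeValToRGB')
ramp.color_ramp.elements[0].color = (0.05, 0.08, 0.25, 1.0)
ramp.color_ramp.elements[1].color = (0.25, 0.40, 0.85, 1.0)

# Per-strand variation: Random is a stable 0-1 value per strand.
scale = nodes.new('ShaderNodeMath')
scale.operation = 'MULTIPLY'
scale.inputs[1].default_value = 0.08  # keep the shift subtle

mix = nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'HUE'
mix.inputs['Color2'].default_value = (0.3, 0.5, 0.9, 1.0)

links.new(info.outputs['Intercept'], ramp.inputs['Fac'])
links.new(info.outputs['Random'], scale.inputs[0])
links.new(scale.outputs['Value'], mix.inputs['Fac'])
links.new(ramp.outputs['Color'], mix.inputs['Color1'])
links.new(mix.outputs['Color'], hair.inputs['Color'])
links.new(hair.outputs['BSDF'], out.inputs['Surface'])
```

Assign the material to the emitter mesh, and the strands pick it up through the material slot selected in the particle settings.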

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result; this stage confirmed that the training I'd gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many of its technical aspects. For the ears, I set up a relatively simple system of several bones connected with inverse kinematics. This gave me flexible, intuitive control during posing and allowed for adding dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses. Stitch is so expressive and full of personality that I wanted to try hundreds of them, but I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying them directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. Done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce fully animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.
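
A three-point rig plus a dim HDRI like the one described is straightforward to set up through Blender's Python API as well. A minimal sketch, assuming the scene already has a world; the positions, energies, and HDRI path are placeholders to adjust by eye:

```python
import bpy
import math

scene = bpy.context.scene

def add_light(name, energy, loc, rot_deg):
    """Create an area light with the given energy, location, and rotation."""
    light = bpy.data.lights.new(name, type='AREA')
    light.energy = energy  # watts
    obj = bpy.data.objects.new(name, light)
    obj.location = loc
    obj.rotation_euler = [math.radians(a) for a in rot_deg]
    scene.collection.objects.link(obj)
    return obj

# Classic three-point setup; values are starting points, not a recipe.
add_light("Key",  1000, (3, -3, 3),  (55, 0, 45))
add_light("Fill",  300, (-4, -2, 2), (70, 0, -60))
add_light("Rim",   600, (0, 4, 3),   (-60, 0, 180))

# Low-intensity HDRI so the environment only seasons the ambient light.
world = scene.world
world.use_nodes = True
env = world.node_tree.nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load("/path/to/studio.hdr")  # placeholder path
bg = world.node_tree.nodes['Background']
world.node_tree.links.new(env.outputs['Color'], bg.inputs['Color'])
bg.inputs['Strength'].default_value = 0.3  # the ~0.3 mentioned above
```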

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish.

The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine

#fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. 
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open.While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. 
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.When it came to UV mapping, I divided Stitch into two UDIM tiles:The first UDIM includes the head with ears, torso, arms, and legs.The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and noseSince the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters.This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. 
As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.TexturingWhen planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the frontand a darker tone on the back and napeThe nose and ears, these zones, demanded separate focusAt the initial texturing/blocking stage, the ears looked too cartoony, which didn’t fit the style I wanted. So, I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base.For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:Base detail: Baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.Lighter layer: Applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.Organic detail: In animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.Softness: To make the nose visually softer, like in references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I add an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.That covers the texturing of Stitch’s body. I also created a separate texture for the fur. This was simpler, I disabled unnecessary layers like ears and eyelids, and left only the base ones corresponding to the body’s color tones.During grooming, I also created textures for the fur's clamps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.FurAnd finally, I moved on to the part that was most important to me, the very reason I started this project in the first place. Fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. 
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts. I duplicated the main Particle System and created individual hair systems for each area where needed.In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach.Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes.I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.As part of the detailing stage, I also increased the number of segments in the Hair Guides.While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. 
This gave me much more control over fur shape and flow.The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done.I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result, this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.Rigging, Posing & SceneOnce I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.I divided the rigging process into three main parts:Body rig, for posing and positioning the characterFacial rig, for expressions and emotionsEar rig, for dynamic ear controlRigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. 
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. 
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.When it came to UV mapping, I divided Stitch into two UDIM tiles:The first UDIM includes the head with ears, torso, arms, and legs.The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (For the claws, I used overlapping UVs to preserve texel density for the other parts)Since the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters.This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. 
Thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands. When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (for the claws, I used overlapping UVs to preserve texel density for the other parts)

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. For this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail carried by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into two main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, which demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation to the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model, with a smart skin material applied over it that added characteristic bumps
- Lighter layer: applied via a mask using the AO channel; this darkened the crevices and brightened the bumps, creating a multi-layered effect
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask; this created subtle dents and wrinkles that softened the look

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body. I also created a separate texture for the fur. This one was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness, and in Substance 3D Painter I additionally painted masks for better fur detail.
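One aside on the AO-multiply step mentioned above: it is done in Substance 3D Painter, but the same effect can be approximated directly in Blender's shader editor. Below is a minimal, hypothetical bpy sketch; the material name "StitchBody" and the image names are assumptions for illustration:

```python
import bpy

# Assumed: a material "StitchBody" with a Principled BSDF, and the baked
# base color / AO images already loaded into the blend file.
mat = bpy.data.materials["StitchBody"]
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

base_tex = nodes.new("ShaderNodeTexImage")
base_tex.image = bpy.data.images["stitch_basecolor.png"]

ao_tex = nodes.new("ShaderNodeTexImage")
ao_tex.image = bpy.data.images["stitch_ao.png"]
ao_tex.image.colorspace_settings.name = 'Non-Color'  # AO is data, not color

# Multiply the AO over the base color at ~35% strength,
# mirroring a Multiply layer at 35% opacity in Painter.
mix = nodes.new("ShaderNodeMix")
mix.data_type = 'RGBA'
mix.blend_type = 'MULTIPLY'
mix.inputs[0].default_value = 0.35                   # Factor
links.new(base_tex.outputs["Color"], mix.inputs[6])  # A (color)
links.new(ao_tex.outputs["Color"], mix.inputs[7])    # B (color)

principled = next(n for n in nodes if n.type == 'BSDF_PRINCIPLED')
links.new(mix.outputs[2], principled.inputs["Base Color"])  # Result (color)
```

The index-based socket access is simply the quickest way to reach the color variant of the Mix node's sockets.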
Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality, most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where they were needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions.
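To make the setup more concrete, here is a minimal bpy sketch of one such per-section hair system, with density driven by a weight-painted vertex group. The object name "Stitch", the group name "HeadDensity", and the specific counts are illustrative assumptions rather than the exact production values:

```python
import bpy

# Assumed: the groomed mesh is named "Stitch" and a weight-painted
# vertex group "HeadDensity" limits where this system grows fur.
obj = bpy.data.objects["Stitch"]

mod = obj.modifiers.new(name="FurHead", type='PARTICLE_SYSTEM')
psys = mod.particle_system
settings = psys.settings

settings.type = 'HAIR'
settings.count = 2000           # guide count for blocking; raised later
settings.hair_length = 0.02     # short base fur, in meters
settings.hair_step = 2          # two segments per guide at the blocking stage
settings.child_type = 'INTERPOLATED'
settings.rendered_child_count = 80  # children fill in density at render time

# Drive emission density with the weight-painted group,
# mirroring the per-section control described above.
psys.vertex_group_density = "HeadDensity"
```

Repeated per section, each with its own vertex group and settings, this is essentially the pattern behind the 25 systems described above.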
Raising the guide segment count gave me much more control over fur shape and flow. And the tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.
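To illustrate the kind of node logic described above, here is a minimal, hypothetical bpy sketch of a root-to-tip gradient with per-strand variation, built on the Hair Info node. The material name, colors, and ranges are placeholder assumptions:

```python
import bpy

# Assumed: the fur particle systems share a material named "StitchFur".
mat = bpy.data.materials["StitchFur"]
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

hair_info = nodes.new("ShaderNodeHairInfo")

# Root-to-tip gradient: Intercept runs from 0 at the root to 1 at the tip,
# so a color ramp maps it to dark roots and brighter tips.
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (0.015, 0.025, 0.10, 1.0)  # dark root
ramp.color_ramp.elements[1].color = (0.08, 0.16, 0.45, 1.0)    # brighter tip
links.new(hair_info.outputs["Intercept"], ramp.inputs["Fac"])

# Per-strand variation: remap the Random output to a narrow 0.9-1.1 range
# and use it as a subtle brightness jitter.
jitter = nodes.new("ShaderNodeMapRange")
jitter.inputs["To Min"].default_value = 0.9
jitter.inputs["To Max"].default_value = 1.1
links.new(hair_info.outputs["Random"], jitter.inputs["Value"])

hsv = nodes.new("ShaderNodeHueSaturation")
links.new(ramp.outputs["Color"], hsv.inputs["Color"])
links.new(jitter.outputs["Result"], hsv.inputs["Value"])

principled = next(n for n in nodes if n.type == 'BSDF_PRINCIPLED')
links.new(hsv.outputs["Color"], principled.inputs["Base Color"])
```

The production shader had more going on, including the texture-driven Clump and Roughness control mentioned earlier, but this captures the core gradient idea.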
Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many of its technical aspects. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics (IK). This gave me flexible, intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce fully animated shots with fur: rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me; it's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!).

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection.
But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
  • How are you all doing, folks? Today I wanted to tell you about something new at Arsenal: "Gyokeres stat shows Arsenal have a new threat - and a challenge for Arteta".

    The article looks at how Viktor Gyokeres gives us a new view of the team's strength and the challenges facing coach Arteta. Honestly, I'm one of those people who love following tactics and how teams change, and in my opinion Gyokeres can be a big addition to the squad, especially with numbers this impressive.

    Have you seen how Arsenal could evolve their style of play thanks to the new players? But how will Arteta deal with these new challenges?

    Finally, I hope this gives everyone a clear picture of the developments in the team.

    https://www.skysports.com/football/news/11095/13418083/viktor-gyokeres-runs-for-arsenal-james-garners-flexibility-and-adrien-trufferts-overlapping-runs-feature-in-the-debrief
    #Arsenal #Gyokeres #Football
  • Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale — All Built on NVIDIA

    AI reasoning, inference and networking will be top of mind for attendees of next week’s Hot Chips conference.
    A key forum for processor and system architects from industry and academia, Hot Chips — running Aug. 24-26 at Stanford University — showcases the latest innovations poised to advance AI factories and drive revenue for the trillion-dollar data center computing market.
    At the conference, NVIDIA will join industry leaders including Google and Microsoft in a “tutorial” session — taking place on Sunday, Aug. 24 — that discusses designing rack-scale architecture for data centers.
    In addition, NVIDIA experts will present at four sessions and one tutorial detailing how:

    - NVIDIA networking, including the NVIDIA ConnectX-8 SuperNIC, delivers AI reasoning at rack- and data-center scale. (Featuring Idan Burstein, principal architect of network adapters and systems-on-a-chip at NVIDIA)
    - Neural rendering advancements and massive leaps in inference — powered by the NVIDIA Blackwell architecture, including the NVIDIA GeForce RTX 5090 GPU — provide next-level graphics and simulation capabilities. (Featuring Marc Blackstein, senior director of architecture at NVIDIA)
    - Co-packaged optics (CPO) switches with integrated silicon photonics — built with light-speed fiber rather than copper wiring to send information quicker and using less power — enable efficient, high-performance, gigawatt-scale AI factories. The talk will also highlight NVIDIA Spectrum-XGS Ethernet, a new scale-across technology for unifying distributed data centers into AI super-factories. (Featuring Gilad Shainer, senior vice president of networking at NVIDIA)
    - The NVIDIA GB10 Superchip serves as the engine within the NVIDIA DGX Spark desktop supercomputer. (Featuring Andi Skende, senior distinguished engineer at NVIDIA)

    It’s all part of how NVIDIA’s latest technologies are accelerating inference to drive AI innovation everywhere, at every scale.
    NVIDIA Networking Fosters AI Innovation at Scale
    AI reasoning — when artificial intelligence systems can analyze and solve complex problems through multiple AI inference passes — requires rack-scale performance to deliver optimal user experiences efficiently.
    In data centers powering today’s AI workloads, networking acts as the central nervous system, connecting all the components — servers, storage devices and other hardware — into a single, cohesive, powerful computing unit.
    NVIDIA ConnectX-8 SuperNIC
    Burstein’s Hot Chips session will dive into how NVIDIA networking technologies — particularly NVIDIA ConnectX-8 SuperNICs — enable high-speed, low-latency, multi-GPU communication to deliver market-leading AI reasoning performance at scale.
    As part of the NVIDIA networking platform, NVIDIA NVLink, NVLink Switch and NVLink Fusion deliver scale-up connectivity — linking GPUs and compute elements within and across servers for ultra low-latency, high-bandwidth data exchange.
    NVIDIA Spectrum-X Ethernet provides the scale-out fabric to connect entire clusters, rapidly streaming massive datasets into AI models and orchestrating GPU-to-GPU communication across the data center. Spectrum-XGS Ethernet scale-across technology extends the extreme performance and scale of Spectrum-X Ethernet to interconnect multiple, distributed data centers to form AI super-factories capable of giga-scale intelligence.
    Connecting distributed AI data centers with NVIDIA Spectrum-XGS Ethernet.
    At the heart of Spectrum-X Ethernet, CPO switches push the limits of performance and efficiency for AI infrastructure at scale, and will be covered in detail by Shainer in his talk.
    NVIDIA GB200 NVL72 — an exascale computer in a single rack — features 36 NVIDIA GB200 Superchips, each containing two NVIDIA B200 GPUs and an NVIDIA Grace CPU, interconnected by the largest NVLink domain ever offered, with NVLink Switch providing 130 terabytes per second of low-latency GPU communications for AI and high-performance computing workloads.
    An NVIDIA rack-scale system.
    Built with the NVIDIA Blackwell architecture, GB200 NVL72 systems deliver massive leaps in reasoning inference performance.
    NVIDIA Blackwell and CUDA Bring AI to Millions of Developers
    The NVIDIA GeForce RTX 5090 GPU — also powered by Blackwell and to be covered in Blackstein’s talk — doubles performance in today’s games with NVIDIA DLSS 4 technology.
    NVIDIA GeForce RTX 5090 GPU
    It can also add neural rendering features for games to deliver up to 10x performance, 10x footprint amplification and a 10x reduction in design cycles, helping enhance realism in computer graphics and simulation. This offers smooth, responsive visual experiences at low energy consumption and improves the lifelike simulation of characters and effects.
    NVIDIA CUDA, the world’s most widely available computing infrastructure, lets users deploy and run AI models using NVIDIA Blackwell anywhere.
    Hundreds of millions of GPUs run CUDA across the globe, from NVIDIA GB200 NVL72 rack-scale systems to GeForce RTX– and NVIDIA RTX PRO-powered PCs and workstations, with NVIDIA DGX Spark powered by NVIDIA GB10 — discussed in Skende’s session — coming soon.
    From Algorithms to AI Supercomputers — Optimized for LLMs
    NVIDIA DGX Spark
    Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI right at their desktops, and accelerate workloads across industries.
    As part of the NVIDIA Blackwell platform, DGX Spark brings support for NVFP4, a low-precision numerical format to enable efficient agentic AI inference, particularly of large language models. Learn more about NVFP4 in this NVIDIA Technical Blog.
    Open-Source Collaborations Propel Inference Innovation
    NVIDIA develops and accelerates several open-source libraries and frameworks to optimize AI workloads for LLMs and distributed inference. These include NVIDIA TensorRT-LLM, NVIDIA Dynamo, TileIR, Cutlass, the NVIDIA Collective Communication Library and NIX — which are integrated into millions of workflows.
    Allowing developers to build with their framework of choice, NVIDIA has collaborated with top open framework providers to offer model optimizations for FlashInfer, PyTorch, SGLang, vLLM and others.
    Plus, NVIDIA NIM microservices are available for popular open models like OpenAI’s gpt-oss and Llama 4, making it easy for developers to operate managed application programming interfaces with the flexibility and security of self-hosting models on their preferred infrastructure.
    Learn more about the latest advancements in inference and accelerated computing by joining NVIDIA at Hot Chips.
     
  • Payments in the Americas

    The Americas, led by the United States, Canada, and Brazil, now account for more than $100 billion USD in annual video game revenue. This is one of the most valuable and competitive regions in global gaming, where success depends not just on great content, but on delivering seamless, localized checkout experiences. As players across North and South America demand more control, flexibility, and speed when making purchases, the payment methods developers offer can directly impact revenue, retention, and market expansion. Meeting gamers wherever they want to pay is no longer optional.

    United States: Faster and installment options gain steam

    In the U.S., traditional credit and debit card dominance is waning. Players are adopting faster, bank-linked payment options and installment-based methods that offer both security and flexibility. Pay by Bank has rapidly grown in popularity, especially among mobile and younger users who prioritize speed and security. As of early 2025, nearly 9 million Americans use Pay by Bank each month. With major retailers backing the method, transaction volume is expected to surpass $100 billion this year.

    In parallel, Affirm has emerged as a top Buy Now, Pay Later (BNPL) provider in the U.S. and beyond. Affirm has processed over $75 billion in transactions over the past five years and now serves nearly 17 million active users. By enabling purchases in manageable installments, BNPL increases accessibility and boosts average transaction size, especially for higher-value bundles, subscriptions, and digital add-ons.

    Canada: Flexibility drives adoption

    The Canadian market shares many consumer behaviors with its U.S. counterpart, but it has its own unique payment dynamics. Canadian gamers, particularly younger ones, are showing strong demand for installment-based options that give them more control over spending. Affirm’s footprint in Canada is expanding fast. As of February 2025, Affirm is integrated at checkout across more than 279,000 retailers. This early adoption wave shows how developers can gain an edge by offering localized, flexible payments tuned to consumer expectations.

    Brazil: Mobile-first, subscription-ready

    Brazil stands out as one of the most mobile-driven gaming economies globally. Recurring payments and digital wallets are the default here, not the exception. Mercado Pago is one of the most widely adopted payment platforms in Brazil. As of Q1 2025, it reported 64 million monthly active users, a 31% year-over-year increase. For game developers, this platform isn’t just another option; it’s critical infrastructure. Its recurring billing features make it especially well-suited to live service games, battle passes, and subscription models. By adding support for Mercado Pago, Xsolla helps developers enable long-term monetization and retention while removing friction for a massive mobile audience that expects fast, familiar, and reliable payment flows.

    One infrastructure, multiple markets

    These regional payment trends aren’t just interesting; they’re actionable. Developers who integrate local methods can significantly increase conversion rates, reduce purchase drop-offs, and build trust in highly competitive markets. Platforms like Xsolla Pay Station now provide integrated support for these local options: Pay by Bank, coming soon in the U.S., Affirm in both the U.S. and (soon) Canada, and recurring billing via Mercado Pago in Brazil. With a single implementation, developers can reach more players using payment tools they already trust.

    Why it matters

    The stakes are high.
    Without support for local payment methods, developers risk underperforming in key markets. A generic global checkout can’t match the expectations of users who are used to specific transaction styles, like fast confirmation via Pay by Bank or the flexibility of BNPL. And when users don’t see familiar, trusted options, they abandon their carts. That’s not just a missed opportunity, it’s a loss of lifetime value. Retention and loyalty begin at the first purchase. Providing secure, localized checkout experiences lays the foundation for long-term engagement.

    What developers should do now

    To grow in the Americas, it’s no longer enough to simply localize game content. Payment localization is equally vital. Developers expanding into the U.S., Canada, or Brazil should evaluate whether their current checkout options match how players in those countries actually prefer to pay. Supporting Pay by Bank and Affirm in the U.S. opens the door to millions of gamers who want speed and flexibility. Adding Affirm in Canada addresses a growing demand among younger users. Enabling recurring billing through Mercado Pago in Brazil unlocks subscription revenue in a market that’s mobile-first by default.

    As the competitive landscape shifts, aligning payment infrastructure with regional preferences isn’t just smart, it’s essential. Game developers who do it well will not only unlock more revenue but also build stronger, more loyal player communities in the process.

    Read the original article here
  • Blender Developers Meeting Notes: 18 August 2025

    By n8n on August 19, 2025 · Blender Development
    Notes for weekly communication of ongoing projects and modules.
    This is a selection of changes that happened over the last week. For a full overview including fixes, code only changes and more visit projects.blender.org.

    - Reset various runtime data for writing files (Hans Goudey)
    - Improve RNA performance tests flexibility (Bastien Montagne)
    - Move VSync from an environment variable to an argument (Campbell Barton)
    - Recognize ACES config un-tone-mapped view (Brecht Van Lommel)
    - Allow empty names in File Output node (Omar Emara)
    - Support string sockets (Omar Emara)
    - Allow menu sockets for pixel nodes (Omar Emara)
    - Remove Sun Beams node (Mohamed Hassan)
    - Don’t get/set PWD env var for working directory functions (Jesse Yurkovich)
    - Parallelize NURBS basis cache evaluation with O(n) complexity (Mattias Fredriksson)
    - Add cyclic curve offsets cache (Hans Goudey)
    - Do not sample direct light when ray segment is invalid (Weizhen Huang)
    - Always add world as object (Weizhen Huang)
    - Create one box for vdb mesh instead of many (Weizhen Huang)
    - Render volume by ray marching through octrees (Weizhen Huang)
    - Compute volume transmittance using telescoping (Weizhen Huang)
    - Shade volume with null scattering (Weizhen Huang)
    - Volume Scattering Probability Guiding (Weizhen Huang)
    - Use RGBE for denoised guiding buffers to reduce memory usage (Weizhen Huang)
    - Use analytic formula for homogeneous volume (Weizhen Huang)
    - Add and update volume test files (Weizhen Huang)
    - Store octree parent nodes in a stack (Weizhen Huang)
    - oneAPI: Disable L0 copy optimization for several dGPUs (Nikita Sirgienko)
    - Use deterministic linear interpolation for velocity (Weizhen Huang)
    - Use one-tap stochastic interpolation for volume (Weizhen Huang)
    - Add material name collision mode (Oxicid)
    - Shader: Add support for full template specialization (Clément Foucault)
    - Rewrite default_argument_mutation using parser (Clément Foucault)
    - Fix parser not being consistent (Clément Foucault)
    - Replace template macro implementation by copy paste (Clément Foucault)
    - Preprocess: Improve error reporting (Clément Foucault)
    - Remove Shader Draw Parameter workaround (Clément Foucault)
    - Add flag for shader debug info generation (Christoph Neuhauser)
    - Improve cyclical end cap rendering (Casey Bianco-Davis)
    - Export other curve types to SVG (Casey Bianco-Davis)
    - Edit Mode Pen Tool (Casey Bianco-Davis)
    - Support extracting Vulkan & OpenGL args even when disabled (Campbell Barton)
    - Add Apply Transforms option to obj exporter (Thomas Hope)
    - Fix recursive resync incorrectly clearing hierarchy info (Bastien Montagne)
    - Prevent matching collection items only by their index if a name and ID are provided (Bastien Montagne)
    - Avoid quadratic vertex valence complexity for corner normals (Илья)
    - Improve performance and compression, always compress (Aras Pranckevicius)
    - Allow Preferences editor to be opened in Maximized Area (Jonas Holzman)
    - Gray out or hide asset shelf toggle if not available (Julian Eisel)
    - Center-align header modal status text (Pablo Vazquez)
    - Prevent automatic mode switching in certain scenarios (Sean Kim)
    - Remove liboverride UI dead code, improve UI messages (Bastien Montagne)
    - Prevent ‘liboverride’ ‘decorator’ button to control keyframes (Bastien Montagne)
    - Widen Preferences Window (Pablo Vazquez)
    - Theme: Move curve handle properties in common (Nika Kutsniashvili)
    - Generalized Alert and Popup Block Error Indication (Harley Acheson)
    - Tree View: Operator to delete with X key (Pratik Borhade)
    - Use UI_alert for Vulkan fallback warning (Harley Acheson)
    - Remove unused theme properties (Nika Kutsniashvili)
    - Warning When Dragging Non-Blend File Onto Executable (Harley Acheson)
    - “Duplicate Strips” also duplicates referenced IDs (Falk David)
    - Add copy and paste operators to preview keymap (Ramon Klauck)
    - Improve Histogram scope for HDR content (Aras Pranckevicius)
    - Add scene assets through strip add menu (Falk David)
    - Clear Strip Keyframes from Preview (Ramon Klauck)
    - Enable Pie Menu on Drag for Preview Keyframe Insert (Ramon Klauck)
    - Add “Mirror” menu to preview strip menu (Ramon Klauck)
    - Disable descriptor buffers (Jeroen Bakker)
    - Swap to system memory for device local memory (Jeroen Bakker)
    - Destroy resources in submission thread (Jeroen Bakker)
    - Update VMA to 3.3.0 (Jeroen Bakker)
    - Remove MoltenVK (Jeroen Bakker)
    - Enable maintenance4 in VMA (Jeroen Bakker)
    - Add message type for remote downloader messages to message bus (Julian Eisel)
    #blender #developers #meeting #notes #august
    Blender Developers Meeting Notes: 18 August 2025
    www.blendernation.com
By n8n on August 19, 2025
Blender Development Notes for weekly communication of ongoing projects and modules. This is a selection of changes that happened over the last week. For a full overview including fixes, code-only changes and more, visit projects.blender.org.
- Reset various runtime data for writing files (commit) - (Hans Goudey)
- Improve RNA performance tests flexibility (commit) - (Bastien Montagne)
- Move VSync from an environment variable to an argument (commit) - (Campbell Barton)
- Recognize ACES config un-tone-mapped view (commit) - (Brecht Van Lommel)
- Allow empty names in File Output node (commit) - (Omar Emara)
- Support strings sockets (commit) - (Omar Emara)
- Allow menu sockets for pixel nodes (commit) - (Omar Emara)
- Removing Sun Beams node (commit) - (Mohamed Hassan)
- Don’t get/set PWD env var for working directory functions (commit) - (Jesse Yurkovich)
- Parallelize NURBS basis cache evaluation with O(n) complexity (commit) - (Mattias Fredriksson)
- Add cyclic curve offsets cache (commit) - (Hans Goudey)
- Do not sample direct light when ray segment is invalid (commit) - (Weizhen Huang)
- Always add world as object (commit) - (Weizhen Huang)
- Create one box for vdb mesh instead of many (commit) - (Weizhen Huang)
- Render volume by ray marching through octrees (commit) - (Weizhen Huang)
- Compute volume transmittance using telescoping (commit) - (Weizhen Huang)
- Shade volume with null scattering (commit) - (Weizhen Huang)
- Volume Scattering Probability Guiding (commit) - (Weizhen Huang)
- Use RGBE for denoised guiding buffers to reduce memory usage (commit) - (Weizhen Huang)
- Use analytic formula for homogeneous volume (commit) - (Weizhen Huang)
- Add and update volume test files (commit) - (Weizhen Huang)
- Store octree parent nodes in a stack (commit) - (Weizhen Huang)
- oneAPI: Disable L0 copy optimization for several dGPUs (commit) - (Nikita Sirgienko)
- Use deterministic linear interpolation for velocity (commit) - (Weizhen Huang)
- Use one-tap stochastic interpolation for volume (commit) - (Weizhen Huang)
- Add material name collision mode (commit) - (Oxicid)
- Shader: Add support for full template specialization (commit) - (Clément Foucault)
- Rewrite default_argument_mutation using parser (commit) - (Clément Foucault)
- Fix parser not being consistent (commit) - (Clément Foucault)
- Replace template macro implementation by copy paste (commit) - (Clément Foucault)
- Preprocess: Improve error reporting (commit) - (Clément Foucault)
- Remove Shader Draw Parameter workaround (commit) - (Clément Foucault)
- Add flag for shader debug info generation (commit) - (Christoph Neuhauser)
- Improve cyclical end cap rendering (commit) - (Casey Bianco-Davis)
- Export other curve types to SVG (commit) - (Casey Bianco-Davis)
- Edit Mode Pen Tool (commit) - (Casey Bianco-Davis)
- Support extracting Vulkan & OpenGL args even when disabled (commit) - (Campbell Barton)
- Add Apply Transforms option to obj exporter (commit) - (Thomas Hope)
- Fix recursive resync incorrectly clearing hierarchy info (commit) - (Bastien Montagne)
- Prevent matching collection items only by their index if a name and ID are provided (commit) - (Bastien Montagne)
- Avoid quadratic vertex valence complexity for corner normals (commit) - (_илья __)
- Improve performance and compression, always compress (commit) - (Aras Pranckevicius)
- Allow Preferences editor to be opened in Maximized Area (commit) - (Jonas Holzman)
- Gray out or hide asset shelf toggle if not available (commit) - (Julian Eisel)
- Center-align header modal status text (commit) - (Pablo Vazquez)
- Prevent automatic mode switching in certain scenarios (commit) - (Sean Kim)
- Remove liboverride UI dead code, improve UI messages (commit) - (Bastien Montagne)
- Prevent ‘liboverride’ ‘decorator’ button to control keyframes (commit) - (Bastien Montagne)
- Widen Preferences Window (commit) - (Pablo Vazquez)
- Theme: Move curve handle properties in common (commit) - (Nika Kutsniashvili)
- Generalized Alert and Popup Block Error Indication (commit) - (Harley Acheson)
- Tree View: Operator to delete with X key (commit) - (Pratik Borhade)
- Use UI_alert For Vulcan Fallback Warning (commit) - (Harley Acheson)
- Remove unused theme properties (commit) - (Nika Kutsniashvili)
- Warning When Dragging Non-Blend File Onto Executable (commit) - (Harley Acheson)
- “Duplicate Strips” also duplicates referenced IDs (commit) - (Falk David)
- Add copy and paste operators to preview keymap (commit) - (Ramon Klauck)
- Improve Histogram scope for HDR content (commit) - (Aras Pranckevicius)
- Add scene assets through strip add menu (commit) - (Falk David)
- Clear Strip Keyframes from Preview (commit) - (Ramon Klauck)
- Enable Pie Menu on Drag for Preview Keyframe Insert (commit) - (Ramon Klauck)
- Add “Mirror” menu to preview strip menu (commit) - (Ramon Klauck)
- Disable descriptor buffers (commit) - (Jeroen Bakker)
- Swap to system memory for device local memory (commit) - (Jeroen Bakker)
- Destroy resources in submission thread (commit) - (Jeroen Bakker)
- Update VMA to 3.3.0 (commit) - (Jeroen Bakker)
- Remove MoltenVK (commit) - (Jeroen Bakker)
- Enable maintenance4 in VMA (commit) - (Jeroen Bakker)
- Add message type for remote downloader messages to message bus (commit) - (Julian Eisel)
#blender #developers #meeting #notes #august
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and without a rethink of how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and you gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” (which combines data from all nodes and redistributes the result) and “all-to-all” (where each node exchanges data with every other node).
These processes are sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity) — and shortfalls in either cause stalls in training.
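To make that merge step concrete, here is a minimal, pure-Python sketch of the ring all-reduce algorithm widely used for it. Everything here is illustrative: the nodes are in-memory lists, and `ring_all_reduce` is a made-up name, not the API of NCCL, MPI, or any vendor library (which run the same idea over NVLink and NICs).

```python
import copy

def ring_all_reduce(vectors):
    """Sum equal-length gradient vectors across n simulated nodes using the
    ring algorithm: a reduce-scatter phase, then an all-gather phase."""
    n = len(vectors)
    dim = len(vectors[0])
    assert dim % n == 0, "for simplicity, length must divide evenly by n"
    c = dim // n                            # chunk size
    buf = [list(v) for v in vectors]        # each node's working buffer

    # Phase 1: reduce-scatter. In each of n-1 synchronous rounds, node i
    # sends chunk (i - t) mod n to its right neighbor, which accumulates it.
    # Afterwards, node i holds the fully summed chunk (i + 1) mod n.
    for t in range(n - 1):
        snap = copy.deepcopy(buf)           # read values from the round start
        for i in range(n):
            src = (i - t) % n               # chunk index node i forwards
            j = (i + 1) % n                 # right neighbor
            lo = src * c
            for k in range(lo, lo + c):
                buf[j][k] += snap[i][k]

    # Phase 2: all-gather. Completed chunks circulate around the ring,
    # overwriting stale partial sums, until every node has every chunk.
    for t in range(n - 1):
        snap = copy.deepcopy(buf)
        for i in range(n):
            src = (i + 1 - t) % n           # the chunk node i has complete
            j = (i + 1) % n
            lo = src * c
            for k in range(lo, lo + c):
                buf[j][k] = snap[i][k]

    return buf

# Tiny check: 4 nodes, 8-element "gradients"; all nodes end with the same sum.
grads = [[float(node * 8 + k) for k in range(8)] for node in range(4)]
expected = [sum(col) for col in zip(*grads)]
assert all(row == expected for row in ring_all_reduce(grads))
```

Note the cost that makes the network matter: each node ships roughly 2(n-1)/n of the full buffer across the wire every time this runs, once or more per training step.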
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Tolerating jitter and inconsistent delivery was once acceptable. Now, it’s a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.
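A back-of-the-envelope comparison suggests where that doubling comes from. This is illustrative arithmetic under simplified assumptions (no headers, no pipelining effects), not a measurement of SHARP itself:

```python
# Per-node bytes sent for an all-reduce of S bytes across n nodes.

def ring_send_volume(S, n):
    # Host-based ring all-reduce: reduce-scatter plus all-gather,
    # each phase moving (n - 1) / n * S bytes per node.
    return 2 * (n - 1) / n * S

def in_network_send_volume(S):
    # In-network reduction: each node sends its data up the switch tree
    # once and receives the reduced result once.
    return S

S = float(1 << 30)  # a 1 GiB gradient buffer
for n in (8, 1024, 100_000):
    ratio = ring_send_volume(S, n) / in_network_send_volume(S)
    print(f"n={n:>6}: ring sends {ratio:.3f}x the bytes of in-network reduction")
# As n grows, the ratio approaches 2 -- the "doubled bandwidth" for reductions.
```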
For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
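As a quick sanity check on those figures (the per-GPU number below is an inference from the quoted aggregate, not a spec stated in the article):

```python
# Dividing the quoted aggregate NVLink bandwidth by the GPU count gives the
# implied per-GPU figure. Assumes the "130 TB/s aggregate" counts each GPU's
# NVLink bandwidth once; that accounting convention is an assumption here.
gpus = 72
aggregate_tb_s = 130.0                  # TB/s, from the article
per_gpu_tb_s = aggregate_tb_s / gpus
print(f"~{per_gpu_tb_s:.2f} TB/s of NVLink bandwidth per GPU")  # ~1.81 TB/s
```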
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.
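The quoted bandwidth range follows directly from the port arithmetic; a short check:

```python
# 800 Gb/s per port; both sides of the claim are in bits, so no unit juggling.
# 128 and 512 ports bracket the range quoted above.
for ports in (128, 512):
    total_tb_s = ports * 800 / 1000     # Gb/s -> Tb/s
    print(f"{ports} ports x 800 Gb/s = {total_tb_s:.1f} Tb/s")
# 102.4 Tb/s and 409.6 Tb/s, matching the "100 Tb/s to 400 Tb/s" range.
```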

    Delivering on the Promise of Open Standards

Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows intense customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
    #gearing #gigawatt #data #center #age
    blogs.nvidia.com
• Have you ever wondered how a single product can adapt itself to your needs over time?

The new article looks at 11 intelligent products designed with the flexibility to evolve with you. The idea isn't that every product has to be multifunctional, but that it stays open to change and can adapt to new circumstances.

Personally, I've tried a few products that adapt to the demands of my daily life, such as smart kitchen tools, and I noticed a real difference in comfort and efficiency!

Imagine how these new ideas could change our everyday experience and open up new opportunities.

    https://architizer.com/blog/practice/materials/flexibility-intelligent-products-evolve-over-time/

    #منتجات_ذكية #تطور_المنتجات #Flexibility #Innovation #التصميم
    architizer.com
    Flexible design doesn't always mean multifunctionality. Sometimes just being open to change is enough to make a product brilliant. The post A Never-Ending Story: 11 Intelligent Products Designed to Evolve Over Time appeared first on Journal.
• Want to grow your e-commerce sales? You have an idea, but need new methods?

In the new video, we talk about "Flexible Ads Scaling" and how to build smart strategies to grow your sales with Facebook Ads. We'll look at how to use this technique the right way, and share valuable tips for structuring your campaigns at minimal cost. Have you seen how small changes can turn into big results?

From my own experience, once I started applying these strategies, I noticed a big difference in engagement and sales.

If you'd also like to learn how to scale your business and make your campaigns more effective, watch the video to the end!

    https://www.youtube.com/watch?v=qOEEVDQT3vY

    #تجارة_إلكترونية #Flexibility #FacebookAds #Scaling #استراتيجيات
  • Behind the Effects of RENEGADES

    Behind the Effects of RENEGADES

By ALLO on August 12, 2025

Behind the Scenes

ALLO contributed VFX and Blender-designed 3D printed props to a short superhero film.

I had the pleasure of working in collaboration with Station 1 Studios and MoNo Films on a superhero short film, RENEGADES, that released last Friday. I co-produced, built props, and more importantly served as the Effects Supervisor. All the VFX were done in Blender, Resolve, and Final Cut Pro – from obvious effects to small unseen edits. Additionally, I worked with as many practical effects and props as possible on this project, which was quite a fun and challenging time.

I also used Blender to create some of the props for this film, designing models such as Concrete’s helmet to be 3D printed and post-processed.

I had the pleasure of making the credits sequence for this short, also using Blender. I loved the flexibility of creating the sequence fully in real time in EEVEE with real-time compositing, while having the music play back in real time for easy editing. The power of the 3D tools in Blender is truly astounding, enabling me to create the sequence entirely in the viewport in a fraction of the time that I would have needed in After Effects or Resolve.

Additionally, I used Blender for some of the marketing, creating character posters of our main actors.

I hope you found this reflection on how Blender was utilized for this project interesting, and that you also enjoy the film!
    #behind #effects #renegades
    www.blendernation.com
  • EA SPORTS™ Madden NFL 26 Launches Worldwide Today—Powered by Real NFL Data, and Unleashing the Most Explosive and Immersive NFL Experience Yet

    Experience all-new QB DNA and Coach DNA, Signature Quarterback Play, Adaptive Coaching, explosive movement, and true NFL presentation as Madden NFL 26 launches for the first time on Nintendo Switch 2 and Amazon Luna.
REDWOOD CITY, Calif.--(BUSINESS WIRE)--
Just in time for the NFL season, Electronic Arts Inc. (NASDAQ: EA) and EA SPORTS™ have released EA SPORTS™ Madden NFL 26 — the most explosive, authentic, and immersive football experience in franchise history. Built from Sundays and powered by AI-driven systems trained on thousands of real NFL plays, the game debuts all-new QB and Coach DNA for player-specific traits, signature playstyles, and adaptive strategy. Players will experience dynamic Football Weather, enhanced physics-based gameplay, and deeper customization across Franchise and Superstar modes — all on PlayStation®5, Xbox Series X|S, Nintendo Switch™ 2, Amazon Luna, and PC.

Madden NFL 26 Available Now

“Madden NFL 26 is a true leap forward in authenticity and control,” said Daryl Holt, SVP and Group GM, EA SPORTS. “With smarter quarterbacks, adaptive coaching AI, and our breakthrough QB DNA and Coach DNA systems, every snap feels true to the NFL fans love. Explosive movement, dynamic weather, and authentic stadium atmospheres capture the passion and drama of the game. And with Madden NFL 26 now on Nintendo Switch 2, we’re bringing that unmatched realism and energy to more fans than ever before.”

Through a new partnership with Nintendo announced in the spring, EA SPORTS brings the authentic Madden NFL experience to Nintendo Switch 2 for the first time. By launching on Nintendo’s console, Madden NFL 26 expands its reach to a broader, more diverse audience — offering explosive gameplay and immersive NFL action anytime, anywhere.

Fans everywhere can now hop in and experience Madden NFL 26’s game-changing AI innovations with QB DNA and Coach DNA, delivering more immersive NFL atmospheres on game day, and expanding fan-favorite modes with new depth and strategy across every snap in its feature set:

- QB DNA: Star NFL quarterbacks move, look, and feel more like the superstars they are. Leveraging AI-powered machine learning, QB DNA introduces unique pocket behaviors, signature throwing motions, and distinct scrambling styles that mirror real-life NFL signal-callers, delivering the most lifelike quarterback gameplay in franchise history.
- Coach DNA: Coaches employ real philosophies and adaptive strategies based on nearly a decade of NFL data. Dynamic coach suggestions and multi-player counters provide smart play recommendations and strategic depth, making every matchup feel authentic and challenging.
- Powerful NFL Movement & Physics Expansion: Experience the league’s unmatched athleticism with updated player movement, physics-based interactions, and new mechanics like Custom Defensive Zones, Adaptive Coverage, and enhanced D-line stunts and twists.
- Football Weather: Extreme weather conditions such as snow, fog, and rain impact visibility, movement, stamina, and ball security, adding a new layer of realism and strategy to every game.
- True NFL Gameday Experience: From the Skol chant in Minnesota to Baltimore’s pre-game light show, authentic team traditions, dynamic halftime shows, and custom broadcast packages immerse players in the sights and sounds of the NFL.
- Franchise mode: Introduces four new coach archetypes with evolving abilities, a deeper Weekly Strategy system featuring custom Play Sheets, and enhanced player health management with real-time status updates. Stay connected to your league through the new Approval Rating system, plus weekly recaps from Scott Hanson and commentary from Rich Eisen.
- Superstar Mode: Import your Road to Glory player and shape their NFL career with evolving storylines, draft-impacting performances, and weekly goals. Manage relationships, development, and durability through the new Sphere of Influence and Wear & Tear systems as you rise from college star to NFL legend.
- Madden Ultimate Team™: Build your dream roster with NFL legends and stars, tackle new dynamic MUT Events, and rise through 50-player Leaderboard Campaigns. NFL Team Pass delivers team-specific rewards and ever-evolving ways to play.

The Madden NFL 26 Soundtrack features 77 songs across menus and stadiums, offering expanded control, variety, and immersion. New this season, players can customize their menus playlist with both new releases and iconic stadium anthems. The soundtrack includes music from Twenty One Pilots, Lizzo, Lil Nas X, BIA, and Luke Combs, plus over 30 stadium classics from Green Day, Rage Against The Machine, Foo Fighters, and more — all curated to amplify the energy and authenticity of the NFL experience.

Additionally, Madden NFL 26 is now available on Amazon Luna, bringing the authentic football experience to even more players through cloud gaming. Luna lets fans play instantly on devices they already own — including Fire TV, tablets, mobile phones, smart TVs, and more — with no downloads, installs, or updates required. Wherever Luna is available, players can enjoy all that Madden NFL 26 has to offer, including modes like Franchise and Superstar.

On mobile, Madden NFL 26 Mobile delivers the ultimate football experience on your phone, packed with more control, strategy, and customization than ever before. This season brings a fresh slate of features, including Dual Player Cards that cover multiple positions and unlock unique chemistry boosts. Fine-tune your roster with over 20 upgradeable Player Traits, and take your lineup to the next level with Player EVO — a new system that lets you absorb higher OVR players to power up your favorites. Whether you're a returning veteran or new to the game, Madden NFL 26 Mobile offers deeper gameplay, more flexibility, and a true NFL experience — right at your fingertips. Download Madden NFL 26 Mobile for free from the App Store® or Google Play™ today.

EA Play members can live every stadium-shaking moment with the EA Play* 10-hour game trial, available now. Members also score monthly Ultimate Team™ Packs, as well as receive 10% off EA digital purchases - including game downloads, Madden Points and DLC. For more information on EA Play, please visit the EA Play website. Stay tuned for more Madden NFL 26 details on the official Madden NFL website and social media.

*Conditions, limitations and exclusions apply. See EA Play Terms for details.

For Madden NFL 26 assets, visit: EAPressPortal.com.

Madden NFL 26 is developed in Orlando, Florida and Madrid, Spain by EA SPORTS and will be available worldwide August 14 for Xbox Series X|S, PlayStation 5, Nintendo Switch 2, Amazon Luna and PC via the EA app for Windows, Steam, and the Epic Games Store.

About Electronic Arts

Electronic Arts is a global leader in digital interactive entertainment. The Company develops and delivers games, content and online services for Internet-connected consoles, mobile devices and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™ and EA SPORTS F1®. More information about EA is available at www.ea.com/news.

EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission.

    Erin Exum
    Director, Integrated Comms
    #sports #madden #nfl #launches #worldwide
Experience all-new QB DNA and Coach DNA, Signature Quarterback Play, Adaptive Coaching, explosive movement, and true NFL presentation as Madden NFL 26 launches for the first time on Nintendo Switch 2 and Amazon Luna.

REDWOOD CITY, Calif.--(BUSINESS WIRE)-- Just in time for the NFL season, Electronic Arts Inc. (NASDAQ: EA) and EA SPORTS™ have released EA SPORTS™ Madden NFL 26, the most explosive, authentic, and immersive football experience in franchise history. Built from Sundays and powered by AI-driven systems trained on thousands of real NFL plays, the game debuts all-new QB DNA and Coach DNA for player-specific traits, signature playstyles, and adaptive strategy. Players will experience dynamic Football Weather, enhanced physics-based gameplay, and deeper customization across Franchise and Superstar modes, all on PlayStation®5, Xbox Series X|S, Nintendo Switch™ 2, Amazon Luna, and PC.

Madden NFL 26 Available Now

“Madden NFL 26 is a true leap forward in authenticity and control,” said Daryl Holt, SVP and Group GM, EA SPORTS. “With smarter quarterbacks, adaptive coaching AI, and our breakthrough QB DNA and Coach DNA systems, every snap feels true to the NFL fans love. Explosive movement, dynamic weather, and authentic stadium atmospheres capture the passion and drama of the game. And with Madden NFL 26 now on Nintendo Switch 2, we’re bringing that unmatched realism and energy to more fans than ever before.”

Through a new partnership with Nintendo announced in the spring, EA SPORTS brings the authentic Madden NFL experience to Nintendo Switch 2 for the first time. By launching on Nintendo’s console, Madden NFL 26 expands its reach to a broader, more diverse audience, offering explosive gameplay and immersive NFL action anytime, anywhere.

Fans everywhere can now jump in and experience Madden NFL 26’s game-changing AI innovations, more immersive NFL atmospheres on game day, and fan-favorite modes expanded with new depth and strategy across every snap:

QB DNA: Star NFL quarterbacks move, look, and feel more like the superstars they are. Leveraging AI-powered machine learning, QB DNA introduces unique pocket behaviors, signature throwing motions, and distinct scrambling styles that mirror real-life NFL signal-callers, delivering the most lifelike quarterback gameplay in franchise history.

Coach DNA: Coaches employ real philosophies and adaptive strategies based on nearly a decade of NFL data. Dynamic coach suggestions and multi-player counters provide smart play recommendations and strategic depth, making every matchup feel authentic and challenging.

Powerful NFL Movement & Physics Expansion: Experience the league’s unmatched athleticism with updated player movement, physics-based interactions, and new mechanics like Custom Defensive Zones, Adaptive Coverage, and enhanced D-line stunts and twists.

Football Weather: Extreme weather conditions such as snow, fog, and rain impact visibility, movement, stamina, and ball security, adding a new layer of realism and strategy to every game.

True NFL Gameday Experience: From the Skol chant in Minnesota to Baltimore’s pre-game light show, authentic team traditions, dynamic halftime shows, and custom broadcast packages immerse players in the sights and sounds of the NFL.

Franchise Mode: Introduces four new coach archetypes with evolving abilities, a deeper Weekly Strategy system featuring custom Play Sheets, and enhanced player health management with real-time status updates. Stay connected to your league through the new Approval Rating system, plus weekly recaps from Scott Hanson and commentary from Rich Eisen.

Superstar Mode: Import your Road to Glory player and shape their NFL career with evolving storylines, draft-impacting performances, and weekly goals. Manage relationships, development, and durability through the new Sphere of Influence and Wear & Tear systems as you rise from college star to NFL legend.

Madden Ultimate Team™: Build your dream roster with NFL legends and stars, tackle new dynamic MUT Events, and rise through 50-player Leaderboard Campaigns. NFL Team Pass delivers team-specific rewards and ever-evolving ways to play.

The Madden NFL 26 Soundtrack features 77 songs across menus and stadiums, offering expanded control, variety, and immersion. New this season, players can customize their menu playlist with both new releases and iconic stadium anthems. The soundtrack includes music from Twenty One Pilots, Lizzo, Lil Nas X, BIA, and Luke Combs, plus over 30 stadium classics from Green Day, Rage Against The Machine, Foo Fighters, and more, all curated to amplify the energy and authenticity of the NFL experience.

Additionally, Madden NFL 26 is now available on Amazon Luna, bringing the authentic football experience to even more players through cloud gaming. Luna lets fans play instantly on devices they already own, including Fire TV, tablets, mobile phones, smart TVs, and more, with no downloads, installs, or updates required. Wherever Luna is available, players can enjoy all that Madden NFL 26 has to offer, including modes like Franchise and Superstar.

On mobile, Madden NFL 26 Mobile delivers the ultimate football experience on your phone, packed with more control, strategy, and customization than ever before. This season brings a fresh slate of features, including Dual Player Cards that cover multiple positions and unlock unique chemistry boosts. Fine-tune your roster with over 20 upgradeable Player Traits, and take your lineup to the next level with Player EVO, a new system that lets you absorb higher-OVR players to power up your favorites. Whether you're a returning veteran or new to the game, Madden NFL 26 Mobile offers deeper gameplay, more flexibility, and a true NFL experience, right at your fingertips. Download Madden NFL 26 Mobile for free from the App Store® or Google Play™ today.

EA Play members can live every stadium-shaking moment with the EA Play* 10-hour game trial, available now. Members also score monthly Ultimate Team™ Packs and receive 10% off EA digital purchases, including game downloads, Madden Points, and DLC. For more information on EA Play, please visit https://www.ea.com/ea-play. Stay tuned for more Madden NFL 26 details on the official Madden NFL website and social media (Instagram, X, TikTok, and YouTube).

*Conditions, limitations and exclusions apply. See EA Play Terms for details.

For Madden NFL 26 assets, visit: EAPressPortal.com.

Madden NFL 26 is developed in Orlando, Florida, and Madrid, Spain, by EA SPORTS and launched worldwide on August 14 for Xbox Series X|S, PlayStation 5, Nintendo Switch 2, Amazon Luna, and PC via the EA app for Windows, Steam, and the Epic Games Store.

About Electronic Arts
Electronic Arts (NASDAQ: EA) is a global leader in digital interactive entertainment. The Company develops and delivers games, content and online services for Internet-connected consoles, mobile devices and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately $7.5 billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™ and EA SPORTS F1®. More information about EA is available at www.ea.com/news.

EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission.

Erin Exum
Director, Integrated Comms
[email protected]

Source: Electronic Arts Inc.