Fur Grooming Techniques For Realistic Stitch In Blender

Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers and do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow.
After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. My goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: either model everything completely from scratch, starting with a sphere, or reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into a few main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both a body and a facial rig. So I started sculpting with his mouth open, to later close it and have more flexibility when it comes to rigging and deformation.

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on the hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless.
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. Fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean, optimized mesh that was perfect for UV unwrapping.

Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar at the top, while the left has a scar at the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail.
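For anyone who would rather script this kind of ear swap than do it by hand, here is a rough bpy sketch of the idea under stated assumptions: it presumes two imported bodies named "Stitch_RightEarVersion" and "Stitch_LeftEarVersion" and a "LeftEar" vertex group on the donor mesh. None of these names come from the project files; the actual swap was done interactively in the viewport.

```python
# Hypothetical ear-swap sketch: split the left ear off one symmetrical body
# and join it onto the other. Assumes a clean scene and a "LeftEar" vertex group.
import bpy

target = bpy.data.objects["Stitch_RightEarVersion"]   # keeps its correct right ear
donor  = bpy.data.objects["Stitch_LeftEarVersion"]    # supplies the correct left ear

# Select the donor's left-ear vertices via the vertex group and split them off.
bpy.context.view_layer.objects.active = donor
donor.vertex_groups.active_index = donor.vertex_groups["LeftEar"].index
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.object.vertex_group_select()
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')

# The separated ear becomes a new object; join it onto the target body.
ear = [o for o in bpy.context.selected_objects if o not in (donor, target)][0]
bpy.ops.object.select_all(action='DESELECT')
ear.select_set(True)
target.select_set(True)
bpy.context.view_layer.objects.active = target
bpy.ops.object.join()
```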
Thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands. When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose. For the claws, I used overlapping UVs to preserve texel density for the other parts.

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, zones that demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing the opacity to about 35%. This adds volume and greatly improves the overall perception of the model.
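The AO overlay described above lives in the Substance 3D Painter layer stack, but if you wanted to preview or approximate the same effect directly in a Blender material, a minimal node sketch could multiply the baked AO over the base color at roughly the same 35% strength. The material name and texture paths below are placeholders, not files from the project.

```python
# Hedged sketch: multiply a baked AO map over the base color at ~35%,
# mirroring the Substance Painter "Multiply at 35% opacity" layer.
import bpy

mat = bpy.data.materials.new("Body_AO_Preview")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

base = nodes.new("ShaderNodeTexImage")
base.image = bpy.data.images.load("//textures/body_basecolor.png")   # placeholder path

ao = nodes.new("ShaderNodeTexImage")
ao.image = bpy.data.images.load("//textures/body_ao.png")            # placeholder path
ao.image.colorspace_settings.name = 'Non-Color'                      # AO is data, not color

mix = nodes.new("ShaderNodeMixRGB")
mix.blend_type = 'MULTIPLY'
mix.inputs["Fac"].default_value = 0.35     # roughly the 35% opacity used in Painter

links.new(base.outputs["Color"], mix.inputs["Color1"])
links.new(ao.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], bsdf.inputs["Base Color"])
```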
That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned, micro-level control of the fur and helped achieve a highly realistic appearance in renders.
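As a rough illustration of the setup described in this section, here is a minimal bpy sketch (legacy Hair Particle System) of one of the per-section systems: two-segment guides for blocking, density limited by a Weight Paint vertex group, and a texture slot driving the Clump and Roughness influence. All object names, counts, and paths are placeholders rather than values from the actual file.

```python
# Hypothetical per-section fur system sketch (Blender legacy particle hair).
import bpy

obj = bpy.data.objects["body"]                      # placeholder object name

# One particle system per body section; the project used 25 of these in total.
mod = obj.modifiers.new("Fur_Chest", type='PARTICLE_SYSTEM')
psys = mod.particle_system
ps = psys.settings

ps.type = 'HAIR'
ps.hair_step = 2                  # two segments for blocking; raised to 3-5 for detailing
ps.count = 2000                   # guide count, tuned per section (placeholder)
ps.hair_length = 0.05             # placeholder length
ps.child_type = 'INTERPOLATED'    # children interpolated between groomed guides
ps.rendered_child_count = 100     # placeholder child count

# Weight Paint control: limit this system to its body section.
psys.vertex_group_density = "FurDensity_Chest"      # placeholder vertex group name

# Texture-driven Clump and Roughness, as described above:
# bright values = stronger clumping/roughness, dark values = softer fur.
tex = bpy.data.textures.new("FurClumpMask", type='IMAGE')
tex.image = bpy.data.images.load("//textures/fur_clump_mask.png")   # placeholder path

slot = ps.texture_slots.add()
slot.texture = tex
slot.texture_coords = 'UV'
slot.use_map_time = False         # drop the default influence
slot.use_map_clump = True
slot.use_map_rough = True
```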
The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase used only two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.
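As an illustration of that kind of shader, here is a small bpy sketch of one possible node layout: Hair Info > Intercept drives a root-to-tip color ramp (darker roots, brighter tips), and Hair Info > Random mixes in subtle per-strand variation before feeding the Principled BSDF. The colors and mix amounts are illustrative guesses, not the project's actual values.

```python
# Hedged hair-shader sketch: root-to-tip gradient plus per-strand color variation.
import bpy

mat = bpy.data.materials.new("StitchFur")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out = nodes.new("ShaderNodeOutputMaterial")
bsdf = nodes.new("ShaderNodeBsdfPrincipled")
hair_info = nodes.new("ShaderNodeHairInfo")

# Root-to-tip gradient: darker roots, brighter tips.
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (0.02, 0.05, 0.20, 1.0)   # root (illustrative)
ramp.color_ramp.elements[1].color = (0.10, 0.22, 0.60, 1.0)   # tip (illustrative)
links.new(hair_info.outputs["Intercept"], ramp.inputs["Fac"])

# Per-strand variation: nudge the gradient toward a slightly different hue per strand.
mix = nodes.new("ShaderNodeMixRGB")
mix.blend_type = 'MIX'
mix.inputs["Color2"].default_value = (0.08, 0.15, 0.45, 1.0)  # illustrative variation hue

variation = nodes.new("ShaderNodeMath")
variation.operation = 'MULTIPLY'
variation.inputs[1].default_value = 0.15                      # keep the variation subtle
links.new(hair_info.outputs["Random"], variation.inputs[0])
links.new(variation.outputs["Value"], mix.inputs["Fac"])
links.new(ramp.outputs["Color"], mix.inputs["Color1"])

links.new(mix.outputs["Color"], bsdf.inputs["Base Color"])
links.new(bsdf.outputs["BSDF"], out.inputs["Surface"])
```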
Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result; this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many of its technical aspects. For the ears, I set up a relatively simple system of several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses. Stitch is so expressive and full of personality that I wanted to try hundreds of them, but I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at a very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.
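To make the setup concrete, here is a minimal bpy sketch of a three-point rig plus a low-intensity HDRI environment. Only the roughly 0.3 HDRI strength comes from the text; the light energies, sizes, positions, and the HDRI path are placeholder values, not the scene's actual settings.

```python
# Hedged three-point lighting sketch: key, fill, rim area lights plus a subtle HDRI.
import bpy
from math import radians

def add_area_light(name, location, rotation_deg, energy, size=1.0):
    """Create an area light, orient it roughly toward the character, return the object."""
    data = bpy.data.lights.new(name, type='AREA')
    data.energy = energy
    data.size = size
    obj = bpy.data.objects.new(name, data)
    obj.location = location
    obj.rotation_euler = [radians(a) for a in rotation_deg]
    bpy.context.scene.collection.objects.link(obj)
    return obj

key  = add_area_light("Key",  ( 2.0, -2.0, 2.5), ( 55, 0,  45), energy=500, size=2.0)
fill = add_area_light("Fill", (-2.5, -1.5, 1.5), ( 70, 0, -60), energy=150, size=3.0)
rim  = add_area_light("Rim",  ( 0.0,  3.0, 2.0), (-60, 0, 180), energy=300, size=1.0)

# HDRI environment kept very subtle, around 0.3 strength as described.
world = bpy.context.scene.world          # assumes the default scene World exists
world.use_nodes = True
nt = world.node_tree
env = nt.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/studio.hdr")          # placeholder path
bg = nt.nodes["Background"]
nt.links.new(env.outputs["Color"], bg.inputs["Color"])
bg.inputs["Strength"].default_value = 0.3
```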
Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce fully animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: a slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself, to reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. 
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. 
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.When it came to UV mapping, I divided Stitch into two UDIM tiles:The first UDIM includes the head with ears, torso, arms, and legs.The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (For the claws, I used overlapping UVs to preserve texel density for the other parts)Since the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters.This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. 
As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:
The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape.
The nose and ears, which demanded separate focus.

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:
Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model (a node sketch of the same trick in Blender follows at the end of this section).

That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.
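Since the final renders happen in Blender, the same AO-multiply layer can also be reproduced live in the shader instead of being baked into the texture in Substance 3D Painter. Below is a minimal, hypothetical node-setup sketch; the material, image, and node names are placeholders, and the 0.35 factor simply mirrors the roughly 35% opacity mentioned above.

```python
import bpy

# Hypothetical material and image names
mat = bpy.data.materials["StitchBody"]
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

base_tex = nodes.new("ShaderNodeTexImage")
base_tex.image = bpy.data.images.get("Stitch_BaseColor_1001.png")
ao_tex = nodes.new("ShaderNodeTexImage")
ao_tex.image = bpy.data.images.get("Stitch_AO_1001.png")

# Multiply the AO over the base color at roughly 35% strength
mix = nodes.new("ShaderNodeMixRGB")
mix.blend_type = 'MULTIPLY'
mix.inputs["Fac"].default_value = 0.35

principled = nodes["Principled BSDF"]
links.new(base_tex.outputs["Color"], mix.inputs["Color1"])
links.new(ao_tex.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], principled.inputs["Base Color"])
```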
Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands. At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references. The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, since textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.
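To tie the steps above together, here is a minimal Blender Python sketch of how one of these per-section hair systems could be configured: a HAIR particle system limited to a painted vertex group, with guide segments, interpolated children, and a mask texture driving the Clump and Rough influence. All names and values are hypothetical placeholders, not the exact settings used on Stitch.

```python
import bpy

obj = bpy.data.objects["Stitch"]  # hypothetical object name

# One particle system per body section; "Fur_Chest" and "Chest" are illustrative names
mod = obj.modifiers.new(name="Fur_Chest", type='PARTICLE_SYSTEM')
psys = mod.particle_system
settings = psys.settings

settings.type = 'HAIR'
settings.count = 2000          # guide hairs for this section
settings.hair_length = 0.04    # guide length
settings.hair_step = 3         # guide segments (2 for blocking, 3-5 for detail passes)

# Limit emission to this body section with a vertex group painted in Weight Paint mode
psys.vertex_group_density = "Chest"

# Children fill in the volume between the guides
settings.child_type = 'INTERPOLATED'
settings.child_nbr = 50                # viewport children per guide
settings.rendered_child_count = 200    # render-time children per guide
settings.clump_factor = 0.4
settings.roughness_2 = 0.02            # random roughness

# Drive the Clump and Rough intensity with a painted mask (hypothetical texture name)
tex = bpy.data.textures.get("FurClumpMask")
if tex:
    slot = settings.texture_slots.add()
    slot.texture = tex
    slot.texture_coords = 'UV'
    slot.uv_layer = "UVMap"
    slot.use_map_time = False   # hair doesn't need the default Time influence
    slot.use_map_clump = True
    slot.use_map_rough = True
```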
The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike (a minimal node sketch of this idea is included at the end of this section).

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:
Body rig, for posing and positioning the character
Facial rig, for expressions and emotions
Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics (IK). This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing. Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust the lighting, reposition the character, or tweak the background.
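For illustration, here is a hedged sketch of the root-to-tip gradient and per-strand variation described in the fur shader paragraph above, built around Blender's Hair Info node. The colors, ranges, and material name are hypothetical placeholders, and the actual shader also mixes in the fur color texture and other tweaks not shown here.

```python
import bpy

# Hypothetical material name; in practice this would be the existing fur material
mat = bpy.data.materials.new("Fur_Shader_Sketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

output = nodes.new("ShaderNodeOutputMaterial")
bsdf = nodes.new("ShaderNodeBsdfPrincipled")
hair_info = nodes.new("ShaderNodeHairInfo")

# Root-to-tip gradient: Intercept is 0 at the root and 1 at the tip of each strand
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (0.02, 0.03, 0.12, 1.0)  # darker roots (illustrative values)
ramp.color_ramp.elements[1].color = (0.15, 0.25, 0.80, 1.0)  # brighter tips

# Per-strand variation: remap the 0-1 Random output to a gentle brightness multiplier
variation = nodes.new("ShaderNodeMapRange")
variation.inputs["To Min"].default_value = 0.9
variation.inputs["To Max"].default_value = 1.1

hsv = nodes.new("ShaderNodeHueSaturation")

links.new(hair_info.outputs["Intercept"], ramp.inputs["Fac"])
links.new(hair_info.outputs["Random"], variation.inputs["Value"])
links.new(ramp.outputs["Color"], hsv.inputs["Color"])
links.new(variation.outputs["Result"], hsv.inputs["Value"])
links.new(hsv.outputs["Color"], bsdf.inputs["Base Color"])
links.new(bsdf.outputs["BSDF"], output.inputs["Surface"])
```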
In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.
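As an illustration of that kind of setup, here is a minimal Blender Python sketch that builds three area lights and a low-strength HDRI world. The positions, energies, colors, and the HDRI file name are hypothetical placeholders; only the roughly 0.3 world strength comes from the description above.

```python
import bpy

def add_area_light(name, location, rotation, energy, size, color=(1.0, 1.0, 1.0)):
    """Create a simple area light; all values here are illustrative, not the exact scene settings."""
    light_data = bpy.data.lights.new(name=name, type='AREA')
    light_data.energy = energy
    light_data.size = size
    light_data.color = color
    light_obj = bpy.data.objects.new(name=name, object_data=light_data)
    light_obj.location = location
    light_obj.rotation_euler = rotation
    bpy.context.collection.objects.link(light_obj)
    return light_obj

# Classic three-point setup around a character placed at the origin
add_area_light("Key",  ( 2.5, -2.5, 2.5), (0.9, 0.0,  0.8), energy=800, size=1.0, color=(1.0, 0.95, 0.9))
add_area_light("Fill", (-2.5, -2.0, 1.5), (1.1, 0.0, -0.9), energy=250, size=2.0, color=(0.9, 0.95, 1.0))
add_area_light("Rim",  ( 0.0,  3.0, 2.0), (-0.9, 0.0, 0.0), energy=500, size=0.5)

# Low-intensity HDRI so the environment only enriches the ambient light
world = bpy.context.scene.world
if world is None:
    world = bpy.data.worlds.new("FurWorld")
    bpy.context.scene.world = world
world.use_nodes = True
env = world.node_tree.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.get("studio.hdr")   # hypothetical HDRI image name
bg = world.node_tree.nodes["Background"]
world.node_tree.links.new(env.outputs["Color"], bg.inputs["Color"])
bg.inputs["Strength"].default_value = 0.3       # the ~0.3 intensity mentioned above
```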
Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!).

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine
  • واش دراكم على أخبار البطولة الإنجليزية؟

    في المقال هدا، نغوصوا في تفاصيل الأسبوع الأخير من Championship، وين شفتوا كيفاش لامبارد كسب النقاط مع Coventry، وMillwall لي برهنوا على قوتهم، وBoro لي مازالوا محافظين على سجلهم المثالي! هاذو اللاعبين جابوا حيوية للمنافسة، وكل ماتش كان فيه إثارة وتشويق.

    بصراحة، كاين حاجة تشدني في هاذ الفرق، خاصة كيفاش يتحلوا بالصبر والعزيمة رغم الصعوبات. سواء كنت تشجع فريق أو لا، كاين حاجة نقدروا نتعلموها من هاذ الرياضيين: العمل الجاد يجيب نتائج!

    خليكم ديما متابعين، والفوتبول دايما مليء بالمفاجآت!

    https://www.skysports.com/football/news/11095/13415561/championship-talking-points

    #البطولة_الإنجليزية #Football #منافسة #Championship #رياضة
    🎉 واش دراكم على أخبار البطولة الإنجليزية؟ 🤔 في المقال هدا، نغوصوا في تفاصيل الأسبوع الأخير من Championship، وين شفتوا كيفاش لامبارد كسب النقاط مع Coventry، وMillwall لي برهنوا على قوتهم، وBoro لي مازالوا محافظين على سجلهم المثالي! هاذو اللاعبين جابوا حيوية للمنافسة، وكل ماتش كان فيه إثارة وتشويق. بصراحة، كاين حاجة تشدني في هاذ الفرق، خاصة كيفاش يتحلوا بالصبر والعزيمة رغم الصعوبات. سواء كنت تشجع فريق أو لا، كاين حاجة نقدروا نتعلموها من هاذ الرياضيين: العمل الجاد يجيب نتائج! خليكم ديما متابعين، والفوتبول دايما مليء بالمفاجآت! https://www.skysports.com/football/news/11095/13415561/championship-talking-points #البطولة_الإنجليزية #Football #منافسة #Championship #رياضة
    Like
    Love
    Wow
    Sad
    Angry
    945
    · 1 Commentaires ·0 Parts
  • Kiss takes the stage in World of Tanks’ Metal Fest

    We love summer music festivals, so here at Wargaming, our team at World of Tanks Modern Armor has made it a mission to bring you a hard-rockin’ annual music event that truly shakes the battlefield: Metal Fest.New tanks, new 3D Commanders, new Challenges and events: they’re all part of what Metal Fest offers each summer. But this is our third year of the event, coming to you on PS4 and PS5 starting August 26. We knew we had to go bigger and louder than ever.To borrow some lyrics you might know, we wanted the best—and we got the best.This year, our featured act is none other than the legendary band Kiss! Not only that; we’ve got the actual voices of core members Gene Simmons and Paul Stanley in the game.This is how it all shook out. 

    Play Video

    Shout It Out Loud

    Ever since the band’s shows at The Daisy in March 1973, when they debuted the character designs they’d become known for, Kiss has been more than a group of skilled musicians. They’ve been icons and personas.So even though Metal Fest 2025 features four new Kiss-inspired Premium tanks, we knew specifically that the 3D Commanders representing the four classic Kiss personashad to be absolutely right and larger than life.Fortunately, as World of Tanks’ senior producer JJ Bakken explains, the band was all in. “Geneand Paulwere gracious enough to give us some of their time for the game, as they represent the highest profile characters in Kiss … Both Gene and Paul saw all our concepts as we created them for characters and tanks.brought the idea to us to really lean into the fantastical elements of each character.” 

    Our art team worked to get those fantastical elements down, whether we’re talking the feline claws and nimble animations given to The Catman 3D Commander or the enormous pair of bat-like wings that Tanks’ art director Andy Dorizas suggested for The Demon 3D Commander.But as any tanker knows, when it comes to our 3D Commanders, it’s not just about the look. Our players’ favorite Commanders speak with custom-written voiceover lines, so of course that’s the case for all four of our Kiss Commanders.“Kiss themselves made the decision to have Paul and Gene featured as voices in the game,” says the game’s audio director, Brendan Blewett. “They were very particular in that the Kiss ‘characters’ are just that—characters, not real-life individuals. Each of them has traits and those are portrayed, in the instance of The Starchild and The Demon, by Paul and Gene.” 

    So what was it like, working with legendary musicians to bring the voices of their world-famous characters to our console battlefield?“Working with Paul and Gene was an absolute blast,” says Blewett. “These guys are obviously seasoned studio vets and really made the sessions fun and engaging.”He adds, “Gene lived up to his reputation as a master of trivia and kept us entertained between takes regaling us with stories from the road and factoids. Paul was absolutely a gracious, friendly individual and belted out an incredibly intense vocal performance and kept it going for the whole session. We even quipped that it was ‘like six months of shows in two hours.’ Impressive!”As for the voiceover for The Starman and The Catman, tankers and Kiss fans should rest rock out assured that these Commanders have received the same attention to detail. According to Blewett, “We worked with Kiss to understand the character profiles of The Catman and The Spaceman and came up with casting guidelines from there. For instance, The Catman is a smaller guy, witty and agile, while The Spaceman is older and wiser. The word ‘sagacious’ was used in session to describe the personality of The Spaceman.”

    War MachineIf you think the Kiss 3D Commanders sound impressive, be sure to recruit them during Metal Fest, and pair them up with our four Premium Kiss tanks, also inspired and named after the characters: The Demon, The Starchild, The Spaceman, and The Catman.

    Each of these tanks not only takes visual inspiration from Kiss; it also has abilities inspired by a specific band member’s persona. You’d better believe that The Demon is a tank that mounts a flamethrower!

    All of this is in addition to the Challenges, special event battles, daily login rewards, and more that Metal Fest offers. Rock out while you can, and don’t miss any of it—Metal Fest takes place in World of Tanks Modern Armor from August 26 through September 15 on PS4 and PS5!
    #kiss #takes #stage #world #tanks
    Kiss takes the stage in World of Tanks’ Metal Fest
    We love summer music festivals, so here at Wargaming, our team at World of Tanks Modern Armor has made it a mission to bring you a hard-rockin’ annual music event that truly shakes the battlefield: Metal Fest.New tanks, new 3D Commanders, new Challenges and events: they’re all part of what Metal Fest offers each summer. But this is our third year of the event, coming to you on PS4 and PS5 starting August 26. We knew we had to go bigger and louder than ever.To borrow some lyrics you might know, we wanted the best—and we got the best.This year, our featured act is none other than the legendary band Kiss! Not only that; we’ve got the actual voices of core members Gene Simmons and Paul Stanley in the game.This is how it all shook out.  Play Video Shout It Out Loud Ever since the band’s shows at The Daisy in March 1973, when they debuted the character designs they’d become known for, Kiss has been more than a group of skilled musicians. They’ve been icons and personas.So even though Metal Fest 2025 features four new Kiss-inspired Premium tanks, we knew specifically that the 3D Commanders representing the four classic Kiss personashad to be absolutely right and larger than life.Fortunately, as World of Tanks’ senior producer JJ Bakken explains, the band was all in. “Geneand Paulwere gracious enough to give us some of their time for the game, as they represent the highest profile characters in Kiss … Both Gene and Paul saw all our concepts as we created them for characters and tanks.brought the idea to us to really lean into the fantastical elements of each character.”  Our art team worked to get those fantastical elements down, whether we’re talking the feline claws and nimble animations given to The Catman 3D Commander or the enormous pair of bat-like wings that Tanks’ art director Andy Dorizas suggested for The Demon 3D Commander.But as any tanker knows, when it comes to our 3D Commanders, it’s not just about the look. Our players’ favorite Commanders speak with custom-written voiceover lines, so of course that’s the case for all four of our Kiss Commanders.“Kiss themselves made the decision to have Paul and Gene featured as voices in the game,” says the game’s audio director, Brendan Blewett. “They were very particular in that the Kiss ‘characters’ are just that—characters, not real-life individuals. Each of them has traits and those are portrayed, in the instance of The Starchild and The Demon, by Paul and Gene.”  So what was it like, working with legendary musicians to bring the voices of their world-famous characters to our console battlefield?“Working with Paul and Gene was an absolute blast,” says Blewett. “These guys are obviously seasoned studio vets and really made the sessions fun and engaging.”He adds, “Gene lived up to his reputation as a master of trivia and kept us entertained between takes regaling us with stories from the road and factoids. Paul was absolutely a gracious, friendly individual and belted out an incredibly intense vocal performance and kept it going for the whole session. We even quipped that it was ‘like six months of shows in two hours.’ Impressive!”As for the voiceover for The Starman and The Catman, tankers and Kiss fans should rest rock out assured that these Commanders have received the same attention to detail. According to Blewett, “We worked with Kiss to understand the character profiles of The Catman and The Spaceman and came up with casting guidelines from there. 
For instance, The Catman is a smaller guy, witty and agile, while The Spaceman is older and wiser. The word ‘sagacious’ was used in session to describe the personality of The Spaceman.” War MachineIf you think the Kiss 3D Commanders sound impressive, be sure to recruit them during Metal Fest, and pair them up with our four Premium Kiss tanks, also inspired and named after the characters: The Demon, The Starchild, The Spaceman, and The Catman. Each of these tanks not only takes visual inspiration from Kiss; it also has abilities inspired by a specific band member’s persona. You’d better believe that The Demon is a tank that mounts a flamethrower! All of this is in addition to the Challenges, special event battles, daily login rewards, and more that Metal Fest offers. Rock out while you can, and don’t miss any of it—Metal Fest takes place in World of Tanks Modern Armor from August 26 through September 15 on PS4 and PS5! #kiss #takes #stage #world #tanks
    Kiss takes the stage in World of Tanks’ Metal Fest
    blog.playstation.com
    We love summer music festivals, so here at Wargaming, our team at World of Tanks Modern Armor has made it a mission to bring you a hard-rockin’ annual music event that truly shakes the battlefield: Metal Fest.New tanks, new 3D Commanders, new Challenges and events: they’re all part of what Metal Fest offers each summer. But this is our third year of the event, coming to you on PS4 and PS5 starting August 26. We knew we had to go bigger and louder than ever.To borrow some lyrics you might know, we wanted the best—and we got the best.This year, our featured act is none other than the legendary band Kiss! Not only that; we’ve got the actual voices of core members Gene Simmons and Paul Stanley in the game.This is how it all shook out.  Play Video Shout It Out Loud Ever since the band’s shows at The Daisy in March 1973, when they debuted the character designs they’d become known for, Kiss has been more than a group of skilled musicians. They’ve been icons and personas.So even though Metal Fest 2025 features four new Kiss-inspired Premium tanks, we knew specifically that the 3D Commanders representing the four classic Kiss personas (The Demon, The Starchild, The Spaceman, and The Catman) had to be absolutely right and larger than life.Fortunately, as World of Tanks’ senior producer JJ Bakken explains, the band was all in. “Gene [Simmons] and Paul [Stanley] were gracious enough to give us some of their time for the game, as they represent the highest profile characters in Kiss … Both Gene and Paul saw all our concepts as we created them for characters and tanks. [They] brought the idea to us to really lean into the fantastical elements of each character.”  Our art team worked to get those fantastical elements down, whether we’re talking the feline claws and nimble animations given to The Catman 3D Commander or the enormous pair of bat-like wings that Tanks’ art director Andy Dorizas suggested for The Demon 3D Commander.But as any tanker knows, when it comes to our 3D Commanders, it’s not just about the look. Our players’ favorite Commanders speak with custom-written voiceover lines, so of course that’s the case for all four of our Kiss Commanders.“Kiss themselves made the decision to have Paul and Gene featured as voices in the game,” says the game’s audio director, Brendan Blewett. “They were very particular in that the Kiss ‘characters’ are just that—characters, not real-life individuals. Each of them has traits and those are portrayed, in the instance of The Starchild and The Demon, by Paul and Gene.”  So what was it like, working with legendary musicians to bring the voices of their world-famous characters to our console battlefield?“Working with Paul and Gene was an absolute blast,” says Blewett. “These guys are obviously seasoned studio vets and really made the sessions fun and engaging.”He adds, “Gene lived up to his reputation as a master of trivia and kept us entertained between takes regaling us with stories from the road and factoids. Paul was absolutely a gracious, friendly individual and belted out an incredibly intense vocal performance and kept it going for the whole session. We even quipped that it was ‘like six months of shows in two hours.’ Impressive!”As for the voiceover for The Starman and The Catman, tankers and Kiss fans should rest rock out assured that these Commanders have received the same attention to detail. According to Blewett, “We worked with Kiss to understand the character profiles of The Catman and The Spaceman and came up with casting guidelines from there. 
For instance, The Catman is a smaller guy, witty and agile, while The Spaceman is older and wiser. The word ‘sagacious’ was used in session to describe the personality of The Spaceman.” War Machine(s) If you think the Kiss 3D Commanders sound impressive (and yes, I’m biased, but they are), be sure to recruit them during Metal Fest, and pair them up with our four Premium Kiss tanks, also inspired and named after the characters: The Demon, The Starchild, The Spaceman, and The Catman. Each of these tanks not only takes visual inspiration from Kiss; it also has abilities inspired by a specific band member’s persona. You’d better believe that The Demon is a tank that mounts a flamethrower! All of this is in addition to the Challenges, special event battles, daily login rewards, and more that Metal Fest offers. Rock out while you can, and don’t miss any of it—Metal Fest takes place in World of Tanks Modern Armor from August 26 through September 15 on PS4 and PS5!
    2 Commentaires ·0 Parts
  • السلام عليكم يا جماعة!

    اليوم حبيت نشارك معاكم مقال مدهش تحت عنوان "RoundUp 180 - Talking Between Holes". إذا كنت من محبي عالم الألعاب، راهو عندك موعد مع مواضيع متنوعة تجمع بين الذكريات القديمة والقصص المثيرة! من Hardware Flashback للمسابقات القياسية في الألعاب، وحتى آخر الأخبار وTrivia، كل شي موجود!

    شخصياً، دايماً نحب نرجع لذكريات الطفولة ولعبة "Dinosaur Pie" تحديداً! تعطي شعور خاص وكأننا نعيش في زمن جميل.

    فكروا في كم من الأشياء الجديدة ممكن تتعلموها وتكتشفوها في عالم الألعاب. انطلقوا واقرأوا المقال!

    https://www.retrogamingroundup.com/shownotes/2022/roundup180_2022.05.php

    #ألعاب #RetroGaming #GamingCulture #ذكريات #Passion
    🎮 السلام عليكم يا جماعة! 🤗 اليوم حبيت نشارك معاكم مقال مدهش تحت عنوان "RoundUp 180 - Talking Between Holes". إذا كنت من محبي عالم الألعاب، راهو عندك موعد مع مواضيع متنوعة تجمع بين الذكريات القديمة والقصص المثيرة! من Hardware Flashback للمسابقات القياسية في الألعاب، وحتى آخر الأخبار وTrivia، كل شي موجود! شخصياً، دايماً نحب نرجع لذكريات الطفولة ولعبة "Dinosaur Pie" تحديداً! تعطي شعور خاص وكأننا نعيش في زمن جميل. فكروا في كم من الأشياء الجديدة ممكن تتعلموها وتكتشفوها في عالم الألعاب. انطلقوا واقرأوا المقال! https://www.retrogamingroundup.com/shownotes/2022/roundup180_2022.05.php #ألعاب #RetroGaming #GamingCulture #ذكريات #Passion
    www.retrogamingroundup.com
    Hardware Flashback - (00:00) Dinosaur Pie - (32:10) Guinness Gaming Records - (1:35:58) Martin Galway: Never Ending Story - (1:37:35) Jeremy Cooke Interview - (1:39:34) Top Ten Game Endorsements - (2:37:24) Gaming Trivia - (4:55:49)
    1 Commentaires ·0 Parts
  • واش راكم يا جماعة؟ اليوم حبيت نهدر معاكم على حاجة شوية حماسية في عالم الكرة! ⚽️

    المقال يتكلم على آخر أحداث البطولة، وين وركسهم مزال بلا انتصارات، ولانفارد يحقق إنجازات مع كوفنتري، بينما شيفيلد يونايتد يعاني من تراجع ملحوظ. كل فريق عنده قصته الخاصة، وهاد الشيء يجعل البطولة أكثر تشويقًا، صح ولا لا؟

    شخصيًا، نحب نشوف كيفاش الفرق تتأقلم مع التحديات، ونشعر بضغط المدربين واللاعبين. تذكروا لما كان فريقنا يحارب من أجل البقاء؟ كانت لحظات مشوقة، فيها الفرح والحزن!

    أحسن حاجة في الكرة هي المفاجآت، وكيما يقولوا "كل مباراة قصة جديدة". نحب نشوف آرائكم حول هالموضوع!

    https://www.skysports.com/football/news/11095/13412755/championship-talking-points-wrexham-still-winless-lampards-coventry-soar-sheff-utd-slump-ag
    واش راكم يا جماعة؟ اليوم حبيت نهدر معاكم على حاجة شوية حماسية في عالم الكرة! ⚽️ المقال يتكلم على آخر أحداث البطولة، وين وركسهم مزال بلا انتصارات، ولانفارد يحقق إنجازات مع كوفنتري، بينما شيفيلد يونايتد يعاني من تراجع ملحوظ. كل فريق عنده قصته الخاصة، وهاد الشيء يجعل البطولة أكثر تشويقًا، صح ولا لا؟ شخصيًا، نحب نشوف كيفاش الفرق تتأقلم مع التحديات، ونشعر بضغط المدربين واللاعبين. تذكروا لما كان فريقنا يحارب من أجل البقاء؟ كانت لحظات مشوقة، فيها الفرح والحزن! أحسن حاجة في الكرة هي المفاجآت، وكيما يقولوا "كل مباراة قصة جديدة". نحب نشوف آرائكم حول هالموضوع! https://www.skysports.com/football/news/11095/13412755/championship-talking-points-wrexham-still-winless-lampards-coventry-soar-sheff-utd-slump-ag
    1 Commentaires ·0 Parts
  • ياهلا! وينكم يا أصدقائي؟

    الويكاند جاي علينا وباقي غير شوية لنستمتع بالألعاب! لكن قبل ما ندخلو في خططنا، خلو نرجعو شوي للويك اللي فات. كان شوي هادئ في عالم نينتندو، ماكانش حتى خبر عن Direct! غريب، صح؟ لكن عندهم بعض المفاجآت، مثل الحدث الجديد لـ Mario Kart في اليابان وإعلان عن وصول Chibi-Robo لمكتبة GameCube NSO. بالإضافة، جربنا لعبة غريبة شوي اسمها Pokémon Legends: Z-A.

    أنا شخصياً متحمس لتجربة الألعاب الجديدة، خاصة مع الأجواء اللي تخلي الواحد يستمتع بركوب الموجة!

    يلا، خليونا نفكروا في التجارب الجديدة اللي نقدروا نعيشوها هذا الويكاند.

    https://www.nintendolife.com/features/talking-point-what-are-you-playing-this-weekend-16th-august
    #ألعاب #Nintendo #ويكاند #VideoGames #GamingCommunity
    ياهلا! وينكم يا أصدقائي؟ 🎮 الويكاند جاي علينا وباقي غير شوية لنستمتع بالألعاب! لكن قبل ما ندخلو في خططنا، خلو نرجعو شوي للويك اللي فات. كان شوي هادئ في عالم نينتندو، ماكانش حتى خبر عن Direct! غريب، صح؟ لكن عندهم بعض المفاجآت، مثل الحدث الجديد لـ Mario Kart في اليابان وإعلان عن وصول Chibi-Robo لمكتبة GameCube NSO. بالإضافة، جربنا لعبة غريبة شوي اسمها Pokémon Legends: Z-A. أنا شخصياً متحمس لتجربة الألعاب الجديدة، خاصة مع الأجواء اللي تخلي الواحد يستمتع بركوب الموجة! يلا، خليونا نفكروا في التجارب الجديدة اللي نقدروا نعيشوها هذا الويكاند. https://www.nintendolife.com/features/talking-point-what-are-you-playing-this-weekend-16th-august #ألعاب #Nintendo #ويكاند #VideoGames #GamingCommunity
    www.nintendolife.com
    Excelsior!The freakin' weekend is finally upon us, and we've got games to play! Before we get into our plans, though, let's recap the week.It was a pretty quiet one in Nintendo Land, with not so much as a whiff of a Direct — which is weird, after the
    Like
    Wow
    Love
    Angry
    Sad
    164
    · 1 Commentaires ·0 Parts
  • "كاين أمور في الكتابة تعكس مشاعرنا بشكل قوي، وهذا هو سر قوة الكلمات."

    في مقالي "Foolishness on the Page"، نحكي مع الكاتب Zahid Rafiq حول كيف أن الأحداث في الكتاب تعكس رؤى الشخصيات، والرؤى هذي تتشكل بناءً على مشاعرهم. يعني، كل ما نحسوه، ينعكس في الكتابة، وهذا يفسر لينا علاش الكتّاب يقدروا يوصلونا لعالمهم الخاص.

    من تجربتي، لما نكتب، نحاول نتصل بالشخصيات ونشعر بما يشعرون به، وهذا يخلينا نعيش التجربة معاهوم. الكتابة ما هيش غير كلمات، بل هي طريقة للتعبير عن الذات وفهم مشاعر الآخرين.

    خلينا نفكر في كيف يمكن للكلمات تغيرنا وتربطنا ببعضنا البعض، فالكتابة هي جسر بين الأرواح.

    https://www.publicbooks.org/foolishness-on-the-page-talking-with-zahid-rafiq/

    #كتابة #فلسفة #زahidRafiq #أدب #تحليل
    "كاين أمور في الكتابة تعكس مشاعرنا بشكل قوي، وهذا هو سر قوة الكلمات." في مقالي "Foolishness on the Page"، نحكي مع الكاتب Zahid Rafiq حول كيف أن الأحداث في الكتاب تعكس رؤى الشخصيات، والرؤى هذي تتشكل بناءً على مشاعرهم. يعني، كل ما نحسوه، ينعكس في الكتابة، وهذا يفسر لينا علاش الكتّاب يقدروا يوصلونا لعالمهم الخاص. من تجربتي، لما نكتب، نحاول نتصل بالشخصيات ونشعر بما يشعرون به، وهذا يخلينا نعيش التجربة معاهوم. الكتابة ما هيش غير كلمات، بل هي طريقة للتعبير عن الذات وفهم مشاعر الآخرين. خلينا نفكر في كيف يمكن للكلمات تغيرنا وتربطنا ببعضنا البعض، فالكتابة هي جسر بين الأرواح. https://www.publicbooks.org/foolishness-on-the-page-talking-with-zahid-rafiq/ #كتابة #فلسفة #زahidRafiq #أدب #تحليل
    www.publicbooks.org
    “What ends up in the writing is what the characters see, and what they see is determined by how they feel.” The post “Foolishness on the Page”: Talking with Zahid Rafiq appeared first on Public Books.
    1 Commentaires ·0 Parts
  • يا جماعة، واش راكم ديرين؟ اليوم رايحة نحكي لكم على موضوع يشعل المخ!

    في المقال الجديد تحت عنوان "What The Heck Are You Talking About؟"، صاحب الموضوع يهدر على مشاكل واجهات الويب اللي تشبه React. لكن المفاجأة هي أنه ما جابش أرقام، واليوم راح يقدملنا الأرقام ويعطينا طريقة جديدة باش نقيم الأداء. والشيء المثير هو أنه راح يستخدم كود خاص به وما يحبش ينتقد كود الآخرين، لكن كاين شوية "انتقادات" في الموضوع!

    شخصياً، كل مرة نبدأ مشروع جديد، نقعد نحتار في اختيار الأدوات المناسبة، ونشوف كيفاش نقدر ننجح في تحسين الأداء. المهم نكونو متيقظين ونستخدم الأرقام كمؤشر.

    بالمختصر، خليونا نفكرو شوية في الأرقام وكيش راهم يغيرو لنا اللعبة، يمكن راكم تكتشفو حاجة جديدة!

    https://code.thheller.com/blog/shadow-cljs/2025/06/25/what-the-heck-are-you-talking
    🔍 يا جماعة، واش راكم ديرين؟ اليوم رايحة نحكي لكم على موضوع يشعل المخ! 😅 في المقال الجديد تحت عنوان "What The Heck Are You Talking About؟"، صاحب الموضوع يهدر على مشاكل واجهات الويب اللي تشبه React. لكن المفاجأة هي أنه ما جابش أرقام، واليوم راح يقدملنا الأرقام ويعطينا طريقة جديدة باش نقيم الأداء. والشيء المثير هو أنه راح يستخدم كود خاص به وما يحبش ينتقد كود الآخرين، لكن كاين شوية "انتقادات" في الموضوع! 😜 شخصياً، كل مرة نبدأ مشروع جديد، نقعد نحتار في اختيار الأدوات المناسبة، ونشوف كيفاش نقدر ننجح في تحسين الأداء. المهم نكونو متيقظين ونستخدم الأرقام كمؤشر. بالمختصر، خليونا نفكرو شوية في الأرقام وكيش راهم يغيرو لنا اللعبة، يمكن راكم تكتشفو حاجة جديدة! https://code.thheller.com/blog/shadow-cljs/2025/06/25/what-the-heck-are-you-talking
    code.thheller.com
    In my previous What The Heck Just Happened post I outlined some of the problems I see in react-like web frontends. I didn’t present any actual numbers though, so this is all about those numbers and my general approach to evaluate performance behavior
    1 Commentaires ·0 Parts
  • 'It is not our aim to grow, grow, grow:' Gamescom 2025 touts record exhibitors but organizers says quality is better than quantity

    Chris Kerr, Senior Editor, News, GameDeveloper.comAugust 15, 20254 Min ReadImage via GamescomGamescom 2025 is less than a week away and the annual industry showcase has broken a deluge of records before a single person has stepped foot inside the cavernous halls of the Koelnmesse. The five-day event, which brands itself as Europe's leading trade fair for digital games culture, will host over 1,500 exhibitors from 72 countries in 2025. It's a notable first that organizers say will comprise the most diverse lineup in Gamescom history. To accommodate burgeoning exhibitor interest, Gamescom 2025 is expanding its footprint to a record 233,000 square meters. Record registration numbers mean it's a smart move, with last year's event attracting 335,000 visitors. Opening Night Live, the digital and in-person show that kicks off the event with a deluge of video game announcements, has also been moved to Hall 1 for the first time. The switch means 5,000 people will be able to attend in-person—although the showcase will also be streamed online for a global audience.Felix Falk, managing director of game—the German games industry association that owns the Gamescom brand and co-organizes the event with Koelnmesse—described interest in the show as "immense," but why has Gamescom flourished in the years following the pandemic when another major industry event that went by the name of E3 fell into ruin? Related:Speaking to Game Developer earlier this week, Falk suggested Gamescom weathered that storm and emerged stronger because organizers understood the importance of establishing a digital footprint even before COVID-19 left the world in stasis. Opening Night Live was part of that push to attract a global audience via the power of streaming, and Falk explained that almost 50 million people watched last year's Geoff Keighly-fronted opening salvo. That's a lot of eyes on the Gamescom brand. Falk said the pivot to a hybrid digital-meets-physical event that included online communities meant Gamescom was in "good shape" before the pandemic. But what about post-COVID? In a world where major publishers are by no means guaranteed to attend in-person events—largely because the likes of Nintendo, Sony, and Microsoft have all taken to saving their biggest announcements for their own digital directs—where is the value in meeting face-to-face?Gamescom organizer says face-to-face events close the 'emotional distance' between  developers and consumersFor both exhibitors and consumers, Falk suggested there is an "emotional" aspect to attending events in-person that is tough to replicate digitally." Related:"Being on-site is a totally different experience and much deeper and much more worthy for the companies and the games, compared to the digital format," he said. "You can see that if you head to the indie area, which is the biggest indie area we've ever had, and there you'll normally find the developer stood next to the game. You can talk to them—and they love the feedback. Of course, you could do a survey online and get feedback that way, but it's different from talking to each other." In short, he explained that in-person events close the "emotional distance" between developers and players to create experiences that simply cannot be replicated online. Falk described digital events as "fast" and "dynamic" by contrast, which makes them a unique proposition in their own right. 
So, by cultivating an online presence and letting people engage with the show virtually, Falk claimed Gamescom managed to become a "platform for the whole industry.""means you do find target groups and communities you normally don't reach," he added. "You also reach media or stakeholders who wouldn't normally come to your specific showcase, because you're a part of the biggest show worldwide for gaming."Making digital inroads also allows Gamescom to expand without stretching the in-person event to a breaking point. Discussing what long-term success means for the showcase, Falk explained he doesn't believe Gamescom will live or die based on "one KPI of scale." Related:In fact, he said organizers have been intentionally limiting attendance in Cologne to preserve the atmosphere of the show. "We don't want the atmosphere to be worse because we squeeze in too many people," he continued. "We could squeeze more in—which we don't—because the quality of the experience is important for the fans." Still, there is room for measured growth. Falk noted the record number of exhibitors was possible because there is still room to expand the show floor, but reiterated that "more" isn't the overarching plan. "We have more exhibitors than ever before, which is great because we still have space to grow, butis more about variety and diversity of content," he added. "It is not our aim to grow, grow, grow—because that doesn't make sense. It's more about the quality and most importantly the digital reach, which we have seen over the last few years is exponentially growing." Gamescom is also expanding into other regions such as Latin America and Asia, but we'll have more on that particular topic next week. Stay locked on Game Developer for more.Game Developer attended Gamescom 2025 via the Gamescom Media Ambassador Program, which covered flights and accommodation. about:Top StoriesGamescomAbout the AuthorChris KerrSenior Editor, News, GameDeveloper.comGame Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.See more from Chris KerrDaily news, dev blogs, and stories from Game Developer straight to your inboxStay UpdatedYou May Also Like
    #039it #not #our #aim #grow
    'It is not our aim to grow, grow, grow:' Gamescom 2025 touts record exhibitors but organizers says quality is better than quantity
    Chris Kerr, Senior Editor, News, GameDeveloper.comAugust 15, 20254 Min ReadImage via GamescomGamescom 2025 is less than a week away and the annual industry showcase has broken a deluge of records before a single person has stepped foot inside the cavernous halls of the Koelnmesse. The five-day event, which brands itself as Europe's leading trade fair for digital games culture, will host over 1,500 exhibitors from 72 countries in 2025. It's a notable first that organizers say will comprise the most diverse lineup in Gamescom history. To accommodate burgeoning exhibitor interest, Gamescom 2025 is expanding its footprint to a record 233,000 square meters. Record registration numbers mean it's a smart move, with last year's event attracting 335,000 visitors. Opening Night Live, the digital and in-person show that kicks off the event with a deluge of video game announcements, has also been moved to Hall 1 for the first time. The switch means 5,000 people will be able to attend in-person—although the showcase will also be streamed online for a global audience.Felix Falk, managing director of game—the German games industry association that owns the Gamescom brand and co-organizes the event with Koelnmesse—described interest in the show as "immense," but why has Gamescom flourished in the years following the pandemic when another major industry event that went by the name of E3 fell into ruin? Related:Speaking to Game Developer earlier this week, Falk suggested Gamescom weathered that storm and emerged stronger because organizers understood the importance of establishing a digital footprint even before COVID-19 left the world in stasis. Opening Night Live was part of that push to attract a global audience via the power of streaming, and Falk explained that almost 50 million people watched last year's Geoff Keighly-fronted opening salvo. That's a lot of eyes on the Gamescom brand. Falk said the pivot to a hybrid digital-meets-physical event that included online communities meant Gamescom was in "good shape" before the pandemic. But what about post-COVID? In a world where major publishers are by no means guaranteed to attend in-person events—largely because the likes of Nintendo, Sony, and Microsoft have all taken to saving their biggest announcements for their own digital directs—where is the value in meeting face-to-face?Gamescom organizer says face-to-face events close the 'emotional distance' between  developers and consumersFor both exhibitors and consumers, Falk suggested there is an "emotional" aspect to attending events in-person that is tough to replicate digitally." Related:"Being on-site is a totally different experience and much deeper and much more worthy for the companies and the games, compared to the digital format," he said. "You can see that if you head to the indie area, which is the biggest indie area we've ever had, and there you'll normally find the developer stood next to the game. You can talk to them—and they love the feedback. Of course, you could do a survey online and get feedback that way, but it's different from talking to each other." In short, he explained that in-person events close the "emotional distance" between developers and players to create experiences that simply cannot be replicated online. Falk described digital events as "fast" and "dynamic" by contrast, which makes them a unique proposition in their own right. 
So, by cultivating an online presence and letting people engage with the show virtually, Falk claimed Gamescom managed to become a "platform for the whole industry.""means you do find target groups and communities you normally don't reach," he added. "You also reach media or stakeholders who wouldn't normally come to your specific showcase, because you're a part of the biggest show worldwide for gaming."Making digital inroads also allows Gamescom to expand without stretching the in-person event to a breaking point. Discussing what long-term success means for the showcase, Falk explained he doesn't believe Gamescom will live or die based on "one KPI of scale." Related:In fact, he said organizers have been intentionally limiting attendance in Cologne to preserve the atmosphere of the show. "We don't want the atmosphere to be worse because we squeeze in too many people," he continued. "We could squeeze more in—which we don't—because the quality of the experience is important for the fans." Still, there is room for measured growth. Falk noted the record number of exhibitors was possible because there is still room to expand the show floor, but reiterated that "more" isn't the overarching plan. "We have more exhibitors than ever before, which is great because we still have space to grow, butis more about variety and diversity of content," he added. "It is not our aim to grow, grow, grow—because that doesn't make sense. It's more about the quality and most importantly the digital reach, which we have seen over the last few years is exponentially growing." Gamescom is also expanding into other regions such as Latin America and Asia, but we'll have more on that particular topic next week. Stay locked on Game Developer for more.Game Developer attended Gamescom 2025 via the Gamescom Media Ambassador Program, which covered flights and accommodation. about:Top StoriesGamescomAbout the AuthorChris KerrSenior Editor, News, GameDeveloper.comGame Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.See more from Chris KerrDaily news, dev blogs, and stories from Game Developer straight to your inboxStay UpdatedYou May Also Like #039it #not #our #aim #grow
    'It is not our aim to grow, grow, grow:' Gamescom 2025 touts record exhibitors but organizers says quality is better than quantity
    www.gamedeveloper.com
    Chris Kerr, Senior Editor, News, GameDeveloper.comAugust 15, 20254 Min ReadImage via GamescomGamescom 2025 is less than a week away and the annual industry showcase has broken a deluge of records before a single person has stepped foot inside the cavernous halls of the Koelnmesse. The five-day event, which brands itself as Europe's leading trade fair for digital games culture, will host over 1,500 exhibitors from 72 countries in 2025. It's a notable first that organizers say will comprise the most diverse lineup in Gamescom history. To accommodate burgeoning exhibitor interest, Gamescom 2025 is expanding its footprint to a record 233,000 square meters. Record registration numbers mean it's a smart move, with last year's event attracting 335,000 visitors. Opening Night Live, the digital and in-person show that kicks off the event with a deluge of video game announcements, has also been moved to Hall 1 for the first time. The switch means 5,000 people will be able to attend in-person—although the showcase will also be streamed online for a global audience.Felix Falk, managing director of game—the German games industry association that owns the Gamescom brand and co-organizes the event with Koelnmesse—described interest in the show as "immense," but why has Gamescom flourished in the years following the pandemic when another major industry event that went by the name of E3 fell into ruin? Related:Speaking to Game Developer earlier this week, Falk suggested Gamescom weathered that storm and emerged stronger because organizers understood the importance of establishing a digital footprint even before COVID-19 left the world in stasis. Opening Night Live was part of that push to attract a global audience via the power of streaming, and Falk explained that almost 50 million people watched last year's Geoff Keighly-fronted opening salvo. That's a lot of eyes on the Gamescom brand. Falk said the pivot to a hybrid digital-meets-physical event that included online communities meant Gamescom was in "good shape" before the pandemic. But what about post-COVID? In a world where major publishers are by no means guaranteed to attend in-person events—largely because the likes of Nintendo, Sony, and Microsoft have all taken to saving their biggest announcements for their own digital directs—where is the value in meeting face-to-face?Gamescom organizer says face-to-face events close the 'emotional distance' between  developers and consumersFor both exhibitors and consumers, Falk suggested there is an "emotional" aspect to attending events in-person that is tough to replicate digitally." Related:"Being on-site is a totally different experience and much deeper and much more worthy for the companies and the games, compared to the digital format," he said. "You can see that if you head to the indie area, which is the biggest indie area we've ever had, and there you'll normally find the developer stood next to the game. You can talk to them—and they love the feedback. Of course, you could do a survey online and get feedback that way, but it's different from talking to each other." In short, he explained that in-person events close the "emotional distance" between developers and players to create experiences that simply cannot be replicated online. Falk described digital events as "fast" and "dynamic" by contrast, which makes them a unique proposition in their own right. 
So, by cultivating an online presence and letting people engage with the show virtually, Falk claimed Gamescom managed to become a "platform for the whole industry.""[The hybrid setup] means you do find target groups and communities you normally don't reach," he added. "You also reach media or stakeholders who wouldn't normally come to your specific showcase, because you're a part of the biggest show worldwide for gaming."Making digital inroads also allows Gamescom to expand without stretching the in-person event to a breaking point. Discussing what long-term success means for the showcase, Falk explained he doesn't believe Gamescom will live or die based on "one KPI of scale." Related:In fact, he said organizers have been intentionally limiting attendance in Cologne to preserve the atmosphere of the show. "We don't want the atmosphere to be worse because we squeeze in too many people," he continued. "We could squeeze more in—which we don't—because the quality of the experience is important for the fans." Still, there is room for measured growth. Falk noted the record number of exhibitors was possible because there is still room to expand the show floor, but reiterated that "more" isn't the overarching plan. "We have more exhibitors than ever before, which is great because we still have space to grow, but [success] is more about variety and diversity of content," he added. "It is not our aim to grow, grow, grow—because that doesn't make sense. It's more about the quality and most importantly the digital reach, which we have seen over the last few years is exponentially growing." Gamescom is also expanding into other regions such as Latin America and Asia, but we'll have more on that particular topic next week. Stay locked on Game Developer for more.Game Developer attended Gamescom 2025 via the Gamescom Media Ambassador Program, which covered flights and accommodation.Read more about:Top StoriesGamescomAbout the AuthorChris KerrSenior Editor, News, GameDeveloper.comGame Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.See more from Chris KerrDaily news, dev blogs, and stories from Game Developer straight to your inboxStay UpdatedYou May Also Like
    2 Commentaires ·0 Parts
  • Best Dunkers In NBA 2K26

    We continue to get more player ratings for the upcoming NBA 2K26, and the latest set reveals who the game's best dunkers are. Specifically, this ranks the the top 10 dunkers based on their Driving Dunk stat, which helps represent those players who are best at pulling off a spectacular dunk on the court.This is a continuation of ratings week for 2K26, and we've already gone over the top 10 lists for the best three-point shooters, best handles, and best rookies. Anthony Edwards - 97Anthony Edwards leads the list of dunkers in NBA 2K26, and for good reason. No one had more flashy dunks over the course of last season than Ant. Ja Morant - 96Ja Morant comes in right behind Anthony Edwards at 96 OVR. If some of Morant's attempted dunks from the last two or three seasons had gone in, we'd probably be talking about him in the number one spot. Nevertheless, Morant is still one of the elite dunkers in the NBA. Jalen Johnson - 96Jalen Johnson is a player that's really come into his own the past two seasons, and he's flown under the radar as one of the best dunkers in the NBA since he was drafted four years ago. Johnson regularly puts on a show playing the Hawks, and he's had some of the most jaw-dropping dunks in the league recently. Amen Thompson - 96Amen Thompson is a young player who is on the verge of becoming a superstar. His sheer athleticism is something to behold every night he steps on the court, and his dunks have only been improving since being drafted by the Rockets. Shaedon Sharpe - 95Shaedon Sharpe is another underrated dunker who has become a near-20-point scorer for the Trail Blazers during the past two seasons. Many fans have wanted to see him compete in the Dunk Contest, so perhaps we'll finally get him included in 2026. Jaylen Brown - 94Jaylen Brown might be remembered most for a lackluster Dunk Contest in 2025, but let's put that aside and think about all of his miraculous dunks from the regular and postseason in the 2020s. Brown delivers some of the best posters in the NBA year in and year out, and his rating is right where it should be. Derrick Jones Jr. - 94Some players in the NBA have made a career off being able to dunk the ball, and while Derrick Jones Jr. is a good defender, his dunking ability is his best offensive skill. He won the 2020 Dunk Contest over another member of this top 10 list, cementing his place in NBA dunking lore. Zach LaVine - 94Zach LaVine is another spectacular dunker in the NBA, particularly due to his Dunk Contest performance in 2016. LaVine has remained elite in this regard as he's transitioned to a few different teams. Aaron Gordon - 94Aaron Gordon is perhaps the most well-known and highly-regarded dunker of his generation. Multiple Dunk Contest finals appearances and consistent dunking performances throughout his time in the NBA has cemented his status. It's only appropriate the top 10 would include someone who pulled off a game-wining, buzzer-beating dunk in this past year's playoffs. Zion Williamson - 93Rounding out the list is Zion Williamson, a player who can still pull off incredible dunks when he's playing, though he is often not on the court consistently, as Zion has certainly dealt with his fair share of injuries over the years.
  • Now We’re Talking: NVIDIA Releases Open Dataset, Models for Multilingual Speech AI

    Of around 7,000 languages in the world, a tiny fraction are supported by AI language models. NVIDIA is tackling the problem with a new dataset and models that support the development of high-quality speech recognition and translation AI for 25 European languages — including languages with limited available data like Croatian, Estonian and Maltese.
    These tools will enable developers to more easily scale AI applications to support global users with fast, accurate speech technology for production-scale use cases such as multilingual chatbots, customer service voice agents and near-real-time translation services. They include:

    Granary, a massive, open-source corpus of multilingual speech datasets that contains around a million hours of audio, including nearly 650,000 hours for speech recognition and over 350,000 hours for speech translation.
    NVIDIA Canary-1b-v2, a billion-parameter model trained on Granary for high-quality transcription of European languages, plus translation between English and two dozen supported languages.
    NVIDIA Parakeet-tdt-0.6b-v3, a streamlined, 600-million-parameter model designed for real-time or large-volume transcription of Granary’s supported languages.

    The paper behind Granary will be presented at Interspeech, a language processing conference taking place in the Netherlands, Aug. 17-21. The dataset, as well as the new Canary and Parakeet models, are now available on Hugging Face.
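    Since everything is published on Hugging Face, a quick way to poke at Granary is the datasets library. Below is a minimal sketch that streams a handful of samples rather than downloading the corpus; the repo ID, config name, and field names are assumptions made for illustration, so check the dataset card for the real layout.

    # Minimal sketch: stream a few Granary samples from Hugging Face.
    # The repo ID, config, and field names are assumptions -- verify them
    # against the dataset card before relying on this.
    from datasets import load_dataset

    granary = load_dataset(
        "nvidia/Granary",   # assumed repo ID
        "et",               # assumed config name for Estonian
        split="train",
        streaming=True,     # stream instead of downloading ~1M hours of audio
    )

    for sample in granary.take(3):
        print(sample.keys())  # inspect which columns are actually available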
    How Granary Addresses Data Scarcity
    To develop the Granary dataset, the NVIDIA speech AI team collaborated with researchers from Carnegie Mellon University and Fondazione Bruno Kessler. The team passed unlabeled audio through an innovative processing pipeline powered by the NVIDIA NeMo Speech Data Processor toolkit, which turned it into structured, high-quality data.
    This pipeline allowed the researchers to enhance public speech data into a usable format for AI training, without the need for resource-intensive human annotation. It’s available in open source on GitHub.
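    The structured output of a pipeline like this is typically a set of JSON-lines manifests of the kind NeMo speech recipes consume, with one utterance per line holding an audio path, a duration, and a transcript. Here is a minimal sketch of reading such a manifest; the file name is hypothetical and the exact schema depends on the pipeline configuration.

    # Minimal sketch: read a NeMo-style JSON-lines manifest, the kind of
    # structured output a processing pipeline like this typically produces.
    # The file name and any fields beyond audio_filepath/duration/text are
    # assumptions -- the actual schema depends on the pipeline config.
    import json

    with open("granary_et_train_manifest.json", encoding="utf-8") as f:  # assumed path
        entries = [json.loads(line) for line in f if line.strip()]

    total_hours = sum(e["duration"] for e in entries) / 3600.0
    print(f"{len(entries)} utterances, {total_hours:.1f} hours of audio")
    print(entries[0]["audio_filepath"], entries[0]["text"])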
    With Granary’s clean, ready-to-use data, developers can get a head start building models that tackle transcription and translation tasks in nearly all of the European Union’s 24 official languages, plus Russian and Ukrainian.
    For European languages underrepresented in human-annotated datasets, Granary provides a critical resource to develop more inclusive speech technologies that better reflect the linguistic diversity of the continent — all while using less training data.
    The team demonstrated in their Interspeech paper that, compared to other popular datasets, it takes around half as much Granary training data to achieve a target accuracy level for automatic speech recognition (ASR) and automatic speech translation (AST).
    Tapping NVIDIA NeMo to Turbocharge Transcription
    The new Canary and Parakeet models offer examples of the kinds of models developers can build with Granary, customized to their target applications. Canary-1b-v2 is optimized for accuracy on complex tasks, while Parakeet-tdt-0.6b-v3 is designed for high-speed, low-latency tasks.
    By sharing the methodology behind the Granary dataset and these two models, NVIDIA is enabling the global speech AI developer community to adapt this data processing workflow to other ASR or AST models or additional languages, accelerating speech AI innovation.
    Canary-1b-v2, available under a permissive license, expands the Canary family’s supported languages from four to 25. It offers transcription and translation quality comparable to models 3x larger while running inference up to 10x faster.
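    As a rough illustration of how a checkpoint like this is usually loaded for inference, here is a hedged sketch using NeMo's ASR collection. The Hugging Face model ID is assumed from the announced name, and the options for selecting the translation task and language pair are left to the model card rather than guessed here.

    # Rough sketch: transcribe a clip with Canary via NeMo's ASR collection.
    # The Hugging Face model ID is assumed from the announced name; task and
    # language options for translation are configured per the model card and
    # are not shown here.
    import nemo.collections.asr as nemo_asr

    canary = nemo_asr.models.ASRModel.from_pretrained("nvidia/canary-1b-v2")  # assumed ID
    hypotheses = canary.transcribe(["sample_et.wav"], batch_size=1)  # assumed local file
    print(hypotheses[0])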
    [Video: Canary-1b-v2 demo: https://blogs.nvidia.com/wp-content/uploads/2025/08/Canary-demo.mp4]
    NVIDIA NeMo, a modular software suite for managing the AI agent lifecycle, accelerated speech AI model development. NeMo Curator, part of the software suite, enabled the team to filter out synthetic examples from the source data so that only high-quality samples were used for model training. The team also harnessed the NeMo Speech Data Processor toolkit for tasks like aligning transcripts with audio files and converting data into the required formats.
    Parakeet-tdt-0.6b-v3 prioritizes high throughput and is capable of transcribing 24-minute audio segments in a single inference pass. The model automatically detects the input audio language and transcribes without additional prompting steps.
    Both Canary and Parakeet models provide accurate punctuation, capitalization and word-level timestamps in their outputs.
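    A similarly hedged sketch for Parakeet-style batch transcription is shown below. The model ID follows the name above, and the word-timestamp request mirrors the pattern recent NeMo model cards describe; the exact flag and output fields should be double-checked there.

    # Rough sketch: long-audio transcription with Parakeet, requesting word
    # timestamps. The model ID, the timestamps flag, and the output structure
    # are assumptions based on recent NeMo model cards -- verify before use.
    import nemo.collections.asr as nemo_asr

    parakeet = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v3")  # assumed ID
    out = parakeet.transcribe(["podcast_24min.wav"], timestamps=True)  # assumed local file

    print(out[0].text)                        # full transcript with punctuation and capitalization
    for word in out[0].timestamp["word"][:5]:
        print(word)                           # per-word start/end offsets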
    Read more on GitHub and get started with Granary on Hugging Face.