• Take a Look at This Impressive Recreation of Kowloon Walled City in Minecraft

    3D creator Sluda Builds has unveiled an impressive recreation of the real-life Kowloon Walled City of Hong Kong, built entirely in Minecraft. The artist recreated the city's dense urban environment using the game's blocks, aiming to capture the gritty aesthetic of the notoriously dangerous and overcrowded settlement. In a time-lapse video, Sluda Builds showcased the entire creation process step by step, covering 3D modeling, topography, the construction of buildings, floors, stairs, facades, rooftops, the surroundings, and other details.

    Sluda Builds is an architect by profession and has carried that expertise into the digital world, creating impressive projects in Minecraft. Also check out another Minecraft-inspired project, with voxel blocks mapped onto a spherical planet, by 3D artist Bowerbyte.

    Don't forget to join our 80 Level Talent platform and our new Discord server, and follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
    80.lv
  • Bring Your MetaHumans to Life Using Houdini with the Latest UE5 Update

    Epic Games has announced updates to Unreal Engine's MetaHuman Creator. The latest release integrates it with SideFX Houdini, letting you combine the power of both toolsets and bring your MetaHuman characters to life with Houdini's effects.

    With the latest MetaHuman Character Rig HDA update and expanded grooming tools, you can bring your MetaHumans into Houdini and use its full arsenal of procedural tools to add complex animation and effects. Creators can import and assemble the head, body, and textures of MetaHuman characters created in Unreal Engine with MetaHuman Creator.

    There is also an update to Houdini's existing groom tools: you can now craft hairstyles compatible with MetaHuman Creator directly on your MetaHuman character, removing the need to switch back and forth with Unreal Engine. Note that the MetaHuman Character Rig HDA requires Houdini 21.0 or later.

    Yesterday, we shared August's free learning content from Epic Games, which includes tutorials on animating MetaHumans, creating Blueprint-controlled particle effects, and using Epic Online Services in your projects. If you want to learn more about MetaHumans, check out Marlon R. Nunez's experiment testing Live Link from an iPhone in Unreal Engine.

    Learn more about the MetaHuman Character Rig HDA update here, and don't forget to join our 80 Level Talent platform and our new Discord server, and follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
  • Fur Grooming Techniques For Realistic Stitch In Blender

    Introduction

    Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers and do something creative, but programming wasn't really my thing.

    He asked me a simple question: "Well, what do you actually enjoy doing?"

    I said, "Video games. I love video games. But I don't have time to learn how to make them; I've got a job, a family, and a kid."

    Then he hit me with something that really shifted my whole perspective. "Oleh, do you play games on your PlayStation?"

    I said, "Of course."

    He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

    That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

    3D completely took over my life. During lunch breaks I watched 3D videos, on the bus I scrolled through 3D TikToks, at home I took 3D courses, and the word "3D" became a constant in my vocabulary.

    After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights, but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish way: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming had always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow. After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: model everything completely from scratch, starting with a sphere, or reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable, so I basically ended up doing everything from scratch anyway.

So I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting there. Since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace: I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created with it.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting. Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch with both a body and a facial rig, so I started sculpting with his mouth open.

While studying various references, I noticed something interesting: Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so the muscles help guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste.
So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt. In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too much time, and honestly, I didn't have that luxury. So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.

With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean, optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar at the top, while the left has a scar at the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands. When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail carried by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body: the primary color of his fur, with additional shading such as a lighter tone on the front and a darker tone on the back and nape
- The nose and ears: these zones demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look.
This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail: in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model. That covers the texturing of Stitch's body.
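The AO overlay described above follows standard Multiply layer-blend math. As a rough illustration (plain Python, not tied to any particular texturing tool; the function name is mine), occluded areas darken in proportion to the layer's opacity:

```python
def ao_multiply(base, ao, opacity=0.35):
    """Composite an AO value over a base color channel with Multiply blending.

    base, ao, and opacity are floats in [0, 1]. The usual layer formula is
    result = base * (1 - opacity) + (base * ao) * opacity.
    """
    return base * (1.0 - opacity) + (base * ao) * opacity

# Fully open areas (ao = 1.0) are untouched; strong occlusion darkens
# the base only moderately at 35% opacity, preserving the albedo.
open_area = ao_multiply(0.5, 1.0)  # 0.5 (unchanged)
crevice = ao_multiply(0.5, 0.2)    # 0.36 (gently darkened)
```

At opacity 1.0 this reduces to a plain multiply, and at 0 the AO map has no effect, which is why a modest value around 35% keeps the volume cue without muddying the colors.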
I also created a separate texture for the fur. This one was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands. At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
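As a rough sketch of how weight-painted influence translates into per-section fur density (Blender's particle systems handle this internally via vertex-group density; the helper below and all its names are illustrative only):

```python
def hairs_per_section(section_weights, hair_budget):
    """Distribute a total hair count across body sections in proportion
    to their average Weight Paint value (0.0 = no fur, 1.0 = full density)."""
    total = sum(section_weights.values())
    if total == 0:
        return {name: 0 for name in section_weights}
    return {name: round(hair_budget * w / total)
            for name, w in section_weights.items()}

# Hypothetical weights: dense chest fur, softer stomach, bare palms.
weights = {"chest": 1.0, "stomach": 0.5, "palms": 0.0}
counts = hairs_per_section(weights, 3000)
# The chest receives twice the stomach's share; the palms grow nothing.
```

Splitting the groom into many particle systems, as described above, means each section gets its own weights and parameters instead of one global set.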
This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis. The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. I also used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there.
These areas needed manual fixing. As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase used only two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow. The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This added visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it.
For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible, intuitive control during posing and allowed for dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used an ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses: Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background. In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character, so I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. Done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur: rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders. I don't spend too much time on post-processing, just basic refinements in Photoshop.
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed; the goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself, to reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me; it's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But I also brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio, and I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine
    Fur Grooming Techniques For Realistic Stitch In Blender
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.

So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch. First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

So I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into a few main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste.
So I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt. In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (for the claws, I used overlapping UVs to preserve texel density for the other parts)

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears; these zones demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look.
This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body.
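As a side note, the effect of that Multiply-at-35% AO layer is easy to reason about numerically. Here is a minimal sketch in plain Python (my own illustration of how layer-based texturing tools blend such a layer, not code from the artist's pipeline or from Substance 3D Painter itself):

```python
def ao_overlay(base, ao, opacity=0.35):
    """Blend an AO value over a base color channel using Multiply
    at partial opacity, the way a layer stack composites it.

    base, ao: channel values in [0.0, 1.0]; opacity: layer opacity.
    """
    multiplied = base * ao  # full-strength Multiply result
    # Mix the multiplied result back over the base by the layer opacity
    return base * (1.0 - opacity) + multiplied * opacity

# White AO (1.0) leaves the base color unchanged (result stays ~0.8),
# while fully dark crevices (AO = 0.0) darken it by at most 35%:
light_area = ao_overlay(0.8, 1.0)  # ~0.8
crevice = ao_overlay(0.8, 0.0)     # ~0.52, i.e. 0.8 * 0.65
```

Because the opacity caps the darkening, occluded areas gain volume without the texture ever going fully black, which matches the subtle effect described above.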
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there.
These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it.
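Returning for a moment to the fur shader described above: the root-to-tip treatment can be sketched as a small function. This is a hypothetical illustration in plain Python rather than Blender shader nodes; the `root_darken`, `tip_brighten`, and `variation` values are made up, and `t` stands in for the kind of 0-to-1 root-to-tip coordinate a hair shader exposes (Blender's Hair Info node calls it Intercept):

```python
import random

def strand_color(base, t, seed,
                 root_darken=0.6, tip_brighten=1.25, variation=0.08):
    """Sketch of per-strand fur coloring: darker roots, brighter tips,
    and a small stable random brightness offset per strand.

    base: base fur color as (r, g, b) in [0, 1]
    t:    position along the strand, 0.0 at the root, 1.0 at the tip
    seed: per-strand id, so each strand keeps the same offset
    """
    # Linear gradient from a darkened root to a brightened tip
    gradient = root_darken + (tip_brighten - root_darken) * t
    # Stable per-strand variation, seeded by the strand id
    jitter = 1.0 + random.Random(seed).uniform(-variation, variation)
    return tuple(min(1.0, c * gradient * jitter) for c in base)

blue = (0.25, 0.35, 0.85)
root = strand_color(blue, 0.0, seed=42)  # darker than the base color
tip = strand_color(blue, 1.0, seed=42)   # brighter than the base color
```

In the actual shader this logic lives in the node graph, but the idea is the same: the root-to-tip gradient and per-strand jitter break up flat color and give the fur visual depth.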
For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses. Stitch is so expressive and full of personality that I wanted to try hundreds of them, but I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at a very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop.
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish.

The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. 
And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. 
Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. 
When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. 
So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. 
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose. (For the claws, I used overlapping UVs to preserve texel density for the other parts.)

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, which demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look.
This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears: slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only Height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail.

After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing the opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body.
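The final AO pass described above (Multiply blend, Color channel only, roughly 35% opacity) comes down to a simple per-channel formula. Here is a minimal sketch of that math; the function name is mine, and the 0..1 normalized values are illustrative rather than anything specific to Substance 3D Painter's internals:

```python
def ao_multiply(base: float, ao: float, opacity: float = 0.35) -> float:
    """Blend an ambient-occlusion value over one base color channel.

    Multiply blend mode at partial opacity: the result is a mix between
    the untouched base and (base * ao), weighted by the layer opacity.
    All values are normalized to 0..1.
    """
    blended = base * ao
    return base * (1.0 - opacity) + blended * opacity

# Open, unoccluded areas (ao = 1.0) are left untouched:
print(ao_multiply(0.8, 1.0))   # 0.8
# Crevices (low ao) are darkened, but only by 35% of the full multiply:
print(ao_multiply(0.8, 0.2))   # 0.576
```

At full opacity this would collapse to a plain multiply; dialing opacity down to ~0.35 is what keeps the effect subtle, darkening crevices without crushing the overall brightness of the texture.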
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed. In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers.
The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result.
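The texture-driven control described above (brighter mask pixel, stronger Clump or Roughness) is essentially a linear remap of a grayscale value into a parameter range. A minimal sketch of that mapping; the function and the lo/hi ranges are illustrative assumptions, not the project's actual shader values:

```python
def remap(mask_value: float, lo: float, hi: float) -> float:
    """Map a 0..1 grayscale mask sample to a fur parameter range.

    mask_value -- brightness sampled from the control texture
                  (0.0 = black/soft, 1.0 = white/strong)
    lo, hi     -- parameter value at black and at white, respectively
    """
    return lo + (hi - lo) * mask_value

# Illustrative values: the chest wants heavy clumping, the belly stays soft.
chest_clump = remap(0.9, lo=0.1, hi=1.0)   # bright mask -> strong clumps
belly_clump = remap(0.2, lo=0.1, hi=1.0)   # dark mask   -> soft fur
print(round(chest_clump, 2), round(belly_clump, 2))   # 0.91 0.28
```

The appeal of this setup is that one painted map smoothly grades the fur across the whole body, instead of hand-tuning a constant per particle system.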
I also had to revisit certain patches that looked bald even though interpolation and weight painting were set correctly: the fur simply didn't render properly there, and those areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended.
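The strand-shading idea described above (dark roots, brighter tips, slight per-strand color variation) can be sketched as a small function over the hair intercept. Everything here is an assumption for illustration; the names, the darkening/brightening factors, and the variation amount are mine, not the values from the actual node setup:

```python
import random

def strand_color(base: float, t: float, seed: int,
                 root_dark: float = 0.6, tip_bright: float = 1.15,
                 variation: float = 0.05) -> float:
    """Shade one color channel of a fur strand at parameter t.

    t          -- position along the strand (0.0 = root, 1.0 = tip)
    seed       -- per-strand seed, so each hair gets a stable random tint
    root_dark  -- multiplier applied at the root (darkens)
    tip_bright -- multiplier applied at the tip (brightens)
    variation  -- +/- fraction of random per-strand brightness variation
    """
    # Linear root-to-tip gradient on the brightness multiplier.
    gradient = root_dark + (tip_bright - root_dark) * t
    # Seeding per strand keeps the variation stable between renders.
    rng = random.Random(seed)
    per_strand = 1.0 + rng.uniform(-variation, variation)
    return base * gradient * per_strand
```

In a real shader this role is played by hair attributes (intercept and a per-strand random value) feeding a color ramp, but the arithmetic is the same: roots sit below the base color, tips slightly above, and no two strands are exactly alike.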
When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many of its technical aspects. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics (IK). This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character.

Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot. Just like in sculpting or grooming, minor details make a big difference in posing.
Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life.
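The classic three-point setup can be sketched as coordinates around the subject. The angles, distances, and power ratios below are common starting points for a character render, not the artist's actual scene values; the helper converts an azimuth/elevation pair (subject at the origin) into XYZ:

```python
import math

def light_position(azimuth_deg: float, elevation_deg: float, distance: float):
    """Place a light on a sphere around the subject at the origin.

    azimuth_deg   -- horizontal angle (0 = directly in front of the subject)
    elevation_deg -- vertical angle above the subject's eye line
    distance      -- radius of the sphere the light sits on
    """
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (distance * math.cos(el) * math.sin(az),
            -distance * math.cos(el) * math.cos(az),
            distance * math.sin(el))

# Illustrative starting points (power in watts is arbitrary here):
key  = {"pos": light_position( 35, 30, 4.0), "power": 1000}  # main shaping light
fill = {"pos": light_position(-50, 10, 4.0), "power":  250}  # ~1:4 of key, lifts shadows
rim  = {"pos": light_position(160, 45, 4.0), "power":  600}  # from behind, separates fur from backdrop
```

The rim light earns its keep on a furry character: placed behind and above, it catches the stray strands along the silhouette, which is exactly where the micro-layers of thin, chaotic fur read best.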
In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3: just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish.

The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it. This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves.

This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!). It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn.
Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
  • Gaming Meets Streaming: Inside the Shift

After a long, busy day, you boot up your gaming device but don't quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?

This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience, turning gameplay into shared moments where gamers broadcast in real time while viewers watch, chat, learn, and discover new titles. What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming, and what does it mean for players, creators, and the industry alike? Let's find out!

Why Do Gamers Love Streaming?

So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn't just one thing: it's a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let's break it down.

Entertainment at Your Own Pace

Sometimes, you just want to relax. Maybe you're too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative: the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.

Learning and Skill Development

Streaming isn't just for laughs; it's also one of the best ways to level up your own gameplay.
Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game's mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, and builds, or to gauge whether a game suits their playstyle before buying it. Think of it as education, but way more fun.

Social Connection and Community

One of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn't like watching TV; it's like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real time, tossing emotes into the chaos, and getting shoutouts from the streamer all create a sense of belonging. For many, it's a go-to social space where friendships, inside jokes, and even fandoms grow.

Discovery of New Games and Trends

Ever found a game you now love just because you saw a streamer play it? You're not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams (Among Us, Vampire Survivors, and Only Up! were all made big by streamers).

Together, these draws have sparked a whole new kind of culture: gaming communities with their own languages, celebrities, and shared rituals.

Inside Streaming Culture

Streaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch; they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever.
Anyone with personality and dedication can build a loyal following and become a cultural influencer.

If you've ever watched a Twitch stream, you've witnessed chat culture in action: a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.

For many viewers, streams have become part of their daily routine: tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences; they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.

How Streamers Are Reshaping the Gaming Industry

While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.

The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight; just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review.
Day-one streaming success has become make-or-break for many titles.

Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction, like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres: party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.

The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch Bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers, drawing massive online audiences and blurring the line between competition and content.

Video Game Streaming in Numbers

While it's easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here's a look at the data driving the movement.

Market Size & Growth

In 2025, the global games live-streaming market is projected to generate billions in revenue.
By 2030, that figure is expected to grow further, at an annual rate of 4.32%. Average revenue per user in 2025 shows consistent monetization across platforms. China remains the single largest market, expected to bring in billions this year alone.
    Gaming Meets Streaming: Inside the Shift
    After a long, busy day, you boot up your gaming device but don’t quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience — turning gameplay into shared moments where gamers broadcast in real-time while viewers watch, chat, learn, and discover new titles.What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let’s find out!Why Do Gamers Love Streaming?So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn’t just one thing – it’s a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let’s break it down.Entertainment at Your Own PaceSometimes, you just want to relax. Maybe you’re too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.Learning and Skill DevelopmentStreaming isn’t just for laughs – it’s also one of the best ways to level up your own gameplay. 
Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game’s mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, builds, or even to understand if a game suits their playstyle before buying it. Think of it as education, but way more fun.Social Connection and CommunityOne of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn’t like watching TV – it’s like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real-time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it’s a go-to social space where friendships, inside jokes, and even fandoms grow.Discovery of New Games and TrendsEver found a game you now love just because you saw a streamer play it? You’re not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams.Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.Inside Streaming CultureStreaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever. 
Anyone with personality and dedication can build a loyal following and become a cultural influencer.If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences — they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.How Streamers Are Reshaping the Gaming IndustryWhile players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review. 
Day-one streaming success has become make-or-break for many titles.Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres — party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.Video Game Streaming in NumbersWhile it’s easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here’s a look at the data driving the movement.Market Size & GrowthIn 2025, the global Games Live Streaming market is projected to generate billion in revenue. 
By 2030, that figure is expected to reach billion, growing at an annual rate of 4.32%.The average revenue per userin 2025 stands at showing consistent monetization across platforms.China remains the single largest market, expected to bring in billion this year alone. #gaming #meets #streaming #inside #shift
    Gaming Meets Streaming: Inside the Shift
    80.lv
After a long, busy day, you boot up your gaming device but don't quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?

This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience – turning gameplay into shared moments where gamers broadcast in real time while viewers watch, chat, learn, and discover new titles. What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let's find out!

Why Do Gamers Love Streaming?

So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn't just one thing – it's a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let's break it down.

Entertainment at Your Own Pace

Sometimes, you just want to relax. Maybe you're too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.

Learning and Skill Development

Streaming isn't just for laughs – it's also one of the best ways to level up your own gameplay. Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game's mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, and builds, or to work out whether a game suits their playstyle before buying it. Think of it as education, but way more fun.

Social Connection and Community

One of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn't like watching TV – it's like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it's a go-to social space where friendships, inside jokes, and even fandoms grow.

Discovery of New Games and Trends

Ever found a game you now love just because you saw a streamer play it? You're not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams (Among Us, Vampire Survivors, Only Up! – all made big by streamers).

Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.

Inside Streaming Culture

Streaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever. Anyone with personality and dedication can build a loyal following and become a cultural influencer.

If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow makes perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.

For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences – they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.

How Streamers Are Reshaping the Gaming Industry

While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.

The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review. Day-one streaming success has become make-or-break for many titles.

Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction, like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres – party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.

The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch Bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.

Video Game Streaming in Numbers

While it's easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here's a look at the data driving the movement.

Market Size & Growth

In 2025, the global games live streaming market is projected to generate $15.32 billion in revenue. By 2030, that figure is expected to reach $18.92 billion, growing at an annual rate of 4.32%. The average revenue per user (ARPU) in 2025 stands at $10.51, showing consistent monetization across platforms. China remains the single largest market, expected to bring in $2.92 billion this year alone.

Source: Statista Market Insights, 2025

Viewership & Daily Habits

The number of users in the live game streaming market is forecast to hit 1.8 billion by 2030, with user penetration rising from 18.6% in 2025 to 22.6% by the end of the decade. In 2023, average daily time spent watching game streams rose to 2.5 hours per user, up 12% year-over-year – a clear sign of streaming becoming part of gamers' daily routines.

Sources: Statista Market Insights, 2025; SNS Insider, 2024

What People Are Watching

The most-watched games on Twitch include League of Legends, GTA V, and Counter-Strike – all regularly topping charts for both viewers and streamers. When it comes to creators, the most-streamed games are Fortnite, Valorant, and Call of Duty: Warzone, showing a strong overlap between what streamers love to broadcast and what audiences enjoy watching. In Q1 2024, Twitch users spent over 249 million hours watching new game releases, while total gaming-related content reached around 3.3 billion hours.

Sources: SullyGnome, 2025; Statista, 2025

Global Trends & Regional Platforms

China's local platforms like Huya (31M MAU) and Douyu (26.6M MAU) remain key players in the domestic market. In South Korea, following Twitch's 2023 exit, local services like AfreecaTV and newcomer Chzzk have positioned themselves as alternatives. Meanwhile, Japan and Europe continue to see steady engagement driven by strong gaming scenes and dedicated fan communities.

Source: Statista, 2025

Event Livestreaming Hits New Highs

Nintendo Direct was the most-watched gaming showcase in 2024, with an average minute audience of 2.6 million. The 2024 Streamer Awards drew over 645,000 peak viewers, highlighting how creator-focused events now rival traditional game showcases.

Source: Statista, 2025

As game streaming continues to evolve, its role in the broader gaming ecosystem is becoming clearer. It hasn't replaced traditional gameplay – instead, it's added a new dimension to how people engage with games, offering a space for connection, discovery, and commentary. For players, creators, and industry leaders alike, streaming now sits alongside playing as a core part of the modern gaming experience – one that continues to grow and shift with the industry itself.
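As a quick sanity check on the market figures quoted above, compounding the 2025 revenue at the stated growth rate does land on the 2030 projection. This is only a back-of-the-envelope sketch using the article's own numbers:

```python
# Back-of-the-envelope check of the quoted market figures:
# $15.32B in 2025, growing at roughly 4.32% per year through 2030.
revenue_2025 = 15.32   # USD billions (Statista figure quoted above)
cagr = 0.0432          # 4.32% annual growth rate
years = 5              # 2025 -> 2030

revenue_2030 = revenue_2025 * (1 + cagr) ** years
print(f"{revenue_2030:.2f}")  # ≈ 18.93, within rounding of the quoted $18.92 billion
```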
  • Creating a Detailed Helmet Inspired by Fallout Using Substance 3D

Introduction

Hi! My name is Pavel Vorobyev, and I'm a 19-year-old 3D Artist specializing in texturing and weapon creation for video games. I've been working in the industry for about three years now. During this time, I've had the opportunity to contribute to several exciting projects, including Arma, DayZ, Ratten Reich, and a next-gen sci-fi shooter (currently under NDA). Here's my ArtStation portfolio.

My journey into 3D art began in my early teens, around the age of 13 or 14. At some point, I got tired of just playing games and started wondering: "How are they actually made?" That question led me to explore game development. I tried everything – level design, programming, game design – but it was 3D art that truly captured me.

I'm entirely self-taught. I learned everything from YouTube, tutorials, articles, and official documentation, gathering knowledge piece by piece. Breaking into the commercial side of the industry wasn't easy: there were a lot of failures, no opportunities, and no support. At one point, I even took a job at a metallurgical plant. But I kept pushing forward, kept learning and improving my skills in 3D. Eventually, I got my first industry offer – and that's when my real path began.

Today, I continue to grow, constantly experimenting with new styles, tools, and techniques. For me, 3D isn't just a profession – it's a form of self-expression and a path toward my dream. My goal is to build a strong career in the game industry and eventually move into cinematic storytelling in the spirit of Love, Death & Robots.

Astartes YouTube channel

I also want to inspire younger artists and show how powerful texturing can be as a creative tool. To demonstrate that, I'd love to share my personal project PU – Part 1, which reflects my passion and approach to texture art.

In this article, I'll be sharing my latest personal project – a semi-realistic sci-fi helmet that I created from scratch, experimenting with both form and style. It's a personal exploration where I aimed to step away from traditional hyperrealism and bring in a touch of artistic expression.

Concept & Project Idea

The idea behind this helmet project came from a very specific goal – to design a visually appealing asset with rich texture variation and achieve a balance between stylization and realism. I wanted to create something that looked believable, yet had an artistic flair. Since I couldn't find any fitting concepts online, I started building the design from scratch in my head. I eventually settled on creating a helmet as the main focus of the project. For visual direction, I drew inspiration from post-apocalyptic themes and the gritty aesthetics of Fallout and Warhammer 40,000.

Software & Tools Used

For this project, I used Blender, ZBrush, Substance 3D Painter, Marmoset Toolbag 5, Photoshop, and RizomUV. I created the low-poly mesh in Blender and developed the concept and high-poly sculpt in ZBrush. In Substance 3D Painter, I worked on the texture concept and final texturing. Baking and rendering were done in Marmoset Toolbag, and I used Photoshop for some adjustments to the bake. UV unwrapping was handled in RizomUV.

Modeling & Retopology

I began the development process by designing the concept based on my earlier references – Fallout and Warhammer 40,000. The initial blockout was done in ZBrush, and from there, I started refining the shapes and details to create something visually engaging and stylistically bold.

After completing the high-poly model, I moved on to the long and challenging process of retopology. Since I originally came from a weapons-focused background, I applied the knowledge I gained from modeling firearms. I slightly increased the polycount to achieve a cleaner and more appealing look in the final render – reducing visible faceting. My goal was to strike a balance between visual quality and a game-ready asset.

UV Mapping & Baking

Next, I moved on to UV mapping. There's nothing too complex about this stage, but since my goal was to create a game-ready asset, I made extensive use of overlaps. I did the UVs in RizomUV. The most important part is to align the UV shells into clean strips and unwrap cylinders properly into straight lines.

Once the UVs were done, I proceeded to bake the normal and ambient occlusion maps. At this stage, the key is having clean UVs and solid retopology – if those are in place, the bake goes smoothly.

Texturing: Concept & Workflow

Now we move on to the most challenging stage – texturing. I aimed to present the project in a hyperrealistic style with a touch of stylization. This turned out to be quite difficult, and I went through many iterations. The most important part of this phase was developing a solid texture concept: rough decals, color combinations, and overall material direction. Without that foundation, it makes no sense to move forward with the texturing. After a long process of trial and error, I finally arrived at results I was satisfied with.

Then I followed my pipeline:

1. Working on the base materials
2. Storytelling and damage
3. Decals
4. Spraying, dust, and dirt

Working on the Base Materials

When working on the base materials, the main thing is to work with the physical properties and texture. You need to extract the maximum quality from the generators before manual processing. The idea was to create the feeling of an old, heavy helmet that had lived its life and had previously been painted a different color – to make it battered and, in a sense, rotten.

It is important to pay attention to noise maps – Dirt 3, Dirt 6, White Noise, Flakes – and add the feel of old metal with custom normal maps. I also mixed in photo textures for a special charm.

Phototexture

Custom Normal Map Texture

Storytelling & Damage

Gradients play an important role in the storytelling stage. They make the object artistically dynamic and beautiful, adding individual shades that bring the helmet to life.

Everything else is done manually. I found a bunch of photos of old World War II helmets and made damage alphas from them in Photoshop. I drew the damage with those alphas, trying to clearly separate the material into old paint, new paint, rust, and bare metal.

I did the rust using MatFX Rust from the standard Substance 3D Painter library. I drew beautiful patterns using paint in multiply mode – this quickly helped to recreate the rust effect. Metal damage and old paint were more difficult: due to the large number of overlaps in the helmet, I had to carefully draw patterns, minimizing the visibility of the overlaps.

Decals

I drew the decals carefully, sticking to the concept, which added richness to the texture.

Spray Paint & Dirt

For spray paint and dirt, I used a long-established weapon template consisting of dust particles, sand particles, and spray paint. I analyzed references and applied them to crevices and logical places where dirt could accumulate.

Rendering & Post-Processing

I rendered in Marmoset Toolbag 5 using a new rendering approach that I developed together with the team. The essence of the method is to simulate "RAW frames." Since Marmoset does not have such functions, I worked with the 32-bit EXR format, which significantly improves the quality of the render: the shadows are smooth, without artifacts or broken gradients. I assembled the scene using Quixel Megascans. After rendering, I did post-processing in Photoshop using the Camera Raw filter.

Conclusion & Advice for Beginners

That's all. For beginners, or those who have been unsuccessful in the industry for a long time, I advise you to follow your dream and not listen to anyone else. Success is a matter of time and skill! Talent is not something you are born with; it is something you develop. Work on yourself and your work, put your heart into it, and you will succeed!

Pavel Vorobyev, Texture Artist

Interview conducted by Gloria Levine
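The point about 32-bit EXR output avoiding "broken gradients" can be illustrated with a small sketch of my own (not part of the artist's actual pipeline): quantizing a smooth ramp to 8 bits collapses it to at most 256 distinct levels, which is exactly where visible banding in shadows comes from, while float data keeps the ramp continuous.

```python
# Illustrative sketch: why 8-bit output bands while 32-bit float does not.
# A smooth 0..1 ramp with 10,000 samples stands in for a render gradient.
steps = 10_000
gradient = [i / (steps - 1) for i in range(steps)]    # float "render" data

# Quantize to 8 bits per channel, as a PNG/JPEG export would.
eight_bit = [round(v * 255) / 255 for v in gradient]

distinct_float = len(set(gradient))   # every sample survives as a unique shade
distinct_8bit = len(set(eight_bit))   # at most 256 shades remain -> banding

print(distinct_float, distinct_8bit)
```

The 8-bit version steps through discrete plateaus, which is why float formats like EXR hold up far better under the aggressive grading described above.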
  • Romeo Is A Dead Man Is Grasshopper Manufacture Firing On All Cylinders

Romeo Is A Dead Man is Grasshopper Manufacture's latest action game, created by Suda 51 and Ren Yamazaki. It's an all-new IP from the developer, and it feels like Grasshopper is firing on all cylinders. Kurt Indovina got hands-on with the game and fought through zombies, monsters, and a giant naked headless woman. Here's the preview.
  • Lost In Space Limited Edition 4K Blu-Ray Preorders Are 50% Off

Lost in Space Limited Edition (4K Blu-ray) – $25 (was $50) | Releases September 2 | Preorder at Amazon

Amazon is offering a 50% discount on Arrow Video's upcoming 4K Blu-ray restoration of Lost in Space. Lost in Space Limited Edition is available to preorder for only $25 (was $50) ahead of its September 2 release. Directed by Stephen Hopkins, the sci-fi film starring Gary Oldman and William Hurt wasn't warmly received in 1998. To be fair, the original 1960s TV series it was based on wasn't a critical success either. Nevertheless, the film and series were commercial successes, and both are considered cult classics today. Netflix even created a reimagining of the original series back in 2018 that ran for three seasons. If you enjoyed the Netflix series but haven't watched the film, the new 4K Blu-ray edition should be the best way to watch it going forward. For longtime Lost in Space fans, the Limited Edition, like all of Arrow Video's restorations of classic films, looks like a cool collector's item.

Continue Reading at GameSpot
  • Joseph Jegede’s Journey into Environment Art & Approach to the Emperia x 80 Level Contest

Introduction

Hello, I am Joseph Jegede, born in Nigeria; I lived and studied in London, which is also where I started my career as a games developer. Before my game dev career, I was making websites and graphic designs as a hobby but felt an urge to make static images animated and respond to user input. I studied Computer Science at London Metropolitan University for my bachelor's degree.

I worked at Tivola Publishing GmbH, where we developed:

Wildshade: Unicorn Champions (PlayStation, Xbox, Nintendo Switch) – Console Trailer: YouTube
Wildshade Fantasy Horse Races – iOS: App Store, Android: Google Play

This project was initially developed for mobile platforms and was later ported to consoles, which recently launched. I also worked on a personal mobile game project, Shooty Gun, released May 17, 2024.

Becoming an Environment Artist

With the release of Unreal Engine 5, the ease of creating and sculpting terrain, then using Blueprints to quickly add responsive grass, dirt, and rock materials to my levels, made environment art very enticing and accessible for me. Being a programmer, I often felt the urge to explore more aspects of game development, since the barrier to entry has been completely shattered. I wouldn't consider myself a full-blown artist just yet. I first learned Blender to build some basic 3D models. We can call it "programmer art" – just enough to get a prototype playable.

The main challenge was that most 3D software required subscriptions, which wasn't ideal for someone just learning without commercial intent. Free trials helped at first, but I eventually ran out of emails to renew them. Blender was difficult to grasp initially, but I got through it with the help of countless YouTube tutorials. Whenever I wanted to build a model for a prototype, I would find a tutorial making something similar and follow along. On YouTube, I watched and subscribed to Stylized Station. I also browsed ArtStation regularly for references and inspiration for the types of levels I wanted to build.

Environment art was a natural next step in my game dev journey. While I could program gameplay and other systems, I lacked the ability to build engaging levels to make my games feel polished. In the kinds of games I want to create, players will spend most of their time exploring environments. They need to look good and contain landmarks that resonate with the player. My main sources of inspiration are games I've played. Sometimes I want to recreate the worlds I've explored. I often return to ArtStation for inspiration and references.

Deep Dive Into the Art-To-Experience Contest Submission

The project I submitted was originally made for the 80 Level x Emperia contest. Most of the assets were provided as part of the contest. The main character was created in Blender, and the enemy model was a variant of the main character with some minor changes and costume modifications. Animations were sourced from Mixamo and imported into Unreal Engine 5. Texturing and painting were done in Adobe Substance 3D Painter, and materials were created in UE5 from the exported textures.

Before creating the scene in UE5, I gathered references from ArtStation and Google Images. These were used to sculpt a terrain heightmap. Once the level's starting point and boss area were defined, I added bamboo trees and planned walkable paths around the map. I created models in Blender and exported them to Substance 3D Painter. Using the Auto UV Unwrap tool, I prepared the models for texturing. Once painted, I exported the textures and applied them to the models in UE5. This workflow was smooth and efficient.

In UE5, I converted any assets meant for level placement into foliage types. This allowed for both random distribution and precise placement using the foliage painter tool, which sped up design significantly. UE5 lighting looked great out of the box. I adjusted the directional light, fog, and shadows to craft a forest atmosphere using the built-in day/night system. I was able to use Emperia's Creator Tools plug-in to set up my scene. The great thing about the tutorial is that it's interactive – as I complete the steps in the UE5 editor, the tutorial window updates and reassures me that I've completed the task correctly. This made the setup process easier and faster. Setting up panoramas was also simple – pretty much drag and drop.

Advice For Beginners

One major issue is the rise of AI tools that generate environment art. These tools may discourage beginners who fear they can't compete. If people stop learning because they think AI will always outperform them, the industry may suffer a creativity drought. My advice to beginners:

Choose a game engine you're comfortable with – Unreal Engine, Unity, etc.
Make your idea exist first, polish later. Use free assets from online stores to prototype.
Focus on creating game levels with available resources. The important part is getting your world out of your head and into a playable form.
Share your work with a community when you're happy with it.
Have fun creating your environment – if you enjoy it, others likely will too.

Joseph Jegede, Game Developer

Interview conducted by Theodore McKenzie
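The "random distribution" half of a foliage painter can be sketched in a few lines. This is an illustrative Python toy, not UE5 API: the names `scatter_foliage` and `density_fn` are invented here, standing in for the density a foliage brush paints over the terrain.

```python
import random

def scatter_foliage(count, area, density_fn, seed=0):
    # Rejection sampling: drop a candidate anywhere in the area and keep
    # it with probability equal to the painted density at that spot, so
    # instances cluster where the brush painted heaviest.
    rng = random.Random(seed)
    placed = []
    while len(placed) < count:
        x = rng.uniform(0, area[0])
        y = rng.uniform(0, area[1])
        if rng.random() < density_fn(x, y):
            placed.append((x, y))
    return placed

# Toy density map: bamboo gets denser toward the center of a 100x100 map.
def center_falloff(x, y):
    return max(0.05, 1 - ((x - 50) ** 2 + (y - 50) ** 2) / 2500)

points = scatter_foliage(200, (100, 100), center_falloff)
print(len(points))  # 200 instances, clustered toward the map center
```

Converting static meshes to foliage types buys exactly this kind of density-driven scattering for free, while still allowing single-instance precise placement where a composition needs it.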
  • Kiss takes the stage in World of Tanks’ Metal Fest

We love summer music festivals, so here at Wargaming, our team at World of Tanks Modern Armor has made it a mission to bring you a hard-rockin' annual music event that truly shakes the battlefield: Metal Fest. New tanks, new 3D Commanders, new Challenges and events: they're all part of what Metal Fest offers each summer. But this is our third year of the event, coming to you on PS4 and PS5 starting August 26, and we knew we had to go bigger and louder than ever. To borrow some lyrics you might know, we wanted the best – and we got the best. This year, our featured act is none other than the legendary band Kiss! Not only that; we've got the actual voices of core members Gene Simmons and Paul Stanley in the game. This is how it all shook out.


    Shout It Out Loud

Ever since the band's shows at The Daisy in March 1973, when they debuted the character designs they'd become known for, Kiss has been more than a group of skilled musicians. They've been icons and personas. So even though Metal Fest 2025 features four new Kiss-inspired Premium tanks, we knew specifically that the 3D Commanders representing the four classic Kiss personas (The Demon, The Starchild, The Spaceman, and The Catman) had to be absolutely right and larger than life. Fortunately, as World of Tanks' senior producer JJ Bakken explains, the band was all in. "Gene [Simmons] and Paul [Stanley] were gracious enough to give us some of their time for the game, as they represent the highest profile characters in Kiss … Both Gene and Paul saw all our concepts as we created them for characters and tanks. [They] brought the idea to us to really lean into the fantastical elements of each character."

Our art team worked to get those fantastical elements down, whether we're talking the feline claws and nimble animations given to The Catman 3D Commander or the enormous pair of bat-like wings that Tanks' art director Andy Dorizas suggested for The Demon 3D Commander. But as any tanker knows, when it comes to our 3D Commanders, it's not just about the look. Our players' favorite Commanders speak with custom-written voiceover lines, so of course that's the case for all four of our Kiss Commanders. "Kiss themselves made the decision to have Paul and Gene featured as voices in the game," says the game's audio director, Brendan Blewett. "They were very particular in that the Kiss 'characters' are just that – characters, not real-life individuals. Each of them has traits and those are portrayed, in the instance of The Starchild and The Demon, by Paul and Gene."

So what was it like, working with legendary musicians to bring the voices of their world-famous characters to our console battlefield? "Working with Paul and Gene was an absolute blast," says Blewett. "These guys are obviously seasoned studio vets and really made the sessions fun and engaging." He adds, "Gene lived up to his reputation as a master of trivia and kept us entertained between takes, regaling us with stories from the road and factoids. Paul was absolutely a gracious, friendly individual who belted out an incredibly intense vocal performance and kept it going for the whole session. We even quipped that it was 'like six months of shows in two hours.' Impressive!" As for the voiceover for The Spaceman and The Catman, tankers and Kiss fans can rest (rock out) assured that these Commanders have received the same attention to detail. According to Blewett, "We worked with Kiss to understand the character profiles of The Catman and The Spaceman and came up with casting guidelines from there. For instance, The Catman is a smaller guy, witty and agile, while The Spaceman is older and wiser. The word 'sagacious' was used in session to describe the personality of The Spaceman."

War Machine(s)

If you think the Kiss 3D Commanders sound impressive, be sure to recruit them during Metal Fest and pair them up with our four Premium Kiss tanks, also inspired by and named after the characters: The Demon, The Starchild, The Spaceman, and The Catman.

    Each of these tanks not only takes visual inspiration from Kiss; it also has abilities inspired by a specific band member’s persona. You’d better believe that The Demon is a tank that mounts a flamethrower!

    All of this is in addition to the Challenges, special event battles, daily login rewards, and more that Metal Fest offers. Rock out while you can, and don’t miss any of it—Metal Fest takes place in World of Tanks Modern Armor from August 26 through September 15 on PS4 and PS5!
    #kiss #takes #stage #world #tanks
    Kiss takes the stage in World of Tanks’ Metal Fest
    We love summer music festivals, so here at Wargaming, our team at World of Tanks Modern Armor has made it a mission to bring you a hard-rockin’ annual music event that truly shakes the battlefield: Metal Fest.New tanks, new 3D Commanders, new Challenges and events: they’re all part of what Metal Fest offers each summer. But this is our third year of the event, coming to you on PS4 and PS5 starting August 26. We knew we had to go bigger and louder than ever.To borrow some lyrics you might know, we wanted the best—and we got the best.This year, our featured act is none other than the legendary band Kiss! Not only that; we’ve got the actual voices of core members Gene Simmons and Paul Stanley in the game.This is how it all shook out.  Play Video Shout It Out Loud Ever since the band’s shows at The Daisy in March 1973, when they debuted the character designs they’d become known for, Kiss has been more than a group of skilled musicians. They’ve been icons and personas.So even though Metal Fest 2025 features four new Kiss-inspired Premium tanks, we knew specifically that the 3D Commanders representing the four classic Kiss personashad to be absolutely right and larger than life.Fortunately, as World of Tanks’ senior producer JJ Bakken explains, the band was all in. “Geneand Paulwere gracious enough to give us some of their time for the game, as they represent the highest profile characters in Kiss … Both Gene and Paul saw all our concepts as we created them for characters and tanks.brought the idea to us to really lean into the fantastical elements of each character.”  Our art team worked to get those fantastical elements down, whether we’re talking the feline claws and nimble animations given to The Catman 3D Commander or the enormous pair of bat-like wings that Tanks’ art director Andy Dorizas suggested for The Demon 3D Commander.But as any tanker knows, when it comes to our 3D Commanders, it’s not just about the look. 
Our players’ favorite Commanders speak with custom-written voiceover lines, so of course that’s the case for all four of our Kiss Commanders.“Kiss themselves made the decision to have Paul and Gene featured as voices in the game,” says the game’s audio director, Brendan Blewett. “They were very particular in that the Kiss ‘characters’ are just that—characters, not real-life individuals. Each of them has traits and those are portrayed, in the instance of The Starchild and The Demon, by Paul and Gene.”  So what was it like, working with legendary musicians to bring the voices of their world-famous characters to our console battlefield?“Working with Paul and Gene was an absolute blast,” says Blewett. “These guys are obviously seasoned studio vets and really made the sessions fun and engaging.”He adds, “Gene lived up to his reputation as a master of trivia and kept us entertained between takes regaling us with stories from the road and factoids. Paul was absolutely a gracious, friendly individual and belted out an incredibly intense vocal performance and kept it going for the whole session. We even quipped that it was ‘like six months of shows in two hours.’ Impressive!”As for the voiceover for The Starman and The Catman, tankers and Kiss fans should rest rock out assured that these Commanders have received the same attention to detail. According to Blewett, “We worked with Kiss to understand the character profiles of The Catman and The Spaceman and came up with casting guidelines from there. For instance, The Catman is a smaller guy, witty and agile, while The Spaceman is older and wiser. The word ‘sagacious’ was used in session to describe the personality of The Spaceman.” War MachineIf you think the Kiss 3D Commanders sound impressive, be sure to recruit them during Metal Fest, and pair them up with our four Premium Kiss tanks, also inspired and named after the characters: The Demon, The Starchild, The Spaceman, and The Catman. 
Each of these tanks not only takes visual inspiration from Kiss; it also has abilities inspired by a specific band member’s persona. You’d better believe that The Demon is a tank that mounts a flamethrower! All of this is in addition to the Challenges, special event battles, daily login rewards, and more that Metal Fest offers. Rock out while you can, and don’t miss any of it—Metal Fest takes place in World of Tanks Modern Armor from August 26 through September 15 on PS4 and PS5! #kiss #takes #stage #world #tanks
    Kiss takes the stage in World of Tanks’ Metal Fest
    blog.playstation.com
We love summer music festivals, so here at Wargaming, our team at World of Tanks Modern Armor has made it a mission to bring you a hard-rockin’ annual music event that truly shakes the battlefield: Metal Fest.

New tanks, new 3D Commanders, new Challenges and events: they’re all part of what Metal Fest offers each summer. But this is our third year of the event, coming to you on PS4 and PS5 starting August 26. We knew we had to go bigger and louder than ever. To borrow some lyrics you might know, we wanted the best—and we got the best.

This year, our featured act is none other than the legendary band Kiss! Not only that; we’ve got the actual voices of core members Gene Simmons and Paul Stanley in the game. This is how it all shook out.

Shout It Out Loud

Ever since the band’s shows at The Daisy in March 1973, when they debuted the character designs they’d become known for, Kiss has been more than a group of skilled musicians. They’ve been icons and personas. So even though Metal Fest 2025 features four new Kiss-inspired Premium tanks, we knew the 3D Commanders representing the four classic Kiss personas (The Demon, The Starchild, The Spaceman, and The Catman) had to be absolutely right and larger than life.

Fortunately, as World of Tanks’ senior producer JJ Bakken explains, the band was all in. “Gene [Simmons] and Paul [Stanley] were gracious enough to give us some of their time for the game, as they represent the highest profile characters in Kiss … Both Gene and Paul saw all our concepts as we created them for characters and tanks. [They] brought the idea to us to really lean into the fantastical elements of each character.”

Our art team worked to get those fantastical elements down, whether we’re talking the feline claws and nimble animations given to The Catman 3D Commander or the enormous pair of bat-like wings that Tanks’ art director Andy Dorizas suggested for The Demon 3D Commander.

But as any tanker knows, when it comes to our 3D Commanders, it’s not just about the look. Our players’ favorite Commanders speak with custom-written voiceover lines, so of course that’s the case for all four of our Kiss Commanders.

“Kiss themselves made the decision to have Paul and Gene featured as voices in the game,” says the game’s audio director, Brendan Blewett. “They were very particular in that the Kiss ‘characters’ are just that—characters, not real-life individuals. Each of them has traits, and those are portrayed, in the instance of The Starchild and The Demon, by Paul and Gene.”

So what was it like, working with legendary musicians to bring the voices of their world-famous characters to our console battlefield?

“Working with Paul and Gene was an absolute blast,” says Blewett. “These guys are obviously seasoned studio vets and really made the sessions fun and engaging.” He adds, “Gene lived up to his reputation as a master of trivia and kept us entertained between takes, regaling us with stories from the road and factoids. Paul was absolutely a gracious, friendly individual who belted out an incredibly intense vocal performance and kept it going for the whole session. We even quipped that it was ‘like six months of shows in two hours.’ Impressive!”

As for the voiceover for The Spaceman and The Catman, tankers and Kiss fans can rest assured that these Commanders have received the same attention to detail. According to Blewett, “We worked with Kiss to understand the character profiles of The Catman and The Spaceman and came up with casting guidelines from there. For instance, The Catman is a smaller guy, witty and agile, while The Spaceman is older and wiser. The word ‘sagacious’ was used in session to describe the personality of The Spaceman.”

War Machine(s)

If you think the Kiss 3D Commanders sound impressive (and yes, I’m biased, but they are), be sure to recruit them during Metal Fest and pair them up with our four Premium Kiss tanks, also inspired by and named after the characters: The Demon, The Starchild, The Spaceman, and The Catman. Each of these tanks not only takes visual inspiration from Kiss; it also has abilities inspired by a specific band member’s persona. You’d better believe that The Demon is a tank that mounts a flamethrower!

All of this is in addition to the Challenges, special event battles, daily login rewards, and more that Metal Fest offers. Rock out while you can, and don’t miss any of it—Metal Fest takes place in World of Tanks Modern Armor from August 26 through September 15 on PS4 and PS5!
  • Sonic Racing: CrossWorlds – Pac-Man crossover and Open Network Test details

    Rev your engines, racers, there’s new Sonic Racing: CrossWorlds news that just dropped at Gamescom. Over the past few days, Sonic has dominated the newsfeeds with not one, but two trailer drops, featuring huge reveals. 


    The first was a crossover no one saw coming – two iconic retro gaming heroes coming together in a universe-shattering collaboration: Sonic the Hedgehog and Pac-Man playable in each other’s upcoming games. Sonic joins Pac-Man in Pac-Man World 2 Re-Pac for an explosive birthday celebration, featuring Sonic-inspired levels, costumes, and more. 

    Then, Pac-Man and Team Ghost put their racing skills to the test in Sonic Racing: CrossWorlds as part of the growing roster of guest characters included in the Season Pass. Players can venture through Pac-Village, eat up Pac-Dots, and escape the iconic Maze in the Pac-Man Mobile.


    The all-new Competition Trailer showcased even more Sonic Racing action and detailed the various game modes players can look forward to experiencing once the game is released. In Sonic Racing: CrossWorlds, there are tons of ways to compete:

    Grand Prix – Compete solo or with friends in local split-screen co-op for first place in one of 7 cups (4 races each). Racers are awarded points based on their placement at the end of each race, and the racer with the most points at the end wins.

    World Match – Test your skills and compete online against 11 other players. Earn Rank Points, increase your World Rank, and aim for the top.

    Friend Match – Play with up to 11 other players online in custom lobbies where you can control all aspects of a race such as Speed, Team Size, Course/CrossWorlds, AI Difficulty, Frenzy Gates, Items, and Rule Sets.

    Race Park – Change up the rules and teams online and offline in this party mode that features six unique race formats.

    Time Trial – Compete for the best time on individual courses and aim for the top of the Leaderboard Rankings.

    Custom Match – Play with up to 4 players offline split-screen where you can control all aspects of a race such as Speed, Team Size, Courses/CrossWorlds, AI Difficulty, Frenzy Gates, Items, and Rule Sets.

    Eagle-eyed fans will also notice that the trailer revealed a few new tracks based on Sonic’s most recent adventures. Kronos Island from Sonic Frontiers makes its debut, featuring iconic ancient architecture strewn across the open fields. Northstar Island from Sonic Superstars arrives colorfully on the scene, with many nods to stage obstacles, local fauna, and even a… giant mechanical dinosaur whale? And Shadow fans, rejoice. The White Space from Shadow Generations has been recreated in glorious detail, down to the Doom’s Eye looming menacingly overhead. 

    On the racetrack, there’s no shortage of competitive trash talk. Just as Sonic, Shadow, Espio and Jet are shown jockeying for first place and taking their rivalries to the next level, players can look forward to over a thousand voice lines and interactions between their favorite Sonic characters. Ever wonder what a race between Amy and Big the Cat would sound like? You may find out. 

    On top of all that, it was announced that fans can get their hands on Sonic Racing: CrossWorlds a little sooner than expected during the Open Network Test. This free, limited-time event will take place from August 29 to September 1. Players on PS5 will be able to race online with 12 iconic Sonic characters, compete on 16 courses (9 main courses and 7 CrossWorlds), and mix and match 42 gadgets to create the ultimate racing machine. Put your driving skills to the test against players worldwide. There might even be a couple of surprises in store…

    Come race on our level when Sonic Racing: CrossWorlds zooms onto PS5 and PS4 on September 25!
    Source: blog.playstation.com
  • New Lightweight AI Model for Project G-Assist Brings Support for 6GB NVIDIA GeForce RTX and RTX PRO GPUs

    At Gamescom, NVIDIA is releasing its first major update to Project G‑Assist — an experimental on-device AI assistant that allows users to tune their NVIDIA RTX systems with voice and text commands.
    The update brings a new AI model that uses 40% less VRAM, improves tool-calling intelligence and extends G-Assist support to all RTX GPUs with 6GB or more VRAM, including laptops. Plus, a new G-Assist Plug-In Hub enables users to easily discover and download plug-ins to enable more G-Assist features.
    NVIDIA also announced a new path-traced particle system, coming in September to the NVIDIA RTX Remix modding platform, that brings fully simulated physics, dynamic shadows and realistic reflections to visual effects.
    In addition, NVIDIA named the winners of the NVIDIA and ModDB RTX Remix Mod Contest. Check out the winners and finalist RTX mods in the RTX Remix GeForce article.
    G-Assist Gets Smarter, Expands to More RTX PCs
    The modern PC is a powerhouse, but unlocking its full potential means navigating a complex maze of settings across system software, GPU and peripheral utilities, control panels and more.
    Project G-Assist is a free, on-device AI assistant built to cut through that complexity. It acts as a central command center, providing easy access to functions previously buried in menus through voice or text commands. Users can ask the assistant to:

    Run diagnostics to optimize game performance
    Display or chart frame rates, latency and GPU temperatures
    Adjust GPU or even peripheral settings, such as keyboard lighting

    The G-Assist update also introduces a new, significantly more efficient AI model that’s faster and uses 40% less memory while maintaining response accuracy. The more efficient model means that G-Assist can now run on all RTX GPUs with 6GB or more VRAM, including laptops.
    Getting started is simple: install the NVIDIA app and the latest Game Ready Driver on Aug. 19, download the G-Assist update from the app’s home screen and press Alt+G to activate.
    Another G-Assist update coming in September will introduce support for laptop-specific commands for features like NVIDIA BatteryBoost and Battery OPS.
    Introducing the G-Assist Plug-In Hub With Mod.io
    NVIDIA is collaborating with mod.io to launch the G-Assist Plug-In Hub, which allows users to easily access G-Assist plug-ins, as well as discover and download community-created ones.
    With the latest update, a mod.io plug-in also lets users ask G-Assist directly which new plug-ins are available in the hub and install them using natural language.
    The recent G-Assist Plug-In Hackathon showcased the incredible creativity of the G-Assist community. Some finalists include:

    Omniplay — allows gamers to use G-Assist to research lore from online wikis or take notes in real time while gaming
    Launchpad — lets gamers set, launch and toggle custom app groups on the fly to boost productivity
    Flux NIM Microservice for G-Assist — allows gamers to easily generate AI images from within G-Assist, using on-device NVIDIA NIM microservices

    The winners of the hackathon will be announced on Wednesday, Aug. 20.
    Building custom plug-ins is simple. They’re based on a foundation of JSON and Python scripts — and the Project G-Assist Plug-In Builder helps further simplify development by enabling users to code plug-ins with natural language.
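    To make the JSON-plus-Python structure concrete, here is a minimal, purely illustrative sketch of a plug-in: a JSON manifest declaring the functions the assistant can invoke, and a Python dispatcher that routes a parsed command to a handler. The manifest fields, the `greet` function, and `handle_command` are all hypothetical names for illustration; real G-Assist plug-ins follow NVIDIA's own schema and entry-point conventions, documented with the Plug-In Builder.

    ```python
    import json

    # Hypothetical manifest: declares which functions the assistant may call.
    # Field names here are illustrative, not NVIDIA's actual schema.
    MANIFEST = json.dumps({
        "name": "hello-plugin",
        "description": "Replies to a greeting command.",
        "functions": [
            {"name": "greet", "description": "Say hello to the user."}
        ],
    })

    def handle_command(command: str, params: dict) -> str:
        """Dispatch a parsed assistant command to its handler (sketch only)."""
        manifest = json.loads(MANIFEST)
        known = {f["name"] for f in manifest["functions"]}
        if command not in known:
            # Unknown commands are rejected against the declared manifest.
            return f"Unknown command: {command}"
        if command == "greet":
            return f"Hello, {params.get('user', 'player')}!"
        return ""

    print(handle_command("greet", {"user": "modder"}))  # → Hello, modder!
    ```

    The point of the split is that the JSON side tells the assistant *what* the plug-in can do, while the Python side implements *how*, which is also what lets a natural-language tool like the Plug-In Builder generate either half.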
    Mod It Like It’s Hot With RTX Remix 
    Classic PC games remain beloved for their unforgettable stories, characters and gameplay — but their dated graphics can be a barrier for new and longtime players.
    NVIDIA RTX Remix enables modders to revitalize these timeless titles with the latest NVIDIA gaming technologies — bridging nostalgic gameplay with modern visuals.
    Since the platform’s release, the RTX Remix modding community has grown with over 350 active projects and over 100 mods released. The mods span a catalog of beloved games like Half-Life 2, Need for Speed: Underground, Portal 2 and Deus Ex — and have amassed over 2 million downloads.

    In May, NVIDIA invited modders to participate in the NVIDIA and ModDB RTX Remix Mod Contest for a chance to win $50,000 in cash prizes. At Gamescom, NVIDIA announced the winners:

    Best Overall RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams
    Best Use of RTX in a Mod Winner: Painkiller RTX Remix, by Binq_Adams

    Runner-Up: Vampire: The Masquerade – Bloodlines – RTX Remaster, by Safemilk

    Most Complete RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams

    Runner-Up: I-Ninja Remixed, by g.i.george333

    Community Choice RTX Mod Winner: Call of Duty 2 RTX Remix of Carentan, by tadpole3159

    These modders tapped RTX Remix and generative AI to bring their creations to life — from enhancing textures to quickly creating images and 3D assets.
    For example, the Merry Pencil Studios modder team used a workflow that seamlessly connected RTX Remix and ComfyUI, allowing them to simply select textures in the RTX Remix viewport and, with a single click in ComfyUI, restore them.
    The results are stunning, with each texture meticulously recreated with physically based materials layered with grime and rust. With a fully path-traced lighting system, the game’s gothic horror atmosphere has never felt more immersive to play through.
    All mods submitted to the RTX Remix Modding Contest, as well as 100 more Remix mods, are available to download from ModDB. For a sneak peek at RTX Remix projects under active development, check out the RTX Remix Showcase Discord server.
    Another RTX Remix update coming in September will allow modders to create new particles that match the look of those found in modern titles. This opens the door for over 165 RTX Remix-compatible games to have particles for the first time.
    To get started creating RTX mods, download NVIDIA RTX Remix from the home screen of the NVIDIA app. Read the RTX Remix article to learn more about the contest and winners.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Join NVIDIA’s Discord server to connect with community developers and AI enthusiasts for discussions on what’s possible with RTX AI.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    Source: blogs.nvidia.com
    At Gamescom, NVIDIA is releasing its first major update to Project G‑Assist — an experimental on-device AI assistant that allows users to tune their NVIDIA RTX systems with voice and text commands. The update brings a new AI model that uses 40% less VRAM, improves tool-calling intelligence and extends G-Assist support to all RTX GPUs with 6GB or more VRAM, including laptops. Plus, a new G-Assist Plug-In Hub enables users to easily discover and download plug-ins to enable more G-Assist features. NVIDIA also announced a new path-traced particle system, coming in September to the NVIDIA RTX Remix modding platform, that brings fully simulated physics, dynamic shadows and realistic reflections to visual effects. In addition, NVIDIA named the winners of the NVIDIA and ModDB RTX Remix Mod Contest. Check out the winners and finalist RTX mods in the RTX Remix GeForce article. G-Assist Gets Smarter, Expands to More RTX PCs The modern PC is a powerhouse, but unlocking its full potential means navigating a complex maze of settings across system software, GPU and peripheral utilities, control panels and more. Project G-Assist is a free, on-device AI assistant built to cut through that complexity. It acts as a central command center, providing easy access to functions previously buried in menus through voice or text commands. Users can ask the assistant to: Run diagnostics to optimize game performance Display or chart frame rates, latency and GPU temperatures Adjust GPU or even peripheral settings, such as keyboard lighting The G-Assist update also introduces a new, significantly more efficient AI model that’s faster and uses 40% less memory while maintaining response accuracy. The more efficient model means that G-Assist can now run on all RTX GPUs with 6GB or more VRAM, including laptops. Getting started is simple: install the NVIDIA app and the latest Game Ready Driver on Aug. 19, download the G-Assist update from the app’s home screen and press Alt+G to activate. 
Another G-Assist update coming in September will introduce support for laptop-specific commands for features like NVIDIA BatteryBoost and Battery OPS. Introducing the G-Assist Plug-In Hub With Mod.io NVIDIA is collaborating with mod.io to launch the G-Assist Plug-In Hub, which allows users to easily access G-Assist plug-ins, as well as discover and download community-created ones. With the mod.io plug-in, users can ask G-Assist to discover and install new plug-ins. With the latest update, users can also directly ask G-Assist what new plug-ins are available in the hub and install them using natural language, thanks to a mod.io plug-in. The recent G-Assist Plug-In Hackathon showcased the incredible creativity of the G-Assist community. Here’s a sneak peek of what they came up with: Some finalists include: Omniplay — allows gamers to use G-Assist to research lore from online wikis or take notes in real time while gaming Launchpad — lets gamers set, launch and toggle custom app groups on the fly to boost productivity Flux NIM Microservice for G-Assist — allows gamers to easily generate AI images from within G-Assist, using on-device NVIDIA NIM microservices The winners of the hackathon will be announced on Wednesday, Aug. 20. Building custom plug-ins is simple. They’re based on a foundation of JSON and Python scripts — and the Project G-Assist Plug-In Builder helps further simplify development by enabling users to code plug-ins with natural language. Mod It Like It’s Hot With RTX Remix  Classic PC games remain beloved for their unforgettable stories, characters and gameplay — but their dated graphics can be a barrier for new and longtime players. NVIDIA RTX Remix enables modders to revitalize these timeless titles with the latest NVIDIA gaming technologies — bridging nostalgic gameplay with modern visuals. Since the platform’s release, the RTX Remix modding community has grown with over 350 active projects and over 100 mods released. 
The mods span a catalog of beloved games like Half-Life 2, Need for Speed: Underground, Portal 2 and Deus Ex — and have amassed over 2 million downloads. In May, NVIDIA invited modders to participate in the NVIDIA and ModDB RTX Remix Mod Contest for a chance to win $50,000 in cash prizes. At Gamescom, NVIDIA announced the winners: Best Overall RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams Best Use of RTX in a Mod Winner: Painkiller RTX Remix, by Binq_Adams Runner-Up: Vampire: The Masquerade – Bloodlines – RTX Remaster, by Safemilk Most Complete RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams Runner-Up: I-Ninja Remixed, by g.i.george333 Community Choice RTX Mod Winner: Call of Duty 2 RTX Remix of Carentan, by tadpole3159 These modders tapped RTX Remix and generative AI to bring their creations to life — from enhancing textures to quickly creating images and 3D assets. For example, the Merry Pencil Studios modder team used a workflow that seamlessly connected RTX Remix and ComfyUI, allowing them to simply select textures in the RTX Remix viewport and, with a single click in ComfyUI, restore them. The results are stunning, with each texture meticulously recreated with physically based materials layered with grime and rust. With a fully path-traced lighting system, the game’s gothic horror atmosphere has never felt more immersive to play through. All mods submitted to the RTX Remix Modding Contest, as well as 100 more Remix mods, are available to download from ModDB. For a sneak peek at RTX Remix projects under active development, check out the RTX Remix Showcase Discord server. Another RTX Remix update coming in September will allow modders to create new particles that match the look of those found in modern titles. This opens the door for over 165 RTX Remix-compatible games to have particles for the first time. To get started creating RTX mods, download NVIDIA RTX Remix from the home screen of the NVIDIA app. 
Read the RTX Remix article to learn more about the contest and winners.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, productivity apps and more on AI PCs and workstations.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Join NVIDIA's Discord server to connect with community developers and AI enthusiasts for discussions on what's possible with RTX AI. Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.
  • How are you doing, friends?

    Fresh news from the gaming world! Roblox has decided to make some big changes to its policies after facing multiple lawsuits related to child safety. The gist is that they are restricting user-created content, in particular unrated experiences. Going forward, those experiences will be available only to their developers or to people who work with them.

    Personally, I love Roblox and see it as a great platform for learning and creativity, but child safety has to come first. There are plenty of experiences that may be inappropriate, and these steps could help protect our children.

    Let's keep an eye on how things develop, and always keep safety in mind while playing!

    https://www.engadget.com/gaming/pc/roblox-cracks-down-on-its-user-created-content-following-multiple-child-safety-lawsuits-193452150.html?src=rss

    #Roblox #ChildSafety #Gaming #Safety #UserGeneratedContent
    www.engadget.com
    Following a wave of lawsuits alleging that Roblox doesn't provide a safe environment for its underage users, the gaming platform made a series of sweeping updates to its policies. To address recent concerns, Roblox published a post on its website