


80 Level® is the best place for game developers, digital artists, animators, video game enthusiasts, and CGI professionals.
Watch Mr. Snake Get "Split in Half" in This Behind-the-Scenes Demo from The Bad Guys 2
The topic of cheating in 3D animation will likely always be one of the most fascinating parts of the craft. The notion that, unlike in games, for example, 3D animations can be bent and twisted in all sorts of ways behind the scenes, coupled with the fact that these mesmerizing imperfections usually remain unseen, makes it all the more exciting when distorted models and messed-up rigs do come to light.

Joining our growing library of such showcases is this impressive under-the-hood demo featuring Mr. Snake from The Bad Guys 2 being "split in half," figuratively speaking, shared by DreamWorks Animator David Guo. To make the character slither convincingly on screen, the creator simply used two rigs and skillfully hid the connection out of sight, giving the illusion of a single unified model. "The key to making him feel like a single, solid character was to make sure the overall 'volume' of his body felt consistent: when one part of the body enters, it pulls on another area of the body," the author notes.

Earlier, Guo also compared this particular scene from the movie to how it looked during the blocking stage. For a more extensive behind-the-scenes look at some of DreamWorks' films, we highly encourage you to visit the artist's Instagram page.

And here are some more mesmerizing instances of cheating in 3D animation for your viewing pleasure:

Here's How You Can Cut Corners & Cheat in 3D Animation
Breaking 3D Models to Achieve the Perfect Animation Shot
Disney Animator Shows That It's OK to Break Things to Get a Good Result
Pixar Animator Shows How 3D Character Mouths Look From Different Angles
Artists Share Broken 3D Models That Look Good on Camera

If you would like to learn more about how 3D animators cut corners, we also recommend checking out our interview with Kevin Temmer, the Lead Animator at Glitch Productions, who told us more about The Amazing Digital Circus' animation workflows, explained what "cheating" in 3D animation is, and discussed how they twist TADC's characters behind the scenes.

Don't forget to join our 80 Level Talent platform and our new Discord server, and follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
Designing Atmospheric WWI Plane Crash Scene In Abandoned German Asylum
Introduction

Hi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year.

Planning

Under the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects.

So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it.

Blockout & Composition

I started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement.

After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props. I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.

Asset Production Workflow

Once the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules.

Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.

As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.

For the vegetation, I used assets from Quixel Megascans. Since the pack didn't include vertical vegetation, I sourced a different ivy asset that contained vertical elements. I removed the leaves and kept only the branches. Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left.

Regarding the assets, I didn't use high-to-low poly baking in this project.
Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.

One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.

Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible. Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs.

All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask; the second for tileable textures, to maintain high quality during the texturing phase; and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.

Texturing

For the texturing, I wanted to experiment with a workflow I hadn't tried before.
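As a quick illustration of the texel-density bookkeeping described above (512 px/m with a 2048 px texture), here is a small arithmetic sketch. The helper functions are hypothetical names for illustration, not part of the Texel Density Checker add-on itself:

```python
# Sketch of texel-density arithmetic; hypothetical helpers, not the
# actual Texel Density Checker add-on API.

def world_coverage_m(texture_px: int, texel_density_px_per_m: float) -> float:
    """World-space size (in meters) that one 0-1 UV tile covers
    at a given texel density."""
    return texture_px / texel_density_px_per_m

def required_texture_px(target_size_m: float, texel_density_px_per_m: float) -> float:
    """Texture resolution needed to hold a surface of a given world size
    at a given texel density."""
    return target_size_m * texel_density_px_per_m

# At 512 px/m, a 2048 px tileable texture covers a 4 m span before
# repeating -- a convenient match for wall modules between 1 and 4 meters.
print(world_coverage_m(2048, 512))    # 4.0
print(required_texture_px(4.0, 512))  # 2048.0
```

This is why the same 2048 tileables could be reused across both the 1-4 m modules and the props without a visible drop in resolution.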
The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures. Tileable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.

Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node. All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.

I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.

For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop.
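Stepping back to the master material for a moment: the height-driven vertex blend it uses can be approximated in plain Python. This is a generic height-modulated lerp in the spirit of UE's HeightLerp material function, not its exact node math; the function names and the `sharpness` parameter are assumptions for illustration:

```python
# Generic approximation of a height-modulated vertex-color blend,
# in the spirit of Unreal's HeightLerp material function (not its
# exact implementation).

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def height_lerp(a: float, b: float, height: float, alpha: float,
                sharpness: float = 4.0) -> float:
    """Blend layer a toward layer b, driven by the painted vertex alpha
    but biased by a heightmap so the second layer creeps in where the
    height texture is high, instead of fading in uniformly."""
    t = clamp01((alpha + height - 1.0) * sharpness + 0.5)
    return lerp(a, b, t)

# No paint (alpha = 0) in a low spot stays fully on layer A;
# full paint (alpha = 1) on a high spot flips fully to layer B.
print(height_lerp(0.0, 1.0, height=0.0, alpha=0.0))  # 0.0
print(height_lerp(0.0, 1.0, height=1.0, alpha=1.0))  # 1.0
```

The height bias is what makes vertex-painted transitions read as dirt settling into brick recesses rather than as a soft airbrushed gradient.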
I only used Photoshop to combine all the baked corners into a single normal texture, as shown below.

Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled, and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.

Composition

When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.

Lighting

For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest.

For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked.
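For context on the next step: Unreal's default color-grading lookup table is a neutral 256×16 strip made of sixteen 16×16 tiles. A neutral LUT of that layout can be generated with a short sketch like this (pure Python, no image library; writing the pixels out to a PNG is omitted):

```python
# Generate the pixel values of a neutral 256x16 color-grading LUT strip,
# the layout used by Unreal's default RGBTable16x1 texture: red varies
# horizontally within each 16px tile, green varies vertically, and blue
# steps up once per tile.

def neutral_lut(size: int = 16):
    """Return a height x width grid of (r, g, b) floats in [0, 1]
    forming a neutral (identity) color-grading LUT."""
    width, height = size * size, size
    pixels = []
    for y in range(height):
        row = []
        for x in range(width):
            tile, u = divmod(x, size)     # which 16px tile, offset inside it
            r = u / (size - 1)            # red across each tile
            g = y / (size - 1)            # green down the strip
            b = tile / (size - 1)         # blue per tile
            row.append((r, g, b))
        pixels.append(row)
    return pixels

lut = neutral_lut()
print(lut[0][0])     # (0.0, 0.0, 0.0) -- black maps to black
print(lut[15][255])  # (1.0, 1.0, 1.0) -- white maps to white
```

Because this table maps every color to itself, any grade painted onto it in an external tool (such as the Resolve pass described here) is captured as the difference from neutral.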
Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default.

For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.

Conclusion

And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete.

Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn't sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way. One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice. Instead, focus on the overall composition and visual impact, especially from the player's point of view.

My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there's a real risk of dragging the project out much longer than necessary. As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.

Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!

Leandro Grasso, 3D Environment Artist

Interview conducted by Emma Collins
Portfolio Review Live Event Today – Don't Miss!
80 Level Community
Published 27 August 2025
Tags: Art-To-Experience Contest: A Creative Challenge by Emperia and 80 Level

Today at 3 PM UTC | 8 AM PT | 11 AM EDT, we will host a Portfolio Review with José Vega, founder and senior artist at Worldbuilders Workshop!

José will review the portfolios of our Showcase Competition winners and deliver an in-depth live critique. Expect actionable feedback on your strengths, as well as practical advice on refining your work to better align with your career goals. This is a unique opportunity to receive expert guidance that can help your portfolio shine and set you apart in the industry. Join us here!
Take a Look at This Impressive Recreation of Kowloon Walled City in Minecraft
3D creator Sluda Builds unveiled this impressive recreation of the real-life Kowloon Walled City, formerly located in Hong Kong, made within Minecraft.

The artist recreated the city's dense urban environment using the game's blocks, trying to capture the gritty aesthetic of the dangerous and overcrowded settlement. In a time-lapse video, Sluda Builds showcased the entire creation process, explaining each step, including 3D modeling, topography, the making of buildings, floors and stairs, facades, rooftops, surroundings, and other details. Have a closer look:

Sluda Builds is an architect by profession, and the creator transferred that expertise into the digital world by creating amazing projects in Minecraft. Also, check out another Minecraft-inspired project, with voxel blocks mapped onto a spherical planet, by 3D Artist Bowerbyte.

Don't forget to join our 80 Level Talent platform and our new Discord server, and follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
Bring Your MetaHumans to Life Using Houdini with the Latest UE5 Update
Epic Games has announced exciting updates to Unreal Engine's MetaHuman Creator. The latest release integrates it with SideFX Houdini, letting you combine the power of both toolsets and bring your MetaHuman characters to life with Houdini's effects.

With the latest MetaHuman Character Rig HDA update and expanded grooming tools, you can easily bring your MetaHumans into Houdini and use its full arsenal of procedural tools to add complex animation and effects. Creators can import and assemble the head, body, and textures of MetaHuman characters created in Unreal Engine's MetaHuman Creator.

There is also an update to Houdini's existing groom tools: you can now craft MetaHuman Creator-compatible hairstyles directly on your MetaHuman character, removing the need to switch back and forth with Unreal Engine. Please note that the MetaHuman Character Rig HDA requires Houdini 21.0 or later.

Yesterday, we shared August's free learning content from Epic Games, which includes tutorials on animating MetaHumans, creating Blueprint-controlled particle effects, and using Epic Online Services in your projects. If you want to learn more about MetaHumans, check out Marlon R. Nunez's experiment testing Live Link from an iPhone in Unreal Engine.

Learn more about the MetaHuman Character Rig HDA update here, and don't forget to join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
Take a Look at This Great Compound Bow Rig Set Up in Blender
Gloria Levine, Senior Editor
Published 27 August 2025

Ryan Lykos is on fire!

Ryan Lykos stepped away from body-part creation and presented a new Blender rig: a nice compound bow model with moving cams, the wheels that rotate as the string gets taut.

To hold the string on the cams, Lykos duplicated the string curve for each cam and constrained an empty to it using the Follow Path constraint, changing the offset value with drivers to move the string. Try it, and maybe you'll build your own rig with this technique.

Check out Lykos's other work on X/Twitter, like these finger and body rigs and a gun with an automated magazine.

Join our 80 Level Talent platform and our Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
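The Follow Path trick above can be pictured as a tiny driver expression: the constraint's offset slides the empty along the duplicated cam curve as the bow is drawn. Below is a minimal, hypothetical Python sketch of that mapping; the function name and the `max_roll` constant are illustrative, not taken from Lykos's actual file:

```python
# Illustrative sketch (not Lykos's real setup): a Follow Path constraint's
# offset factor moves an empty along the duplicated string curve. A driver
# can map the normalized draw amount (0 = relaxed, 1 = fully drawn) to that
# offset so the string stays seated on the rotating cam.

def cam_offset_factor(draw: float, max_roll: float = 0.25) -> float:
    """Map a normalized draw value to a Follow Path offset factor.

    `max_roll` is a hypothetical tuning constant: how far along the cam
    curve the string contact point travels at full draw.
    """
    draw = min(max(draw, 0.0), 1.0)  # clamp, as a driver expression would
    return draw * max_roll

print(cam_offset_factor(0.0))  # relaxed string: 0.0
print(cam_offset_factor(1.0))  # full draw: 0.25
```

In Blender, the same clamp-and-scale logic would live in a driver on the constraint's offset; the sketch only shows the arithmetic such an expression performs.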
XPPen Quiz — Winners Revealed!
80 Level Community
Published 26 August 2025

We’re thrilled to announce the results of our quiz in collaboration with XPPen! All participants who submitted the correct answers were entered into a random prize draw.

The lucky winners:

SunAngel
Mrzskoi
.arkestr
Kayaes
ACKLEY
Elinn_or

They will each get a Deco 01 V3 tablet, offering broader compatibility, enhanced performance, richer colors, and even more brilliance!

A big congratulations to our winners! Stay tuned, more exciting 80 Level contests and events are on the way.
Fur Grooming Techniques For Realistic Stitch In Blender
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity.
And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and, at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow. After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.

So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: first, to model everything completely from scratch, starting with a sphere; second, to reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable, so I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting there. Since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on it, ZBrush had already become my second main workspace: I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created with this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush:

I work with primary forms in ZBrush
Then check proportions in Blender
Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into two main sculpting parts:

The body: arms, legs, head, and ears
The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both a body and a facial rig, so I started sculpting with his mouth open.

While studying various references, I noticed something interesting: Stitch from the promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me most was how different the promo version of Stitch is from the one in the actual movie. They are essentially two separate models:

Different proportions
Different shapes
Different textures
Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on the hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. Fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.

Second, it's great anatomy practice, and practice is never a waste.
So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.

With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean, optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features.

Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

The first UDIM includes the head with ears, torso, arms, and legs
The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail carried by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

The base body: the primary color of his fur, with additional shading like a lighter tone on the front and a darker tone on the back and nape
The nose and ears: these zones demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn’t fit the style I wanted. So, I decided to push them toward a more realistic look.
This involved removing bright colors, adding more variation to the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears: slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
Organic detail: in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch’s body.
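The AO-multiply step described above is easy to reason about numerically. Here is a small, hypothetical Python sketch of what a Multiply-mode layer at roughly 35% opacity does to a single color channel; the function is illustrative, not part of the artist's files:

```python
def apply_ao_overlay(base: float, ao: float, opacity: float = 0.35) -> float:
    """Blend an AO value over a base color channel in Multiply mode.

    Layer-style blending: the multiplied result is mixed with the
    original base by the layer opacity. At opacity 0 the base is
    unchanged; at 1 it is fully multiplied by the AO value.
    """
    multiplied = base * ao
    return base * (1.0 - opacity) + multiplied * opacity

# A crevice (low AO) darkens noticeably; fully lit areas barely change.
print(apply_ao_overlay(0.8, 0.2))  # ≈ 0.576
print(apply_ao_overlay(0.8, 1.0))  # ≈ 0.8 (white AO leaves the base intact)
```

This is why a low-opacity AO layer adds volume without flattening the texture: only occluded regions are pulled down, and the effect scales smoothly with the opacity slider.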
I also created a separate texture for the fur. This one was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body’s color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality, most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.

To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
This separation gave me much more precision and full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. I also used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there.
These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase used only two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader that gave me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This added visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

Body rig, for posing and positioning the character
Facial rig, for expressions and emotions
Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it.
For the ears, I set up a relatively simple system of several bones connected with inverse kinematics. This gave me flexible, intuitive control during posing and allowed for dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses: Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop.
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish.

The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio, and I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn, many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist

Interview conducted by Gloria Levine
#fur #grooming #techniques #realistic #stitchFur Grooming Techniques For Realistic Stitch In BlenderIntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. 
Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.

So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years.
Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project. The first was to model everything completely from scratch, starting with a sphere. The second was to reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long.
When blocking, I use Blender in combination with ZBrush:
- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into a few main sculpting parts:
- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both a body and a facial rig. So I started sculpting with his mouth open.

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:
- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on the hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. Fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.

Second, it's great anatomy practice, and practice is never a waste.
So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features.

Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one.
This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:
- The first UDIM includes the head with ears, torso, arms, and legs
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:
- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front and a darker tone on the back and nape
- The nose and ears, which demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look.
This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:
- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail: in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body.
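The effect of that final AO layer is easy to sanity-check numerically. Below is a minimal Python sketch of the arithmetic behind a Multiply blend layer at roughly 35% opacity; the function name is my own illustration, not part of any Substance 3D Painter API, and the app's actual result may differ slightly depending on color space:

```python
def multiply_ao(base: float, ao: float, opacity: float = 0.35) -> float:
    """Blend an ambient-occlusion value over a base channel value
    via a Multiply layer at the given opacity.

    The fully multiplied result (base * ao) is mixed back with the
    untouched base, so a low opacity keeps the darkening subtle.
    """
    blended = base * ao  # full-strength Multiply
    return base * (1.0 - opacity) + blended * opacity

# Open surfaces (ao = 1.0) are untouched; crevices (low ao) darken gently.
print(round(multiply_ao(0.8, 1.0), 3))  # 0.8 (no occlusion, no change)
print(round(multiply_ao(0.8, 0.4), 3))  # 0.632 (occluded area, subtly darker)
```

In Painter itself this is just the layer's blend mode plus the opacity slider; the point of the reduced opacity is visible in the numbers, as even a strongly occluded crevice only loses about a fifth of its brightness instead of half.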
I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest-quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main particle system and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually.
This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, since textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned, micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there.
These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.

The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:
- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it.
For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used an ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character.

Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing. Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but at a very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop.
Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish.

The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine

-
Gaming Meets Streaming: Inside the Shift
After a long, busy day, you boot up your gaming device but don't quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?

This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, and Kick have transformed the gaming experience, turning gameplay into shared moments where gamers broadcast in real time while viewers watch, chat, learn, and discover new titles.

What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let's find out!

Why Do Gamers Love Streaming?

So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn't just one thing – it's a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let's break it down.

Entertainment at Your Own Pace

Sometimes, you just want to relax. Maybe you're too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.

Learning and Skill Development

Streaming isn't just for laughs – it's also one of the best ways to level up your own gameplay.
Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game's mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, and builds, or even to find out whether a game suits their playstyle before buying it. Think of it as education, but way more fun.

Social Connection and Community

One of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn't like watching TV – it's like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it's a go-to social space where friendships, inside jokes, and even fandoms grow.

Discovery of New Games and Trends

Ever found a game you now love just because you saw a streamer play it? You're not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams.

Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.

Inside Streaming Culture

Streaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever.
Anyone with personality and dedication can build a loyal following and become a cultural influencer.

If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.

For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences – they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.

How Streamers Are Reshaping the Gaming Industry

While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.

The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review.
Day-one streaming success has become make-or-break for many titles.

Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction, like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres – party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.

The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch Bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.

Video Game Streaming in Numbers

While it's easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here's a look at the data driving the movement.

Market Size & Growth

In 2025, the global Games Live Streaming market is projected to generate billion in revenue.
By 2030, that figure is expected to reach billion, growing at an annual rate of 4.32%. The average revenue per user in 2025 stands at showing consistent monetization across platforms. China remains the single largest market, expected to bring in billion this year alone.
#gaming #meets #streaming #inside #shiftGaming Meets Streaming: Inside the ShiftAfter a long, busy day, you boot up your gaming device but don’t quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience — turning gameplay into shared moments where gamers broadcast in real-time while viewers watch, chat, learn, and discover new titles.What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let’s find out!Why Do Gamers Love Streaming?So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn’t just one thing – it’s a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let’s break it down.Entertainment at Your Own PaceSometimes, you just want to relax. Maybe you’re too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. 
Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.Learning and Skill DevelopmentStreaming isn’t just for laughs – it’s also one of the best ways to level up your own gameplay. Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game’s mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, builds, or even to understand if a game suits their playstyle before buying it. Think of it as education, but way more fun.Social Connection and CommunityOne of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn’t like watching TV – it’s like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real-time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it’s a go-to social space where friendships, inside jokes, and even fandoms grow.Discovery of New Games and TrendsEver found a game you now love just because you saw a streamer play it? You’re not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams.Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.Inside Streaming CultureStreaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. 
When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever. Anyone with personality and dedication can build a loyal following and become a cultural influencer.

If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.

For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences – they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.

How Streamers Are Reshaping the Gaming Industry

While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.

The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence.
A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review. Day-one streaming success has become make-or-break for many titles.

Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction, like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres – party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.

The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch Bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.

Video Game Streaming in Numbers

While it’s easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story.
From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here’s a look at the data driving the movement.

Market Size & Growth

In 2025, the global Games Live Streaming market is projected to generate billions in revenue, and it is expected to keep growing through 2030 at an annual rate of 4.32%. The average revenue per user in 2025 shows consistent monetization across platforms. China remains the single largest market and is expected to contribute the largest share of revenue this year alone.
Creating a Detailed Helmet Inspired by Fallout Using Substance 3D
Introduction

Hi! My name is Pavel Vorobyev, and I'm a 19-year-old 3D Artist specializing in texturing and weapon creation for video games. I've been working in the industry for about 3 years now. During this time, I've had the opportunity to contribute to several exciting projects, including Arma, DayZ, Ratten Reich, and a next-gen sci-fi shooter. Here's my ArtStation portfolio.

My journey into 3D art began in my early teens, around the age of 13 or 14. At some point, I got tired of just playing games and started wondering: "How are they actually made?" That question led me to explore game development. I tried everything – level design, programming, game design – but it was 3D art that truly captured me.

I'm entirely self-taught. I learned everything from YouTube, tutorials, articles, and official documentation, gathering knowledge piece by piece. Breaking into the commercial side of the industry wasn't easy: there were a lot of failures, no opportunities, and no support. At one point, I even took a job at a metallurgical plant. But I kept pushing forward, kept learning and improving my skills in 3D. Eventually, I got my first industry offer – and that's when my real path began.

Today, I continue to grow, constantly experimenting with new styles, tools, and techniques. For me, 3D isn't just a profession – it's a form of self-expression and a path toward my dream. My goal is to build a strong career in the game industry and eventually move into cinematic storytelling in the spirit of Love, Death & Robots.

Astartes YouTube channel

I also want to inspire younger artists and show how powerful texturing can be as a creative tool. To demonstrate that, I'd love to share my personal project PU – Part 1, which reflects my passion and approach to texture art. In this article, I'll be sharing my latest personal project – a semi-realistic sci-fi helmet that I created from scratch, experimenting with both form and style.
It's a personal exploration where I aimed to step away from traditional hyperrealism and bring in a touch of artistic expression.

Concept & Project Idea

The idea behind this helmet project came from a very specific goal – to design a visually appealing asset with rich texture variation and a balance between stylization and realism. I wanted to create something that looked believable, yet had an artistic flair. Since I couldn't find any fitting concepts online, I started building the design from scratch in my head. I eventually settled on creating a helmet as the main focus of the project. For visual direction, I drew inspiration from post-apocalyptic themes and the gritty aesthetics of Fallout and Warhammer 40,000.

Software & Tools Used

For this project, I used Blender, ZBrush, Substance 3D Painter, Marmoset Toolbag 5, Photoshop, and RizomUV. I created the low-poly mesh in Blender and developed the concept and high-poly sculpt in ZBrush. In Substance 3D Painter, I worked on the texture concept and final texturing. Baking and rendering were done in Marmoset Toolbag, and I used Photoshop for some adjustments to the bake. UV unwrapping was handled in RizomUV.

Modeling & Retopology

I began the development process by designing the concept based on my earlier references – Fallout and Warhammer 40,000. The initial blockout was done in ZBrush, and from there, I started refining the shapes and details to create something visually engaging and stylistically bold.

After completing the high-poly model, I moved on to the long and challenging process of retopology. Since I originally came from a weapons-focused background, I applied the knowledge I gained from modeling firearms. I slightly increased the polycount to achieve a cleaner and more appealing look in the final render – reducing visible faceting. My goal was to strike a balance between visual quality and a game-ready asset.

UV Mapping & Baking

Next, I moved on to UV mapping.
There's nothing too complex about this stage, but since my goal was to create a game-ready asset, I made extensive use of overlaps. I did the UVs in RizomUV. The most important part is to align the UV shells into clean strips and unwrap cylinders properly into straight lines.

Once the UVs were done, I proceeded to bake the normal and ambient occlusion maps. At this stage, the key is having clean UVs and solid retopology – if those are in place, the bake goes smoothly.

Texturing: Concept & Workflow

Now we move on to the most challenging stage – texturing. I aimed to present the project in a hyperrealistic style with a touch of stylization. This turned out to be quite difficult, and I went through many iterations. The most important part of this phase was developing a solid texture concept: rough decals, color combinations, and overall material direction. Without that foundation, it makes no sense to move forward with the texturing. After a long process of trial and error, I finally arrived at results I was satisfied with.

Then I followed my pipeline:
1. Working on the base materials
2. Storytelling and damage
3. Decals
4. Spraying, dust, and dirt

Working on the Base Materials

When working on the base materials, the main thing is to work with the physical properties and texture. You need to extract the maximum quality from the generators before manual processing. The idea was to create the feeling of an old, heavy helmet that had lived its life and had previously been painted a different color – to make it battered and, in a sense, rotten.

It is important to pay attention to noise maps – Dirt 3, Dirt 6, White Noise, Flakes – and to add the feel of old metal with custom normal maps. I also mixed in photo textures for a special charm.

Phototexture
Custom Normal Map Texture

Storytelling & Damage

Gradients play an important role in the storytelling stage.
They make the object artistically dynamic and beautiful, adding individual shades that bring the helmet to life. Everything else is done manually. I found a bunch of photos of old World War II helmets and turned their damage into alphas using Photoshop. I drew the damage with those alphas, trying to clearly separate the material into old paint, new paint, rust, and bare metal.

I did the rust using MatFX Rust from the standard Substance 3D Painter library. I drew beautiful patterns using paint in multiply mode – this quickly helped to recreate the rust effect. Metal damage and old paint were more difficult: due to the large number of overlaps in the helmet, I had to draw the patterns carefully, minimizing the visibility of overlaps.

Decals

I drew the decals carefully, sticking to the concept, which added richness to the texture.

Spray Paint & Dirt

For spray paint and dirt, I used a long-established weapon template consisting of dust particles, sand particles, and spray paint. I analyzed references and applied them to crevices and other logical places where dirt could accumulate.

Rendering & Post-Processing

I rendered in Marmoset Toolbag 5 using a rendering approach that I developed together with the team. The essence of the method is to simulate "RAW frames." Since Marmoset does not have such functions built in, I worked with the 32-bit EXR format, which significantly improves the quality of the render: the shadows are smooth, without artifacts or broken gradients. I assembled the scene using Quixel Megascans. After rendering, I did post-processing in Photoshop using the Camera Raw filter.

Conclusion & Advice for Beginners

That's all! For beginners, or those who have been unsuccessful in the industry for a long time, my advice is to follow your dream and not listen to anyone else. Success is a matter of time and skill! Talent is not something you are born with; it is something you develop.
Work on yourself and your work, put your heart into it, and you will succeed!

Pavel Vorobyev, Texture Artist
Interview conducted by Gloria Levine
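As a small aside on the 32-bit EXR point above: the benefit Pavel describes (smooth shadows without broken gradients) comes from float precision avoiding 8-bit quantization. Here is a minimal, self-contained Python sketch of that effect, using made-up sample values for illustration – it is not part of the actual Marmoset pipeline:

```python
# Illustrative only: why 32-bit float render output preserves shallow
# shadow gradients that 8-bit quantization would collapse into bands.

def quantize_8bit(v):
    """Snap a 0..1 float to the nearest of 256 levels (8-bit storage)."""
    return round(v * 255) / 255

# A very shallow shadow gradient: 64 samples spanning a tiny value range
# (hypothetical numbers chosen to make the effect obvious).
gradient = [0.100 + i * 0.0001 for i in range(64)]

# In float, every sample stays distinct; in 8-bit, most samples snap to
# the same one or two levels, which appears on screen as visible banding.
levels_float = {v for v in gradient}
levels_8bit = {quantize_8bit(v) for v in gradient}

print(len(levels_float), "float levels vs", len(levels_8bit), "8-bit levels")
```

All 64 distinct float samples survive, while 8-bit storage collapses them into only a couple of levels – exactly the "broken gradients" a 32-bit EXR avoids, leaving post-processing (such as Camera Raw) room to push shadows without artifacts.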
Winners of the Showcase Competition
1st place — Feixiang Long
This work is a secondary design inspired by Toothwu's original artwork, infused with numerous personal touches.

2nd place — BaiTong LI
Attic Studio is an interior Unreal Engine environment inspired by the Resident Evil series.

3rd place — Ouanes Bilal
The main character of Grendizer: The Feast of the Wolves.

Portfolio review with José Vega

José Vega is the founder and Senior Concept Artist of Worldbuilders Workshop, working mainly in the video games and film industry. Some of his clients include Hirez Studios, Replicas, Shy the Sun, Wizards of the Coast, Cryptozoic Entertainment, Blur Entertainment, and others. The exact date of the Live Event will be announced soon. Stay tuned!

A huge congratulations to our winners and heartfelt thanks to all the amazing creators who took part! Your talent keeps our community inspired and energized. Want to explore more outstanding projects? Join us on Discord and stay tuned for the next 80 Level challenge!
Joseph Jegede’s Journey into Environment Art & Approach to the Emperia x 80 Level Contest
Introduction

Hello, I am Joseph Jegede. I was born in Nigeria and lived and studied in London, which is also where I started my career as a games developer. Before my game dev career, I was making websites and graphic designs as a hobby, but felt an urge to make static images animated and respond to user input. I studied Computer Science at London Metropolitan University for my bachelor’s degree.

I worked at Tivola Publishing GmbH, where we developed:
- Wildshade: Unicorn Champions (Console Trailer: YouTube)
- Wildshade Fantasy Horse Races (iOS: App Store, Android: Google Play)

This project was initially developed for mobile platforms and was later ported to consoles, which recently launched. I also worked on a personal mobile game project:
- Shooty Gun (Release Date: May 17, 2024, Play)

Becoming an Environment Artist

With the release of Unreal Engine 5, the ease of creating and sculpting terrain, then using Blueprints to quickly add responsive grass, dirt, and rock materials to my levels, made environment art very enticing and accessible for me. Being a programmer, I often felt the urge to explore more aspects of game development, since the barrier to entry has been completely shattered.

I wouldn’t consider myself a full-blown artist just yet. I first learned Blender to build some basic 3D models. We can call it “programmer art” – just enough to get a prototype playable.

The main challenge was that most 3D software required subscriptions, which wasn't ideal for someone just learning without commercial intent. Free trials helped at first, but I eventually ran out of emails to renew them. Blender was difficult to grasp initially, but I got through it with the help of countless YouTube tutorials. Whenever I wanted to build a model for a prototype, I would find a tutorial making something similar and follow along. On YouTube, I watched and subscribed to Stylized Station.
I also browsed ArtStation regularly for references and inspiration for the types of levels I wanted to build.

Environment art was a natural next step in my game dev journey. While I could program gameplay and other systems, I lacked the ability to build engaging levels to make my games feel polished. In the kinds of games I want to create, players will spend most of their time exploring environments. They need to look good and contain landmarks that resonate with the player. My main sources of inspiration are games I’ve played. Sometimes I want to recreate the worlds I've explored. I often return to ArtStation for inspiration and references.

Deep Dive Into the Art-To-Experience Contest Submission

The project I submitted was originally made for the 80 Level x Emperia contest. Most of the assets were provided as part of the contest. The main character was created in Blender, and the enemy model was a variant of the main character with some minor changes and costume modifications. Animations were sourced from Mixamo and imported into Unreal Engine 5. Texturing and painting were done in Adobe Substance 3D Painter, and materials were created in UE5 from the exported textures.

Before creating the scene in UE5, I gathered references from ArtStation and Google Images. These were used to sculpt a terrain heightmap. Once the level’s starting point and boss area were defined, I added bamboo trees and planned walkable paths around the map.

I created models in Blender and exported them to Substance 3D Painter. Using the Auto UV Unwrap tool, I prepared the models for texturing. Once painted, I exported the textures and applied them to the models in UE5. This workflow was smooth and efficient.

In UE5, I converted any assets meant for level placement into foliage types. This allowed for both random distribution and precise placement using the foliage painter tool, which sped up level design significantly. UE5 lighting looked great out of the box.
I adjusted the directional light, fog, and shadows to craft a forest atmosphere using the built-in day/night system.

I was able to use Emperia's Creator Tools plug-in to set up my scene. The great thing about the tutorial is that it's interactive – as I complete the steps in the UE5 editor, the tutorial window updates and reassures me that I’ve completed each task correctly. This made the setup process easier and faster. Setting up panoramas was also simple – pretty much drag and drop.

Advice For Beginners

One major issue is the rise of AI tools that generate environment art. These tools may discourage beginners who fear they can’t compete. If people stop learning because they think AI will always outperform them, the industry may suffer a creativity drought.

My advice to beginners:
- Choose a game engine you’re comfortable with – Unreal Engine, Unity, etc.
- Make your idea exist first, polish later. Use free assets from online stores to prototype.
- Focus on creating game levels with available resources. The important part is getting your world out of your head and into a playable form.
- Share your work with a community when you're happy with it.
- Have fun creating your environment – if you enjoy it, others likely will too.

Joseph Jegede, Game Developer
Interview conducted by Theodore McKenzie
Payments in the Americas
The Americas, led by the United States, Canada, and Brazil, now account for more than billion USD in annual video game revenue. This is one of the most valuable and competitive regions in global gaming, where success depends not just on great content, but on delivering seamless, localized checkout experiences. As players across North and South America demand more control, flexibility, and speed when making purchases, the payment methods developers offer can directly impact revenue, retention, and market expansion.

Meeting gamers wherever they want to pay is no longer optional.

United States: Faster and installment options gain steam

In the U.S., traditional credit and debit card dominance is waning. Players are adopting faster, bank-linked payment options and installment-based methods that offer both security and flexibility.

Pay by Bank has rapidly grown in popularity, especially among mobile and younger users who prioritize speed and security. As of early 2025, nearly 9 million Americans use Pay by Bank each month. With major retailers backing the method, transaction volume is expected to surpass billion this year.

In parallel, Affirm has emerged as a top Buy Now, Pay Later (BNPL) provider in the U.S. and beyond. Affirm has processed over billion in transactions over the past five years and now serves nearly 17 million active users. By enabling purchases in manageable installments, BNPL increases accessibility and boosts average transaction size, especially for higher-value bundles, subscriptions, and digital add-ons.

Canada: Flexibility drives adoption

The Canadian market shares many consumer behaviors with its U.S. counterpart, but it has its own unique payment dynamics. Canadian gamers, particularly younger ones, are showing strong demand for installment-based options that give them more control over spending.

Affirm's footprint in Canada is expanding fast. As of February 2025, Affirm is integrated at checkout across more than 279,000 retailers.
This early adoption wave shows how developers can gain an edge by offering localized, flexible payments tuned to consumer expectations.

Brazil: Mobile-first, subscription-ready

Brazil stands out as one of the most mobile-driven gaming economies globally. Recurring payments and digital wallets are the default here, not the exception.

Mercado Pago is one of the most widely adopted payment platforms in Brazil. As of Q1 2025, it reported 64 million monthly active users, a 31% year-over-year increase. For game developers, this platform isn't just another option; it's critical infrastructure. Its recurring billing features make it especially well-suited to live service games, battle passes, and subscription models.

By adding support for Mercado Pago, Xsolla helps developers enable long-term monetization and retention while removing friction for a massive mobile audience that expects fast, familiar, and reliable payment flows.

One infrastructure, multiple markets

These regional payment trends aren't just interesting; they're actionable. Developers who integrate local methods can significantly increase conversion rates, reduce purchase drop-offs, and build trust in highly competitive markets.

Platforms like Xsolla Pay Station now provide integrated support for these local options: Pay by Bank, coming soon in the U.S., Affirm in both the U.S. and Canada, and recurring billing via Mercado Pago in Brazil. With a single implementation, developers can reach more players using payment tools they already trust.

Why it matters

The stakes are high. Without support for local payment methods, developers risk underperforming in key markets. A generic global checkout can't match the expectations of users who are used to specific transaction styles, like fast confirmation via Pay by Bank or the flexibility of BNPL.

And when users don't see familiar, trusted options, they abandon their carts. That's not just a missed opportunity, it's a loss of lifetime value.
Retention and loyalty begin at the first purchase. Providing secure, localized checkout experiences lays the foundation for long-term engagement.

What developers should do now

To grow in the Americas, it's no longer enough to simply localize game content. Payment localization is equally vital. Developers expanding into the U.S., Canada, or Brazil should evaluate whether their current checkout options match how players in those countries actually prefer to pay.

Supporting Pay by Bank and Affirm in the U.S. opens the door to millions of gamers who want speed and flexibility. Adding Affirm in Canada addresses a growing demand among younger users. Enabling recurring billing through Mercado Pago in Brazil unlocks subscription revenue in a market that's mobile-first by default.

As the competitive landscape shifts, aligning payment infrastructure with regional preferences isn't just smart, it's essential. Game developers who do it well will not only unlock more revenue but also build stronger, more loyal player communities in the process.

Read the original article here
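To illustrate the installment mechanics BNPL relies on, here is a hypothetical helper (not Affirm's actual schedule, which layers on its own interest and fee rules) that splits a purchase into equal installments without losing cents to rounding:

```python
def split_installments(total_cents: int, n: int) -> list[int]:
    # Divide a purchase into n installments; leftover cents land on the
    # first payment so the installments always sum to the exact total.
    base, remainder = divmod(total_cents, n)
    return [base + remainder] + [base] * (n - 1)

# A hypothetical $59.99 bundle paid in 4 installments:
print(split_installments(5999, 4))  # → [1502, 1499, 1499, 1499]
```

Assigning the remainder up front is a common convention in billing code; the important invariant is that the schedule reconciles exactly with the charged total.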
UModeler X Personal Is Officially Launched
UModeler Inc., the team behind innovative 3D content creation tools, has officially launched UModeler X Personal, an all-in-one solution for modeling, painting, and animation directly inside the Unity Editor.

"UModeler X Personal includes powerful features such as modeling tools, rigging, painting, curve mesh tools, and 3D text – all natively within the Unity Editor," said the company. The tool is built to be beginner-friendly: most users can get up and running in under an hour, with significant savings in both time and production cost. UModeler X Personal is available now via UModeler's official website, and new users can start with a free trial.

To celebrate the launch, UModeler has announced exclusive upgrade offers for its existing Unity Asset Store users:

UModeler X Plus purchasers will receive one year of UModeler X Personal for free, along with a Permanent Fallback License, which ensures continued access even without an active subscription.
Original UModeler users will be eligible for a 30% discount on their first year of Personal. If they maintain a one-year subscription, they will also unlock a lifetime license at no additional cost.

"Many Unity creators are looking for ways to simplify the complex 3D creation process," said Jae-Sik Hwang, CEO of UModeler. "UModeler X Personal is built for them – providing a powerful yet intuitive environment to create freely and express their ideas without technical barriers."

You may try UModeler X Personal for free here. And, join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
Tear & Stretch This Cloth Simulation In Your Browser
Honestly, there's no such thing as too many interactive browser-based simulations. Software Engineer Michal Zalobny has created a cloth simulation from scratch, rendered using his custom WebGL2 engine.

To boost performance, all points and joints are instanced, and Michal has applied quaternion math to accurately define the orientation of the joints. He was inspired by an article by Marian Pekár on how to use Verlet integration to write a simple 2D cloth simulation in C++.

Give it a try here, and have a look at Claudio Z.'s similar cloth simulation that tears when dragged by the mouse. Claudio Z. has also made his project available online.
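The Verlet approach mentioned above is compact enough to sketch. Below is a minimal 2D point-and-constraint core (a generic illustration of the technique, not Zalobny's WebGL2 implementation, and the gravity/timestep constants are arbitrary): each point stores its previous position instead of a velocity, and distance constraints nudge joined points back toward their rest length. Tearing is modeled by removing a joint once it stretches past a threshold.

```python
import math

GRAVITY = 980.0   # downward acceleration, hypothetical pixel units
DT = 1.0 / 60.0   # fixed timestep

class Point:
    def __init__(self, x: float, y: float, pinned: bool = False):
        self.x, self.y = x, y
        self.px, self.py = x, y  # previous position doubles as velocity state
        self.pinned = pinned

def verlet_step(points):
    # Position Verlet: the implicit velocity is (current - previous position),
    # so integration needs no explicit velocity vector.
    for p in points:
        if p.pinned:
            continue
        vx, vy = p.x - p.px, p.y - p.py
        p.px, p.py = p.x, p.y
        p.x += vx
        p.y += vy + GRAVITY * DT * DT

def satisfy_constraint(a: Point, b: Point, rest: float):
    # Move each free endpoint half the error so the joint relaxes
    # toward its rest length; iterate for stiffer cloth.
    dx, dy = b.x - a.x, b.y - a.y
    dist = math.hypot(dx, dy) or 1e-9
    correction = (dist - rest) / dist * 0.5
    if not a.pinned:
        a.x += dx * correction
        a.y += dy * correction
    if not b.pinned:
        b.x -= dx * correction
        b.y -= dy * correction
```

A full cloth is just a grid of such points with horizontal and vertical joints, with the top row pinned; dragging with the mouse moves points directly, and deleting over-stretched joints produces the tearing seen in both demos.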
Setting Up an Explorable Desert Environment With Emperia's Creator Tools Plug-in
Introduction

Hi, I'm Berkay Dobrucali, Co-Founder of JustBStudios. I also work full-time as the Unity Team Lead at Hivemind Studios. Since my last interview with 80 Level, we won a Unity Award in the Best Artistic Content category with our Stylized Nature pack on behalf of Hivemind. These days, we've been focusing on both Unreal and Unity environments, working on some exciting new projects. I'm also continuing to bridge the gap between platforms by converting Unreal content into Unity.

Becoming an Environment Artist

Since I studied interior architecture, I started modeling houses and furniture using 3ds Max during my early university years. After graduation, I worked professionally in the field for a while. Later on, I joined an asset creator studio, where I began learning Unity. Over time, I found myself becoming more interested in environment design, lighting, and optimization rather than just modeling.

The biggest challenge was starting from scratch and having to learn so many different aspects at once. But as time passed, I improved steadily and began to develop a broader understanding across various areas of environment art.

I initially started with 3ds Max and became quite proficient in it. Later on, I added Photoshop to my workflow. As I specialized in both Unreal and Unity, I also found myself using Photoshop and Premiere Pro occasionally for tasks like texture editing and video editing when needed.

In the beginning, I benefited a lot from Udemy and free YouTube courses. Watching a lot of tutorials and repeating the process over and over was key to improving my skills.

Environment art has always fascinated me because it has the power to directly convey atmosphere to the player. As an interior architect, I'm especially drawn to how a space can contribute to storytelling and immerse the player in a narrative. I often find inspiration from modern games and artistic platforms like ArtStation.
Additionally, abandoned real-world structures and nature serve as important sources of inspiration for me.

Art-to-Experience

I was involved in the project as the Art Director. The project was completed by three people. One of my teammates, who is also my Co-Founder, Begüm Dobrucali, was responsible for modeling and texturing the assets in the pack. She first created high-poly meshes in Blender, then baked them down to low-poly versions. For texturing, she used Substance 3D Painter.

Once the asset and texture creation were complete, we asked another team member, Sude Kömür, to create the level design using the assets, based on the references we gathered. We stayed in close communication throughout this process and managed to create a compelling level. Afterward, we moved on to lighting. Since it was a desert environment, we used warm-toned directional lighting to illuminate the scene.

The main challenge was placing the level elements correctly and ensuring everything was scaled properly to human proportions. Otherwise, the pack would have looked inconsistent.

Using Emperia's Creator Tools plug-in was a smooth and enjoyable experience for me. The plug-in came with clear and detailed guidelines, which allowed me to follow each step carefully and complete the necessary tasks with ease. This comprehensive documentation made the entire process much faster and more efficient on my end. Overall, thanks to its user-friendly interface and logical workflow, working with the plug-in was straightforward and significantly simplified my work.

Thoughts on the Digital Art Industry

One of the major issues in the industry is how quickly content is consumed and how often the artist's effort goes unrecognized. Additionally, the uncontrolled use of AI-generated content poses a threat to many artists.
I believe we need more respect and recognition for the craft and the people behind it.

Advice for Beginners

Beginners should focus on building a solid foundation and maintaining patience throughout the learning process. They should pay special attention to asset creation, level design, and lighting. I recommend starting with free resources on YouTube, and once they reach a certain level, diving deeper into paid courses for more advanced learning. Additionally, working on small but achievable projects is very important for growth.

JustBStudios Team

Interview conducted by Theodore McKenzie
Delve into a Calm Fantasy Atmosphere with This WIP UE5 Game
Solo Developer Heevak, the creator of the upcoming atmospheric narrative-driven game LIA, shared a new glimpse of the WIP gameplay, comparing two versions of the footage – with and without a footstep particle effect.

The developer crafted a dreamy, fantasy-inspired world with beautiful nature, which, complete with high-quality sound effects, creates a truly wonderful atmosphere. LIA doesn't have a Steam page yet, and Heevak has been sharing updates on the project on X/Twitter. The game already features diverse environments with stunning stylized visuals brought to life using Unreal Engine 5.
See How You Can Make EOTECH EXPS3 & G45 Scopes in 3D
Introduction

My name is Ayush Banik, a 3D artist from India. I got into 3D through a random Blender tutorial back in late 2022, starting with that classic donut. I have worked on weapons for two games that are still under NDA, as well as some foliage and environment props for Dekogon. I'm still in engineering college, so working full-time is a bit difficult.

EOTECH EXPS3 & G45

The EOTECH EXPS3 scope and the G45 magnifier are part of my Full Internal AR-15 project. For references, I collected many high-quality images from EOTECH's official website as well as additional ones for texturing from The Weapon Room Discord channel. The aim of this project was to improve my texturing skills for black anodized and Cerakote coatings.

I modeled this scope the same way I model all my other work: using a CAD/Blender live boolean workflow. For these two scopes in particular, I used Plasticity for modeling, MOI3D for exporting, and Blender for the low poly. The high-poly was derived in ZBrush using the process: DynaMesh > Polish > Decimate.

For the topology, I exported the mesh using MOI3D and fixed the low poly in Blender. Some of the add-ons I used were Resample Mesh, LoopTools, etc. I always unwrap in Blender, straighten UVs using TexTools, and pack them with UVPackmaster. As for texture sets, each sight has its own UV set, since this is part of a weapon with removable attachments.

Materials

I love to talk about materials!

The metal was very simple: a base with dark diffuse and specular, since it's an anodized material, which is non-metallic and usually dark in nature. This was followed by a gloss and specular grain, and a metallic grain on the normal for the height. The next step is to put more normal details onto the surface, like milling.

After that, I moved on to the color and gloss details. I added oil and fingerprint smudges layer by layer, as can be seen in the GIF below.

After that, moving on to the dust, I did it in a way I really don't recommend – manually.
Every piece of dust was placed using a grainy noise stencil. Moving on to the sticker residue: I started with a little bit of glue and paper residue and worked my way up to dust.

Parallax & Magnification Effects

First, I needed a height map, which can be created using just a simple black square. Next, for the depth, the following settings work very well, although depending on the size of the reticle, the "depth" option can be adjusted.

Transmission set to Thin Surface works best, with "Use Albedo" turned on. Reflectivity should be set to Specular, with Conserve Energy turned on. And, of course, the alpha of the reticle should be in emission. Anything below 0.5 transparency works fine.

One more important thing you will need in Marmoset Toolbag for this to work is in the Render settings: transmission needs to be at least 3 for the magnification to work.

Magnification: The single most important setting for magnification is the Refractive Index under Transmission. Refractive Index at 1 = no magnification. As you increase the RI, you get higher and higher magnification, along with more distortion.

Rendering

Rendering the asset was a huge challenge. I didn't want to do a flat render on a flat background. There was also the element of showing off my WIP gun, as the EOTECH project is just part of the full AR-15 project.

Here's my rendering setup: 20% of a SWAT guy holding a gun. A major mistake I made was not scaling the SWAT model – my man is 6'9" tall. I should have scaled him to 6'.

I use a lot of directional lights to light my scene because I don't like to depend on HDRIs, so here's my light setup without any directional lights and using just a simple lighting HDRI. After this, I added my fill lights. Then I added a few more key lights to just brighten everything up. Here are my lights.

Conclusion

A part of the appeal of creating weapons is to stand out, and I think that's the most important factor if you want to get more reach. Most of the weapons have already been made over the past 30 years of first-person shooter history. The trick is to either make the best version yet or at least add some elements of your own that no one has thought of before. I added the sticker and markers as part of that, not to mention the environment renders compared to renders on a flat white or black background.

One of the biggest challenges is making your work stand out. For example, when someone searches for an M4 on ArtStation, there will be a thousand thumbnails that appear, but the one from KEAL will always stand out to both beginners and veterans alike.

Ayush Banik, Hard Surface Artist

Interview conducted by Gloria Levine
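The Refractive Index behavior described in the magnification section follows from basic optics. This small sketch is plain Snell's law, not Marmoset Toolbag's internal renderer math, but it shows why raising the RI bends rays more strongly toward the surface normal, which is what produces stronger magnification (and, at the edges of the lens, more distortion):

```python
import math

def refraction_angle(theta_incident_deg: float, refractive_index: float) -> float:
    # Snell's law with air (n = 1) on the incident side:
    # sin(theta_transmitted) = sin(theta_incident) / n.
    s = math.sin(math.radians(theta_incident_deg)) / refractive_index
    return math.degrees(math.asin(s))

# A 30-degree incoming ray bends more as the index rises:
for ri in (1.0, 1.3, 1.6):
    print(f"RI {ri}: ray continues at {refraction_angle(30.0, ri):.1f} degrees")
```

At RI = 1 the ray passes straight through (no magnification, matching the note above); increasing RI deflects it further, so the lens gathers rays from a narrower field and the reticle area appears enlarged.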
#see #how #you #can #makeSee How You Can Make EOTECH EXPS3 & G45 Scopes in 3DIntroduction My name is Ayush Banik, a 3D artist from India. I got into 3D through a random Blender tutorial back in late 2022, starting with that classic donut. I have worked on weapons for two games that are still under NDA, as well as some foliage and environment props for Dekogon. I'm still in engineering college, so working full-time is a bit difficult.EOTECH EXPS3 & G45The EOTECH EXPS3 scope and the G45 magnifier are part of my Full Internal AR-15 project. For references, I collected many high-quality images from EOTECH's official website as well as additional ones for texturing from The Weapon Room Discord channel. The aim of this project was to improve my texturing skills for black anodized and Cerakote coatings.I modeled this scope the same way I model all my other work: using a CAD/Blender live boolean workflow. For these two scopes in particular, I used Plasticity for modeling, MOI3D for exporting, and Blender for the low poly. The high-poly was derived in ZBrush using the process: DynaMesh > Polish > Decimate.For the topology, I exported the mesh using MOI3D and fixed the low poly in Blender. Some of the add-ons I used were Resample Mesh, LoopTools, etc. I always unwrap in Blender, straighten UVs using TexTools, and pack them with UVPackmaster. As for texture sets, each sight has its own UV set, since this is part of a weapon with removable attachments.MaterialsI love to talk about materials!The metal was very simple: a base with dark diffuse and specular, since it’s an anodized material, which is non-metallic and usually dark in nature. This was followed by a gloss and specular grain, and a metallic grain on the normal for the height.The next step is to put more normal details onto the surface, like milling.After that, I moved on to the color and gloss details. 
I added oil and fingerprint smudges layer by layer, as can be seen in the GIF below:

After that, moving on to the dust, I did it in a way I really don't recommend: manually. Every piece of dust was placed using a grainy noise stencil.

Moving on to the sticker residue: I started with a little bit of glue and paper residue and worked my way up to dust.

Parallax & Magnification Effects

First, I needed a height map, which can be created using just a simple black square. Next, for the depth, the following settings work very well, although the "depth" option can be adjusted depending on the size of the reticle.

Transmission set to Thin Surface works best, with "Use Albedo" turned on. Reflectivity should be set to Specular, with Conserve Energy turned on. And, of course, the alpha of the reticle should be in emission. Anything below 0.5 transparency works fine.

One more important thing you will need in Marmoset Toolbag for this to work is in the Render settings: transmission needs to be at least 3 for the magnification to work.

Magnification:

The single most important setting for magnification is the Refractive Index under Transmission. A Refractive Index of 1 means no magnification. As you increase the RI, you get higher and higher magnification, along with more distortion.

Rendering

Rendering the asset was a huge challenge. I didn't want to do a flat render on a flat background. There was also the element of showing off my WIP gun, as the EOTECH project is just part of the full AR-15 project.

Here's my rendering setup: 20% of a SWAT guy holding a gun. A major mistake I made was not scaling the SWAT model: my man is 6'9" tall. I should have scaled him to 6'.

I use a lot of directional lights to light my scene because I don't like to depend on HDRIs, so here's my light setup without any directional lights, using just a simple lighting HDRI.

After this, I added my fill lights:

Then I added a few more key lights to just brighten everything up:

Here are my lights.
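The refractive index behavior described above follows directly from Snell's law: at RI = 1, light passes through unbent, and as the RI rises, rays bend more sharply, which the renderer resolves as magnification and distortion. This is a minimal optics sketch of that relationship, not Marmoset Toolbag's actual implementation:

```python
import math

def refraction_angle(incidence_deg, ior):
    """Snell's law for a ray entering from air (n1 = 1):
    sin(theta1) = ior * sin(theta2)."""
    sin_t2 = math.sin(math.radians(incidence_deg)) / ior
    return math.degrees(math.asin(sin_t2))

# A 30-degree incoming ray: IOR 1.0 leaves it unbent (no magnification);
# higher IOR values bend it progressively more.
for ior in (1.0, 1.33, 1.5, 2.0):
    bend = 30.0 - refraction_angle(30.0, ior)
    print(f"IOR {ior}: ray bent by {bend:.1f} degrees")
```

The same monotonic trend is why nudging the Refractive Index slider up gives "higher and higher magnification, along with more distortion."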
Conclusion

A part of the appeal of creating weapons is to stand out, and I think that's the most important factor if you want to get more reach. Most of the weapons have already been made over the past 30 years of first-person shooter history. The trick is to either make the best version yet or at least add some elements of your own that no one has thought of before. I added the sticker and markers as part of that, not to mention the environment renders compared to renders on a flat white or black background.

One of the biggest challenges is making your work stand out. For example, when someone searches for an M4 on ArtStation, a thousand thumbnails will appear, but the one from KEAL will always stand out to both beginners and veterans alike.

Ayush Banik, Hard Surface Artist

Interview conducted by Gloria Levine
Major Update Released For Stack-O-Bot Sample In Unreal Engine 5.6
Stack-O-Bot, originally released in 2022, has been completely rebuilt for Unreal Engine 5.6. This update introduces a range of new gameplay features, including dynamic cameras and intelligent NPCs powered by StateTree. It also expands the project with advanced systems like Procedural Content Generation, Chaos Physics and Destruction, and more.

This free sample project is perfect for newcomers to Unreal Engine as well as seasoned developers eager to explore the latest features. Built entirely with Blueprints, Stack-O-Bot is fully customizable without writing a single line of code, and developer comment bubbles throughout the project provide clear insights into how everything works. At the heart of the sample is a fully rigged third-person character capable of moving, jumping, and interacting with the environment.

Stack-O-Bot empowers you to build intelligently with minimal effort: it uses Level Instances, Hierarchical Level of Detail, and smart UV-free materials, and integrates a variety of physics-based features, including telekinetic physics grab, destructible environments, and AI-driven NPCs designed for interaction. The sample also demonstrates many different ways to keep your players engaged, and it should run smoothly across everything from current-gen consoles to mobile devices.

Epic Games is inviting everyone to join the Mega Stack-O-Jam, where you can work on an Unreal Engine project using the new Stack-O-Bot, learn from the experts behind it, and compete for prizes. Hurry and register by clicking this link.

Download Stack-O-Bot 2.0 here and join our 80 Level Talent platform and our Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
This Blood Skeleton Explodes in Impressive Attack VFX
One of the reasons it's fun to play as a magic user is the creative spells they have, like the one VFX artist Gabriel Domingues made for a challenge hosted by MadVFX, an educational platform focused on real-time VFX training.

The topic was explosions, and he truly delivered, presenting a skeleton conjured from blood and detonating next to the enemy.

"Aimed for practicing blood shaders and some liquid feeling emitters, as well as using decals. Ended up creating these sort of living skeleton walking bomb," Domingues said.

This is not his first project for MadVFX. For the Square Challenge, he created a cool Rubik's Cube projectile effect, also in Unreal Engine. You can enjoy Domingues's other works on LinkedIn and ArtStation.

Join our 80 Level Talent platform and our Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
Vibrant Painterly-Style Magical Sphere Created with Blender
Take a moment to admire this vibrant setup created by Digital Artist and Software Engineer David Lettier. What makes this work stand out is its stunning painterly graphic style with violet and cyan reflections, combined with dynamically changing animated frames. To create this fascinating prop, the artist used Blender.

David Lettier's portfolio features a lot of appealing hand-painted-style works, such as this kitchen interior, a 3D lamp, Christmas decorations, and more:

Also, check out the amazing works of Yuasa Yuu that look like paintings:

Follow David Lettier on X/Twitter and don't forget to join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.