• Have you ever thought about how children see places differently from us?

    In a wonderful article, we talk about how children experience space in their own way, full of feeling and curiosity. Architecture isn't just walls and a roof; it's a chance to create environments that encourage learning and discovery. When we watch children playing in the sunlight or discovering new corners of a room, we begin to understand how every small detail affects their sense of safety and belonging. Can you think of anything else you see this way?

    I wanted to share this article with you because it really opens up new ways of thinking about how we can design spaces suited to young children.

    https://www.archdaily.com/1033238/environments-of-curiosity-designing-for-children-teaching-and-imagination

    #Education #Architecture #Childhood #Space #Curiosity
    www.archdaily.com
    Children encounter space differently from adults. For them, the world is not yet rationalized into function and circulation but is experienced through emotion and curiosity. Where adults may navigate rooms through habit, children inhabit the
  • Take a Look at This Impressive Recreation of Kowloon Walled City in Minecraft

3D creator Sluda Builds unveiled this impressive recreation of the real-life Kowloon Walled City in Hong Kong, made within Minecraft. The artist recreated the dense urban environments of the city using the game's blocks, trying to capture the gritty aesthetics of the dangerous, overcrowded settlement. In a time-lapse video, Sluda Builds showcased the entire creation process, explaining each step, including 3D modeling, topography, the making of buildings, floors and stairs, facades, rooftops, surroundings, and other details. Have a closer look:

Sluda Builds is an architect by profession, and has carried that expertise into the digital world by creating amazing projects in Minecraft. Also, check out another Minecraft-inspired project, with voxel blocks mapped onto a spherical planet, by 3D artist Bowerbyte.

Don't forget to join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
    80.lv
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
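    Purely as an illustration (not from NVIDIA's announcement), local experimentation on a Jetson-class device could look something like the following sketch, assuming a standard PyTorch plus Hugging Face transformers install; the model name and prompt are arbitrary placeholders:

```python
# Illustrative only: a minimal local-inference sketch using Hugging Face
# transformers. Assumes PyTorch, transformers, and accelerate are installed
# on the device; the model and prompt are placeholders, not NVIDIA guidance.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small open model, example only
    device_map="auto",                   # place weights on the GPU if available
)

prompt = "A warehouse robot sees a pallet blocking its path. List three safe next actions:"
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```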
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc., e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
    blogs.nvidia.com
  • Fur Grooming Techniques For Realistic Stitch In Blender

Introduction
Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.
He asked me a simple question: "Well, what do you actually enjoy doing?"
I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."
Then he hit me with something that really shifted my whole perspective.
"Oleh, do you play games on your PlayStation?"
I said, "Of course."
He replied, "Then why not take the time you spend playing and use it to learn how to make games?"
That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.
3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.
After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.
The Stitch Project
I've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.
Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.
At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.
I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow.
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.
First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.
Modeling
I had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.
But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.
So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.
The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.
I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.
Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:
- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette
Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:
- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity
While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open.
While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:
- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design
This presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.
At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless.
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.
In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.
Topology & UVs
Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.
So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.
With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.
However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail.
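The ear swap described above was done by hand in Blender's edit mode; as a rough, hypothetical bpy sketch of the same idea (object and vertex-group names are invented for illustration, and operator context handling is simplified):

```python
import bpy

# Assumed scene setup (hypothetical names):
#   "Stitch_rightEar" - symmetrical body mirrored from the right ear (target)
#   "Stitch_leftEar"  - symmetrical body mirrored from the left ear (donor)
# The donor's left-ear vertices are assumed to be in a vertex group "ear_L".
donor = bpy.data.objects["Stitch_leftEar"]
target = bpy.data.objects["Stitch_rightEar"]

# Make the donor active and split the "ear_L" group into its own object.
bpy.ops.object.select_all(action='DESELECT')
donor.select_set(True)
bpy.context.view_layer.objects.active = donor
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
donor.vertex_groups.active_index = donor.vertex_groups["ear_L"].index
bpy.ops.object.vertex_group_select()      # select vertices of the active group
bpy.ops.mesh.separate(type='SELECTED')    # detach them as a new object
bpy.ops.object.mode_set(mode='OBJECT')

# The separated ear is now a selected object alongside the donor.
ear_piece = [o for o in bpy.context.selected_objects if o is not donor][0]

# After removing the mirrored left ear on the target the same way (omitted),
# join the donor's ear into the target body.
bpy.ops.object.select_all(action='DESELECT')
ear_piece.select_set(True)
target.select_set(True)
bpy.context.view_layer.objects.active = target
bpy.ops.object.join()
```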
And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands. When it came to UV mapping, I divided Stitch into two UDIM tiles:
- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose.
Since the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.
As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.
As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.
Texturing
When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:
- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front and a darker tone on the back and nape
- The nose and ears, which demanded separate focus
At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.
The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:
- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail: in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.
All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I add an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.
That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.
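The artist applied that AO overlay in Substance 3D Painter; purely for illustration, an approximate Blender shader-node equivalent of a Multiply layer at roughly 35% opacity could be sketched like this (the material, node, and image names are hypothetical):

```python
import bpy

# Hypothetical material and baked AO image; adjust to the actual scene.
mat = bpy.data.materials["StitchBody"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links

base_color = nodes["Base Color Texture"]        # assumed existing albedo Image Texture node
ao_tex = nodes.new("ShaderNodeTexImage")        # baked AO map
ao_tex.image = bpy.data.images["stitch_ao.png"]

# Multiply the AO over the albedo at ~35% strength, mimicking the
# "Multiply, ~35% opacity" layer described above.
mix = nodes.new("ShaderNodeMixRGB")             # legacy MixRGB node, still creatable via Python
mix.blend_type = 'MULTIPLY'
mix.inputs["Fac"].default_value = 0.35

links.new(base_color.outputs["Color"], mix.inputs["Color1"])
links.new(ao_tex.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])
```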
Fur
And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place. Fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.
To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.
At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts. I duplicated the main Particle System and created individual hair systems for each area where needed.
In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.
To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.
The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.
The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.
During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.
The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.
As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.
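As a hedged sketch of that per-region setup, not the artist's actual settings, one hair particle system per body section can be created from Python, limited by a weight-painted vertex group and given its own guide segment count and clump/roughness baseline (all names and values below are assumptions):

```python
import bpy

body = bpy.data.objects["Stitch"]  # hypothetical object name

# One hair system per region; each vertex group is assumed to already exist
# on the mesh and to have been weight painted.
regions = {
    "fur_head":  ("weights_head", 3),
    "fur_chest": ("weights_chest", 5),   # more guide segments for longer, clumpier fur
    "fur_back":  ("weights_back", 3),
}

for name, (vgroup, segments) in regions.items():
    mod = body.modifiers.new(name=name, type='PARTICLE_SYSTEM')
    psys = mod.particle_system
    psys.vertex_group_density = vgroup        # Weight Paint decides where fur grows

    settings = psys.settings
    settings.type = 'HAIR'
    settings.hair_step = segments             # guide segments (2 for blocking, 3-5 for detail)
    settings.count = 1000                     # guide count, tuned per region
    settings.child_type = 'INTERPOLATED'      # children interpolated between guides
    settings.clump_factor = 0.6               # base clumping for this region
    settings.roughness_2 = 0.05               # subtle random roughness on strands
```

In the actual project, painted maps then modulate clump and roughness across each region, as described above, rather than a single global value per system.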
The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.
Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.
Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.
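A minimal, purely illustrative version of such a strand shader can be built with the Hair Info node, using Intercept for the root-to-tip gradient and Random for per-strand variation (colors and node layout here are assumptions, not the artist's exact graph):

```python
import bpy

mat = bpy.data.materials.new("StitchFur")   # hypothetical fur material
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

out = nodes.new("ShaderNodeOutputMaterial")
bsdf = nodes.new("ShaderNodeBsdfHairPrincipled")    # Principled Hair BSDF

# Root-to-tip gradient: Intercept is 0 at the root and 1 at the tip.
hair_info = nodes.new("ShaderNodeHairInfo")
ramp = nodes.new("ShaderNodeValToRGB")               # ColorRamp
ramp.color_ramp.elements[0].color = (0.02, 0.04, 0.12, 1.0)  # darker roots
ramp.color_ramp.elements[1].color = (0.10, 0.22, 0.55, 1.0)  # brighter tips
links.new(hair_info.outputs["Intercept"], ramp.inputs["Fac"])

# Per-strand variation: remap Random (0..1) to a subtle 0.9..1.1 value jitter.
remap = nodes.new("ShaderNodeMapRange")
remap.inputs["To Min"].default_value = 0.9
remap.inputs["To Max"].default_value = 1.1
links.new(hair_info.outputs["Random"], remap.inputs["Value"])

hsv = nodes.new("ShaderNodeHueSaturation")
links.new(ramp.outputs["Color"], hsv.inputs["Color"])
links.new(remap.outputs["Result"], hsv.inputs["Value"])

links.new(hsv.outputs["Color"], bsdf.inputs["Color"])
links.new(bsdf.outputs["BSDF"], out.inputs["Surface"])
```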
Rigging, Posing & Scene
Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:
- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control
Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation. For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.
Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.
Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.
For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.
In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.
At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.
Rendering, Lighting & Post-Processing
When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene: it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.
For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.
Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.
I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.
Final Thoughts
This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.
This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film.
It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.
Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts. I duplicated the main Particle System and created individual hair systems for each area where needed.In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach.Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes.I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.As part of the detailing stage, I also increased the number of segments in the Hair Guides.While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. 
This gave me much more control over fur shape and flow.The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done.I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result, this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.Rigging, Posing & SceneOnce I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.I divided the rigging process into three main parts:Body rig, for posing and positioning the characterFacial rig, for expressions and emotionsEar rig, for dynamic ear controlRigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
    Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers and do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective.

"Oleh, do you play games on your PlayStation?"

I said, "Of course."

He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses; and the word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow.
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: first, to model everything completely from scratch, starting with a sphere; second, to reuse my old Stitch model and upgrade it.

But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:

- The body: arms, legs, head, and ears
- The nose, eyes, and mouth cavity

While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).

While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless.
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (for the claws, I used overlapping UVs to preserve texel density for the other parts).

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.
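For the ear-swap step described above, the detach-and-attach part can be done with Blender's standard separate and join operators. This is only a minimal sketch: the object names are hypothetical, and it assumes the left-ear faces are already selected (in practice this selection can come from a vertex group, a UV island, or be made by hand in the viewport).

```python
import bpy

# Hypothetical object names: a body mirrored with the right-ear scar, and a second
# export from which only the left ear (with its own scar) is taken.
body = bpy.data.objects["Stitch_RightEarVersion"]
donor = bpy.data.objects["Stitch_LeftEarVersion"]

# Separate the left ear from the donor mesh. The ear faces are assumed to be
# selected already before running this.
objects_before = set(bpy.data.objects)
bpy.context.view_layer.objects.active = donor
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')
left_ear = (set(bpy.data.objects) - objects_before).pop()  # the newly created ear object

# Join the donor ear onto the main body so it becomes a single mesh again
# (the body's own left ear is assumed to have been removed beforehand).
bpy.ops.object.select_all(action='DESELECT')
left_ear.select_set(True)
body.select_set(True)
bpy.context.view_layer.objects.active = body  # the active object is the join target
bpy.ops.object.join()
```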
As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, which demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So, I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.

Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main Particle System and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions.
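To make that per-section setup more concrete, here is a minimal Blender Python sketch of one such hair particle system, with emission limited to a weight-painted vertex group. The object name, vertex-group names, and numeric values are illustrative assumptions, not the settings from the actual groom.

```python
import bpy

# Hypothetical name; the actual project used its own naming and 25 such systems.
body = bpy.data.objects["Stitch_Body"]

# Add one hair particle system for a single body section (e.g. the head).
mod = body.modifiers.new(name="Fur_Head", type='PARTICLE_SYSTEM')
psys = mod.particle_system
settings = psys.settings

settings.type = 'HAIR'
settings.count = 2000                  # parent guide count for this section
settings.hair_length = 0.03            # tuned per section
settings.hair_step = 2                 # guide segments (raised to 3-5 during detailing)
settings.child_type = 'INTERPOLATED'   # children interpolated between groomed guides
settings.rendered_child_count = 100
settings.clump_factor = 0.35           # base clumping, later driven by a texture
settings.roughness_2 = 0.05            # random roughness, also texture-driven later

# Restrict emission and length to the weight-painted region so systems don't overlap.
psys.vertex_group_density = "fur_head"
psys.vertex_group_length = "fur_head_length"
```

In the real setup, each of the 25 sections would get its own system like this, with clump and roughness driven by painted texture maps rather than the flat values shown here.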
Those extra segments gave me much more control over fur shape and flow. The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics (IK). This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses. Stitch is so expressive and full of personality that I wanted to try hundreds of them, but I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment.
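A small sketch of that per-pose organization, combined with the three-point light rig and low-strength HDRI described in the rendering section below. All names, positions, energies, and the HDRI path are placeholders for illustration, not the project's actual setup.

```python
import bpy
import math

# Placeholder names for illustration.
pose_name = "Pose_NoseLick"
character_objects = [bpy.data.objects["Stitch_Body"], bpy.data.objects["Stitch_Rig"]]

# One scene per pose keeps lighting, camera, and background independent.
scene = bpy.data.scenes.new(name=pose_name)
col = bpy.data.collections.new(pose_name + "_Setup")
scene.collection.children.link(col)

# Link (not copy) the character so groom or texture fixes propagate to every pose scene.
for obj in character_objects:
    col.objects.link(obj)

# Classic three-point rig: Key, Fill, and Rim lights.
def add_area_light(name, energy, location, rotation):
    light_data = bpy.data.lights.new(name=name, type='AREA')
    light_data.energy = energy
    light_obj = bpy.data.objects.new(name, light_data)
    light_obj.location = location
    light_obj.rotation_euler = rotation
    col.objects.link(light_obj)
    return light_obj

add_area_light("Key",  800, (2.0, -2.0, 2.5), (math.radians(55), 0, math.radians(45)))
add_area_light("Fill", 250, (-2.5, -1.5, 1.5), (math.radians(70), 0, math.radians(-60)))
add_area_light("Rim",  400, (0.0, 3.0, 2.0), (math.radians(-60), 0, math.radians(180)))

# Low-intensity HDRI for ambient fill, roughly the 0.3 strength mentioned in the article.
world = bpy.data.worlds.new(pose_name + "_World")
world.use_nodes = True
scene.world = world
env = world.node_tree.nodes.new('ShaderNodeTexEnvironment')
# env.image = bpy.data.images.load("//hdri/studio.hdr")  # placeholder path
background = world.node_tree.nodes["Background"]
world.node_tree.links.new(env.outputs["Color"], background.inputs["Color"])
background.inputs["Strength"].default_value = 0.3
```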
Keeping each pose in its own scene made it easy to return to it later, to adjust the lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.

Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!).

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection.
But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine
  • Everyone, there's a huge fire in the Chréa forests in Blida!

    Watch the new video on the Echorouk News channel, where we can follow the firefighting operations that are still ongoing in an attempt to save the forests from the flames. This matters a great deal because it affects the environment we live in, and we all know how much forests contribute to our health.

    Honestly, I loved seeing volunteers and civil protection teams working as one team under these conditions. Remember, every one of us can help protect the environment by caring for nature and preserving it.

    We need to think about the future and encourage one another to raise awareness of how important forests are.

    Watch the video here:
    https://www.youtube.com/watch?v=b7ETuTC-2JQ

    #Wildfires #Forests #EchoroukNews #Environment #NatureProtection
  • Create a Manga Style Animation with Grease Pencil (BLAME! Edition)

    Dive into creating Manga-style NPR rendering with this super extensive and free tutorial series by Harry Helps.
    Hello Everyone!
    In this comprehensive tutorial, we'll create a subtle looping animation of a manga style environment in Blender! We'll be using a panel from the manga "BLAME!" as our reference!
    We'll start with a ready-made starter file and design custom materials to give it a manga aesthetic. Then, using the grease pencil tool, we'll draw hand-crafted elements directly into the scene. To enhance the manga aesthetic, we'll apply compositing overlays that give it the feel of a manga page. In the final step, we'll animate the camera and export the finished piece as both a video and an animated GIF!
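For the final export step, the video half can be rendered straight from Blender's output settings, while the animated GIF normally needs an external conversion pass afterwards (Blender has no native GIF export). Below is a rough sketch with a placeholder frame range and output path, not necessarily the exact settings used in the tutorial.

```python
import bpy

scene = bpy.context.scene

# Placeholder frame range and output path for the short loop.
scene.frame_start = 1
scene.frame_end = 120
scene.render.filepath = "//renders/manga_loop"

# Encode directly to an MP4 container instead of writing an image sequence.
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.ffmpeg.codec = 'H264'

# Render the whole animation with the active camera.
bpy.ops.render.render(animation=True)
```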
    I hope you enjoy the tutorial!
    See the full 17-part playlist here:
    www.blendernation.com
  • Tencent claims its new AI tool will reduce art production timeframes from days to minutes

    www.gamedeveloper.com
    Chris Kerr, Senior Editor, News, GameDeveloper.com | August 22, 2025 | 2 Min Read | Image via Tencent

    Tencent debuted a new AI creation tool called VISVISE at Gamescom 2025 that it claims will accelerate video game art production by automating repetitive tasks. The Chinese conglomerate billed VISVISE as an end-to-end AI game creation suite that will "dramatically cut down game art design time from days or even months, down to minutes."

    "With capabilities spanning animation and modeling to the creation of intelligent NPCs, or managing digital assets, VISVISE provides game developers and designers with a complete AIGC-powered toolset to accelerate workflows," it added.

    Tencent said the tool will specifically allow developers to rapidly skin and animate characters in a matter of minutes, a process it claims usually takes up to three-and-a-half days. In addition, the company claimed skeletal animations can be produced in just 10 seconds with VISVISE, a process Tencent said usually takes between three and seven days.

    "This results in an eightfold improvement in character skinning throughput and transforms animation into a fully automated process of 'keyframe generation + intelligent in-betweening,'" it continued.

    Tencent claims it doesn't want VISVISE to replace 'human ingenuity'

    Tencent Games VISVISE expert Zijiao Zeng delivered a keynote at Devcom (soon to be rebranded as Gamescom Dev) and shared more details on VISVISE's two core technologies: VISVISE GoSkinning and VISVISE MotionBlink.

    GoSkinning works by leveraging a universal AI model to automatically adapt to different skeletal structures. Tencent told Game Developer the tool is based on an AI model developed in-house. The company explained GoSkinning achieves around 85 percent automation and uses a two-step process of bone chain prediction and weight refinement, while its proprietary 'Skirt AI' addresses "complex garment deformation issues."

    MotionBlink, meanwhile, uses a self-regressive diffusion architecture to rapidly generate keyframes, combined with a pre-trained CVAE and contrastive learning, to produce smooth motion transitions that Tencent claims will rival optical motion capture and eliminate common issues such as foot sliding and jitter.

    VISVISE tools have already been integrated into the development of over 90 titles, including PUBG Mobile.

    Addressing the widespread concerns surrounding AI technology and automation, Tencent said it doesn't envision a future in which VISVISE replaces workers; instead, it explained that it views the technology as a "supporting tool."

    "VISVISE is designed to automate repetitive tasks with human oversight, enabling creative teams to focus on core artistic and design elements that define a great game," a Tencent spokesperson told Game Developer. "Human ingenuity, intuition and connection continue to be pillars of our industry's success, and the keys to developing engaging, emotionally resonant games. With AI, we hope to accelerate creativity, building a collaborative environment where we can continue to create, play and sell quality games."

    Game Developer attended Gamescom 2025 via the Gamescom Media Ambassador Program, which covered flights and accommodation.
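    The article doesn't explain how MotionBlink's "keyframe generation + intelligent in-betweening" works internally, so the snippet below is a purely illustrative sketch of the general idea rather than Tencent's method: it fills the frames between two keyframe poses with plain linear interpolation. The joint names, pose format, and frame count are assumptions for the example.

    import numpy as np

    def inbetween(key_a, key_b, num_frames):
        # Produce `num_frames` interpolated poses strictly between two keyframes.
        # Poses are dicts of joint name -> Euler rotation in degrees (an assumed format).
        frames = []
        for t in np.linspace(0.0, 1.0, num_frames + 2)[1:-1]:
            pose = {joint: (1.0 - t) * np.asarray(key_a[joint]) + t * np.asarray(key_b[joint])
                    for joint in key_a}
            frames.append(pose)
        return frames

    # Two hand-picked key poses for a single, hypothetical arm joint.
    key_a = {"upper_arm": [0.0, 0.0, 10.0]}
    key_b = {"upper_arm": [0.0, 0.0, 80.0]}
    for pose in inbetween(key_a, key_b, num_frames=3):
        print(pose)

    A system like the one described would replace the plain lerp with a learned model that respects contacts and momentum, which is where the claims about eliminating foot sliding and jitter come in.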
  • Metal Gear Solid Delta: Snake Eater developer interview

    Metal Gear Solid Delta: Snake Eater, launching August 28 on PlayStation 5, is a remake of the 2004 PlayStation 2 classic, Metal Gear Solid 3: Snake Eater. I had a conversation with the developers during a Tokyo press event to discuss the upcoming remake and its development process. 


    Faithfully replicating the thrill and impact of the original

    PlayStation Blog: How important was it to your team to create a game that stayed true to the original?

    Noriaki Okamura (Metal Gear Series Producer): We began this project with the intention of bringing a 20-year-old game to the present day. While we updated the graphics and certain game mechanics to ensure today’s players could fully enjoy the experience, we wanted to stay true to the original as much as possible.

    What challenges did your team face during development, and what specific adjustments were implemented?

    Okamura: I had no intention of altering the original story, so I insisted that we could just update the game graphics. Korekado disagreed and warned me that that approach would not work, but I initially had the team re-create the game with only new character models. Although the graphics improved, they appeared doll-like and unrealistic, so I finally realized that my plan was inadequate.

    Yuji Korekado (Creative Producer): We began by reworking the animation and game mechanics. We implemented animation programming that didn’t exist two decades ago to make the game more realistic, but that also meant we couldn’t reproduce the original game mechanics. Metal Gear is a stealth game, so it’s crucial for players to be able to make precise movements. We put in a lot of effort to replicate the same feel as the original, while maintaining realism.

    Are there any areas of the game that you wanted to recreate as faithfully as possible?

    Korekado: We made sure that the jungle looked as realistic as possible. We devoted a lot of time modeling every fine detail like leaves, grass, and moss covering the ground. Since the perspective shifts along with the character’s movements, players will get a closer look at the ground when they’re crouching or crawling. To make sure the environment was immersive from every angle, we carefully crafted every element with great precision.


    Have any enhancements been made compared to the original PS2 version?

    Korekado: We enhanced the visuals to be more intuitive. Thanks to increased memory and much faster speeds, the user experience has improved significantly, including faster transitions to the Survival Viewer or having a quick menu to swap uniforms. On top of that, the audio improvements are remarkable. Sound absorption rates vary depending on the materials of the walls and floors, which allows players to detect enemies behind walls or nearby animals intuitively. In areas like caves and long corridors, unique echo parameters help distinguish different environments, which I think is a major advancement for stealth gameplay.

    Extra content for players to enjoy diverse gameplay

    The remake features Fox Hunt, a new online multiplayer mode. Why did you include this in the game instead of Metal Gear Online (MGO)?

    Yu Sahara (Fox Hunt Director): The remake features significantly enhanced graphics, so we explored various online modes that aligned with these improvements. We decided to focus on stealth, sneaking, and survival, since those are also the key pillars of the main game. We landed on a concept based on hide-and-seek, which is classic Metal Gear, while also being reminiscent of the stealth missions featured in the earlier MGO.

    Can players earn rewards by playing the Fox Hunt mode?

    Sahara: While there are no items that can be transferred to the main game, players can unlock rewards like new camouflage options by playing Fox Hunt multiple times.

    Were there any challenges or specific areas of focus while remaking Snake vs Monkey mode?

    Taiga Ishigami (Planner): Our main goal was to make Pipo Monkey even more charming, cute, and entertaining. We developed new character actions, including the “Gotcha!” motion, and each animation and sound effect was carefully reviewed to ensure it captured Pipo Monkey’s personality. If anything felt off, we made changes right away.

    I heard the new Snake vs Monkey mode features an Astro Bot collab.

    Ishigami: Yes, a couple of bots from the Astro Bot game will make an appearance, and you can capture them just like the Pipo Monkeys. Capturing these bots isn’t required to finish the levels, but you’ll receive unique rewards if you do. Depending on the level, either a standard bot or a Pipo Monkey bot will be hidden away, so be sure to keep an eye out for them.


    Do you have any final words for new players as well as longtime fans of the original game?

    Okamura: I rarely cry when playing games, but I remember bawling my eyes out while playing the original Metal Gear Solid 3. The development of Metal Gear Solid Delta: Snake Eater was driven by our goal to faithfully capture the impact and thrill that players felt two decades ago. Metal Gear Solid 3 is the ultimate example of storytelling in games, and having dreamed of making a game like this, I now feel a sense of fulfillment. I hope everyone enjoys the story as much as I do.

    Metal Gear Solid Delta: Snake Eater arrives on PS5 on August 28. 

    Read a new hands-on report with the game.
    blog.playstation.com
  • Metal Gear Solid Delta: Snake Eater hands-on report

    It’s been over two decades since Metal Gear Solid 3: Snake Eater was first released on PlayStation 2. The game was praised for its story, characters, and possibly one of the greatest themes in video game history. After some brumation, it sheds its skin and emerges as Metal Gear Solid Delta: Snake Eater on August 28, aiming to recapture the spirit that made the original a beloved classic. After about eight hours of playing the game on PS5 Pro, I’m thrilled to share how it captures and modernizes the original’s spirit, and then some.


    Delta is a true from-the-ground-up remake that is extremely faithful to the original work in most aspects of the game, but what was immediately apparent was the level of detail the updated visuals and textures add to the experience. 

    A new level of visual fidelity


    This updated version of Snake Eater is a visual feast on PS5 Pro, especially in the lush details. For example, rain droplets trickle realistically down a poncho, and Snake’s camouflage and uniforms become dirty with mud or forest debris. This filth even carries over into cutscenes, adding an appreciated level of realism.

    The Metal Gear series showcases a range of grizzled warriors, many with scars that tell a tale. If you’re familiar with Snake Eater, you understand that scars hold a lot of importance throughout, and the devs took great care to make them stand out. One of the most notable examples is Colonel Volgin’s harrowingly scarred face. The believable tissue and its deformation when he speaks create a tragically beautiful portrait. 

    Speaking of portraits, a new photo mode has been added with all the latest bells and whistles. Like most Metal Gear games, Delta definitely has its fair share of silly moments, and you can capture them all. With plenty of filters and settings, create a masterpiece on the mountainside, or dress up in a crocodile head and let antics ensue. Photo Mode is the perfect way to capture all the little details hiding within.

    Game controls – New Style vs. Legacy

    A new control scheme has been introduced to bring Snake Eater into the modern gaming era, dubbed New Style. Before starting a new game, players can choose between New Style and Legacy, which retains the control mapping of the original PS2 release. You can switch between styles, but be warned: doing so reloads the level/map and takes you back to the beginning of the section.

    New Style is geared for people who have never played the game before, or who might prefer a more modern playstyle. The control option provides a free-moving camera that lets you view your environment in 360 degrees, making it easier to avoid getting lost or having enemies catch you unprepared.

    Combat and shooting feel reminiscent of Metal Gear Solid V, with a third-person over-the-shoulder camera. By default, aim assist is turned on, but it can be toggled off. Even in New Style, you can still switch to a classic first-person view and move around fully, as if playing an FPS title. First-person view is especially valuable when lining up the perfect shot through a chainlink fence, which I couldn’t pull off in third-person.

    The biggest saving grace in the updated control scheme is the remapped directional buttons. Holding left brings up your non-combat inventory, and holding right brings up your currently equipped weapons. Up brings up the quick-change camouflage menu, while down brings up your radio, a hugely welcome shortcut. No more digging through menus to change outfits based on your environment.

    Snake sneaks through a range of environments in Snake Eater, each suited to different camouflage options. The quick-change menu conveniently shows the optimal face and body combo from your collection based on the current environment. In one instance, I managed to seamlessly transition from a green texture to a stone grey-black getup, then to a rust-colored camouflage, all along the same crawl route. This new quality-of-life option keeps the action flowing.
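    The quick-change recommendation described above implies some kind of per-environment scoring of the camouflage you own. Purely as an illustrative toy (nothing here reflects Konami's actual system), a sketch of that idea might look like this, with made-up camo names, terrain tags, and scores.

    # Toy model of a "best camo for this terrain" recommendation; all values are invented.
    CAMO_SCORES = {
        "leaf":     {"jungle": 90, "swamp": 60, "rock": 20},
        "splitter": {"jungle": 55, "swamp": 50, "rock": 45},
        "granite":  {"jungle": 15, "swamp": 25, "rock": 85},
    }

    def recommend_camo(owned, terrain):
        # Return the owned camouflage with the highest score for the current terrain.
        return max(owned, key=lambda camo: CAMO_SCORES[camo].get(terrain, 0))

    print(recommend_camo(["leaf", "granite"], terrain="rock"))  # -> granite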

    Another great accessibility feature is the ability to fine-tune game hints, from always-on to none at all. I had it set to show helpful hints when they were relevant, like swimming controls appearing by a body of water and hanging controls on the cliffside. This is particularly helpful in rare gameplay situations, as it kept me from panicking in high-stress moments.

    What a thrill

    The voice cast still delivers, and The Cobra Unit is just as compelling, with big moments still having the right impact. The ladder scene took me right back to playing the original on my grandmother’s floor all those years ago. 

    Paradoxes, easter eggs, and all the details I’d expect are still in place. I didn’t encounter any moments that felt off or deviated too far in any way from the script. The opening theme and intro movie have been remixed, and while it will come down to personal taste, every note still hits for me. 

    Metal Gear Solid Delta: Snake Eater launches on August 28 for PS5, and is a day to mark on your calendar whether you’re a longtime fan or series newcomer interested in discovering the celebrated origins of the storyline.  

    Metal Gear Solid Delta: Snake Eater developers discuss the game at length in a new interview.
    blog.playstation.com
  • Winners of the Showcase Competition

    1st place — Feixiang Long: This work is a secondary design inspired by Toothwu's original artwork, infused with numerous personal touches.

    2nd place — BaiTong LI: Attic Studio is an interior Unreal Engine environment, inspired by the Resident Evil series.

    3rd place — Ouanes Bilal: The main character of Grendizer: The Feast of the Wolves.

    Portfolio review with José Vega

    Jose Vega is the founder and Senior Concept Artist of Worldbuilders Workshop, working mainly in the video games and film industry. Some of his clients include Hirez Studios, Replicas, Shy the Sun, Wizards of the Coast, Cryptozoic Entertainment, Blur Entertainment, and others. The exact date of the Live Event will be announced soon. Stay tuned!

    A huge congratulations to our winners and heartfelt thanks to all the amazing creators who took part! Your talent keeps our community inspired and energized. Want to explore more outstanding projects? Join us on Discord and stay tuned for the next 80 Level challenge!
    80.lv
  • Joseph Jegede’s Journey into Environment Art & Approach to the Emperia x 80 Level Contest

    Introduction

    Hello, I am Joseph Jegede. I was born in Nigeria and lived and studied in London, which is also where I started my career as a games developer. Before my game dev career, I was making websites and doing graphic design as a hobby but felt an urge to make static images animated and respond to user input. I studied Computer Science at London Metropolitan University for my bachelor’s degree.

    I worked at Tivola Publishing GmbH, where we developed:

    - Wildshade: Unicorn Champions (PlayStation, Xbox, Nintendo Switch) – Console Trailer: YouTube
    - Wildshade Fantasy Horse Races (iOS, Android) – iOS: App Store, Android: Google Play

    This project was initially developed for mobile platforms and was later ported to consoles, which recently launched. I also worked on a personal mobile game project:

    - Shooty Gun – Release Date: May 17, 2024 – Play

    Becoming an Environment Artist

    With the release of Unreal Engine 5, the ease of creating and sculpting terrain, then using blueprints to quickly add responsive grass, dirt, and rock materials to my levels, made environment art very enticing and accessible for me. Being a programmer, I often felt the urge to explore more aspects of game development, since the barrier of entry has been completely shattered.

    I wouldn’t consider myself a full-blown artist just yet. I first learned Blender to build some basic 3D models. We can call it “programmer art” – just enough to get a prototype playable.

    The main challenges were that most 3D software required subscriptions, which wasn't ideal for someone just learning without commercial intent. Free trials helped at first, but I eventually ran out of emails to renew them. Blender was difficult to grasp initially, but I got through it with the help of countless YouTube tutorials. Whenever I wanted to build a model for a prototype, I would find a tutorial making something similar and follow along.

    On YouTube, I watched and subscribed to Stylized Station. I also browsed ArtStation regularly for references and inspiration for the types of levels I wanted to build.

    Environment art was a natural next step in my game dev journey. While I could program gameplay and other systems, I lacked the ability to build engaging levels to make my games feel polished. In the kinds of games I want to create, players will spend most of their time exploring environments. They need to look good and contain landmarks that resonate with the player.

    My main sources of inspiration are games I’ve played. Sometimes I want to recreate the worlds I've explored. I often return to ArtStation for inspiration and references.

    Deep Dive Into Art-To-Experience Contest's Submission

    The project I submitted was originally made for the 80 Level x Emperia contest. Most of the assets were provided as part of the contest.

    The main character was created in Blender, and the enemy model was a variant of the main character with some minor changes and costume modifications. Animations were sourced from Mixamo and imported into Unreal Engine 5. Texturing and painting were done in Adobe Substance 3D Painter, and materials were created in UE5 from exported textures.

    Before creating the scene in UE5, I gathered references from ArtStation and Google Images. These were used to sculpt a terrain heightmap. Once the level’s starting point and boss area were defined, I added bamboo trees and planned walkable paths around the map.

    I created models in Blender and exported them to Substance 3D Painter. Using the Auto UV Unwrap tool, I prepared the models for texturing. Once painted, I exported the textures and applied them to the models in UE5. This workflow was smooth and efficient.

    In UE5, I converted any assets for level placement into foliage types. This allowed for both random distribution and precise placement using the foliage painter tool, which sped up design significantly.

    UE5 lighting looked great out of the box. I adjusted the directional light, fog, and shadows to craft a forest atmosphere using the built-in day/night system.

    I was able to use Emperia's Creator Tools plug-in to set up my scene. The great thing about the tutorial is that it's interactive - as I complete the steps in the UE5 editor, the tutorial window updates and reassures me that I’ve completed the task correctly. This made the setup process easier and faster. Setting up panoramas was also simple - pretty much drag and drop.

    Advice For Beginners

    One major issue is the rise of AI tools that generate environment art. These tools may discourage beginners who fear they can’t compete. If people stop learning because they think AI will always outperform them, the industry may suffer a creativity drought.

    My advice to beginners:

    - Choose a game engine you’re comfortable with – Unreal Engine, Unity, etc.
    - Make your idea exist first, polish later. Use free assets from online stores to prototype.
    - Focus on creating game levels with available resources. The important part is getting your world out of your head and into a playable form.
    - Share your work with a community when you're happy with it.
    - Have fun creating your environment – if you enjoy it, others likely will too.

    Joseph Jegede, Game Developer

    Interview conducted by Theodore McKenzie
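    As a small illustration of the Blender-to-Substance 3D Painter hand-off described in the interview, the sketch below batch-exports each selected object in Blender as its own FBX file ready for texturing. The output folder, the per-object export, and the specific export flags are assumptions for the example rather than Jegede's actual setup.

    import bpy
    import os

    # Export each selected object as its own FBX, ready for texturing
    # in Substance 3D Painter. The output folder is an assumption.
    export_dir = bpy.path.abspath("//exports")
    os.makedirs(export_dir, exist_ok=True)

    selected = list(bpy.context.selected_objects)
    for obj in selected:
        # Isolate the current object so only it ends up in the FBX.
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.context.view_layer.objects.active = obj

        filepath = os.path.join(export_dir, f"{obj.name}.fbx")
        bpy.ops.export_scene.fbx(
            filepath=filepath,
            use_selection=True,   # export only the selected object
            apply_unit_scale=True,
        )

    # Restore the original selection.
    for obj in selected:
        obj.select_set(True)

    Each exported FBX can then be auto-unwrapped and painted in Substance 3D Painter, with the resulting textures brought back into UE5 as materials, matching the workflow described above.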
    80.lv
  • Blender Developers Meeting Notes: 18 August 2025

    Blender Developers Meeting Notes: 18 August 2025

    By n8n on August 19, 2025

    Blender Development

    Notes for weekly communication of ongoing projects and modules.
    This is a selection of changes that happened over the last week. For a full overview, including fixes, code-only changes, and more, visit projects.blender.org.

    - Reset various runtime data for writing files (Hans Goudey)
    - Improve RNA performance tests flexibility (Bastien Montagne)
    - Move VSync from an environment variable to an argument (Campbell Barton)
    - Recognize ACES config un-tone-mapped view (Brecht Van Lommel)
    - Allow empty names in File Output node (Omar Emara)
    - Support strings sockets (Omar Emara)
    - Allow menu sockets for pixel nodes (Omar Emara)
    - Removing Sun Beams node (Mohamed Hassan)
    - Don’t get/set PWD env var for working directory functions (Jesse Yurkovich)
    - Parallelize NURBS basis cache evaluation with O(n) complexity (Mattias Fredriksson)
    - Add cyclic curve offsets cache (Hans Goudey)
    - Do not sample direct light when ray segment is invalid (Weizhen Huang)
    - Always add world as object (Weizhen Huang)
    - Create one box for vdb mesh instead of many (Weizhen Huang)
    - Render volume by ray marching through octrees (Weizhen Huang)
    - Compute volume transmittance using telescoping (Weizhen Huang)
    - Shade volume with null scattering (Weizhen Huang)
    - Volume Scattering Probability Guiding (Weizhen Huang)
    - Use RGBE for denoised guiding buffers to reduce memory usage (Weizhen Huang)
    - Use analytic formula for homogeneous volume (Weizhen Huang)
    - Add and update volume test files (Weizhen Huang)
    - Store octree parent nodes in a stack (Weizhen Huang)
    - oneAPI: Disable L0 copy optimization for several dGPUs (Nikita Sirgienko)
    - Use deterministic linear interpolation for velocity (Weizhen Huang)
    - Use one-tap stochastic interpolation for volume (Weizhen Huang)
    - Add material name collision mode (Oxicid)
    - Shader: Add support for full template specialization (Clément Foucault)
    - Rewrite default_argument_mutation using parser (Clément Foucault)
    - Fix parser not being consistent (Clément Foucault)
    - Replace template macro implementation by copy paste (Clément Foucault)
    - Preprocess: Improve error reporting (Clément Foucault)
    - Remove Shader Draw Parameter workaround (Clément Foucault)
    - Add flag for shader debug info generation (Christoph Neuhauser)
    - Improve cyclical end cap rendering (Casey Bianco-Davis)
    - Export other curve types to SVG (Casey Bianco-Davis)
    - Edit Mode Pen Tool (Casey Bianco-Davis)
    - Support extracting Vulkan & OpenGL args even when disabled (Campbell Barton)
    - Add Apply Transforms option to obj exporter (Thomas Hope)
    - Fix recursive resync incorrectly clearing hierarchy info (Bastien Montagne)
    - Prevent matching collection items only by their index if a name and ID are provided (Bastien Montagne)
    - Avoid quadratic vertex valence complexity for corner normals (_илья __)
    - Improve performance and compression, always compress (Aras Pranckevicius)
    - Allow Preferences editor to be opened in Maximized Area (Jonas Holzman)
    - Gray out or hide asset shelf toggle if not available (Julian Eisel)
    - Center-align header modal status text (Pablo Vazquez)
    - Prevent automatic mode switching in certain scenarios (Sean Kim)
    - Remove liboverride UI dead code, improve UI messages (Bastien Montagne)
    - Prevent ‘liboverride’ ‘decorator’ button to control keyframes (Bastien Montagne)
    - Widen Preferences Window (Pablo Vazquez)
    - Theme: Move curve handle properties in common (Nika Kutsniashvili)
    - Generalized Alert and Popup Block Error Indication (Harley Acheson)
    - Tree View: Operator to delete with X key (Pratik Borhade)
    - Use UI_alert For Vulcan Fallback Warning (Harley Acheson)
    - Remove unused theme properties (Nika Kutsniashvili)
    - Warning When Dragging Non-Blend File Onto Executable (Harley Acheson)
    - “Duplicate Strips” also duplicates referenced IDs (Falk David)
    - Add copy and paste operators to preview keymap (Ramon Klauck)
    - Improve Histogram scope for HDR content (Aras Pranckevicius)
    - Add scene assets through strip add menu (Falk David)
    - Clear Strip Keyframes from Preview (Ramon Klauck)
    - Enable Pie Menu on Drag for Preview Keyframe Insert (Ramon Klauck)
    - Add “Mirror” menu to preview strip menu (Ramon Klauck)
    - Disable descriptor buffers (Jeroen Bakker)
    - Swap to system memory for device local memory (Jeroen Bakker)
    - Destroy resources in submission thread (Jeroen Bakker)
    - Update VMA to 3.3.0 (Jeroen Bakker)
    - Remove MoltenVK (Jeroen Bakker)
    - Enable maintenance4 in VMA (Jeroen Bakker)
    - Add message type for remote downloader messages to message bus (Julian Eisel)
    #blender #developers #meeting #notes #august
    Blender Developers Meeting Notes: 18 August 2025
    Blender Developers Meeting Notes: 18 August 2025 By n8n on August 19, 2025 Blender Development Notes for weekly communication of ongoing projects and modules. This is a selection of changes that happened over the last week. For a full overview including fixes, code only changes and more visit projects.blender.org. Reset various runtime data for writing files-Improve RNA performance tests flexibility.-Move VSync from an environment variable to an argument-Recognize ACES config un-tone-mapped view-Allow empty names in File Output node-Support strings sockets-Allow menu sockets for pixel nodes-Removing Sun Beams node-Don’t get/set PWD env var for working directory functions-Parallelize NURBS basis cache evaluation with Ocomplexity-Add cyclic curve offsets cache-Do not sample direct light when ray segment is invalid-Always add world as object-Create one box for vdb mesh instead of many-Render volume by ray marching through octrees-Compute volume transmittance using telescoping-Shade volume with null scattering-Volume Scattering Probability Guiding-Use RGBE for denoised guiding buffers to reduce memory usage-Use analytic formula for homogeneous volume-Add and update volume test files-Store octree parent nodes in a stack-oneAPI: Disable L0 copy optimization for several dGPUs-Use deterministic linear interpolation for velocity-Use one-tap stochastic interpolation for volume-Add material name collision mode-Shader: Add support for full template specialization-Rewrite default_argument_mutation using parser-Fix parser not being consistent-Replace template macro implementation by copy paste-Preprocess: Improve error reporting-Remove Shader Draw Parameter workaround-Add flag for shader debug info generation-Improve cyclical end cap rendering-Export other curve types to SVG-Edit Mode Pen Tool-Support extracting Vulkan & OpenGL args even when disabled-Add Apply Transforms option to obj exporter-Fix recursive resync incorrectly clearing hierarchy info.-Prevent matching collection items only by their index if a name and ID are provided.-Avoid quadratic vertex valence complexity for corner normals-Improve performance and compression, always compress-Allow Preferences editor to be opened in Maximized Area-Gray out or hide asset shelf toggle if not available-Center-align header modal status text-Prevent automatic mode switching in certain scenarios-Remove liboverride UI dead code, improve UI messages.-Prevent ‘liboverride’ ‘decorator’ button to control keyframes.-Widen Preferences Window-Theme: Move curve handle properties in common-Generalized Alert and Popup Block Error Indication-Tree View: Operator to delete with X key-Use UI_alert For Vulcan Fallback Warning-Remove unused theme properties-Warning When Dragging Non-Blend File Onto Executable-“Duplicate Strips” also duplicates referenced IDs-Add copy and paste operators to preview keymap-Improve Histogram scope for HDR content-Add scene assets through strip add menu-Clear Strip Keyframes from Preview-Enable Pie Menu on Drag for Preview Keyframe Insert-Add “Mirror” menu to preview strip menu-Disable descriptor buffers-Swap to system memory for device local memory-Destroy resources in submission thread-Update VMA to 3.3.0-Remove MoltenVK-Enable maintenance4 in VMA-Add message type for remote downloader messages to message bus- #blender #developers #meeting #notes #august
    Blender Developers Meeting Notes: 18 August 2025
    www.blendernation.com
    By n8n on August 19, 2025. Notes for weekly communication of ongoing projects and modules. This is a selection of changes that happened over the last week. For a full overview, including fixes, code-only changes and more, visit projects.blender.org.
    - Reset various runtime data for writing files (commit) - (Hans Goudey)
    - Improve RNA performance tests flexibility (commit) - (Bastien Montagne)
    - Move VSync from an environment variable to an argument (commit) - (Campbell Barton)
    - Recognize ACES config un-tone-mapped view (commit) - (Brecht Van Lommel)
    - Allow empty names in File Output node (commit) - (Omar Emara)
    - Support string sockets (commit) - (Omar Emara)
    - Allow menu sockets for pixel nodes (commit) - (Omar Emara)
    - Removing Sun Beams node (commit) - (Mohamed Hassan)
    - Don’t get/set PWD env var for working directory functions (commit) - (Jesse Yurkovich)
    - Parallelize NURBS basis cache evaluation with O(n) complexity (commit) - (Mattias Fredriksson)
    - Add cyclic curve offsets cache (commit) - (Hans Goudey)
    - Do not sample direct light when ray segment is invalid (commit) - (Weizhen Huang)
    - Always add world as object (commit) - (Weizhen Huang)
    - Create one box for vdb mesh instead of many (commit) - (Weizhen Huang)
    - Render volume by ray marching through octrees (commit) - (Weizhen Huang)
    - Compute volume transmittance using telescoping (commit) - (Weizhen Huang)
    - Shade volume with null scattering (commit) - (Weizhen Huang)
    - Volume Scattering Probability Guiding (commit) - (Weizhen Huang)
    - Use RGBE for denoised guiding buffers to reduce memory usage (commit) - (Weizhen Huang) (see the RGBE packing sketch after this list)
    - Use analytic formula for homogeneous volume (commit) - (Weizhen Huang) (see the Beer-Lambert sketch after this list)
    - Add and update volume test files (commit) - (Weizhen Huang)
    - Store octree parent nodes in a stack (commit) - (Weizhen Huang)
    - oneAPI: Disable L0 copy optimization for several dGPUs (commit) - (Nikita Sirgienko)
    - Use deterministic linear interpolation for velocity (commit) - (Weizhen Huang)
    - Use one-tap stochastic interpolation for volume (commit) - (Weizhen Huang) (see the stochastic lookup sketch after this list)
    - Add material name collision mode (commit) - (Oxicid)
    - Shader: Add support for full template specialization (commit) - (Clément Foucault)
    - Rewrite default_argument_mutation using parser (commit) - (Clément Foucault)
    - Fix parser not being consistent (commit) - (Clément Foucault)
    - Replace template macro implementation by copy paste (commit) - (Clément Foucault)
    - Preprocess: Improve error reporting (commit) - (Clément Foucault)
    - Remove Shader Draw Parameter workaround (commit) - (Clément Foucault)
    - Add flag for shader debug info generation (commit) - (Christoph Neuhauser)
    - Improve cyclical end cap rendering (commit) - (Casey Bianco-Davis)
    - Export other curve types to SVG (commit) - (Casey Bianco-Davis)
    - Edit Mode Pen Tool (commit) - (Casey Bianco-Davis)
    - Support extracting Vulkan & OpenGL args even when disabled (commit) - (Campbell Barton)
    - Add Apply Transforms option to obj exporter (commit) - (Thomas Hope)
    - Fix recursive resync incorrectly clearing hierarchy info (commit) - (Bastien Montagne)
    - Prevent matching collection items only by their index if a name and ID are provided (commit) - (Bastien Montagne)
    - Avoid quadratic vertex valence complexity for corner normals (commit) - (Илья)
    - Improve performance and compression, always compress (commit) - (Aras Pranckevicius)
    - Allow Preferences editor to be opened in Maximized Area (commit) - (Jonas Holzman)
    - Gray out or hide asset shelf toggle if not available (commit) - (Julian Eisel)
    - Center-align header modal status text (commit) - (Pablo Vazquez)
    - Prevent automatic mode switching in certain scenarios (commit) - (Sean Kim)
    - Remove liboverride UI dead code, improve UI messages (commit) - (Bastien Montagne)
    - Prevent ‘liboverride’ ‘decorator’ button to control keyframes (commit) - (Bastien Montagne)
    - Widen Preferences Window (commit) - (Pablo Vazquez)
    - Theme: Move curve handle properties in common (commit) - (Nika Kutsniashvili)
    - Generalized Alert and Popup Block Error Indication (commit) - (Harley Acheson)
    - Tree View: Operator to delete with X key (commit) - (Pratik Borhade)
    - Use UI_alert For Vulkan Fallback Warning (commit) - (Harley Acheson)
    - Remove unused theme properties (commit) - (Nika Kutsniashvili)
    - Warning When Dragging Non-Blend File Onto Executable (commit) - (Harley Acheson)
    - “Duplicate Strips” also duplicates referenced IDs (commit) - (Falk David)
    - Add copy and paste operators to preview keymap (commit) - (Ramon Klauck)
    - Improve Histogram scope for HDR content (commit) - (Aras Pranckevicius)
    - Add scene assets through strip add menu (commit) - (Falk David)
    - Clear Strip Keyframes from Preview (commit) - (Ramon Klauck)
    - Enable Pie Menu on Drag for Preview Keyframe Insert (commit) - (Ramon Klauck)
    - Add “Mirror” menu to preview strip menu (commit) - (Ramon Klauck)
    - Disable descriptor buffers (commit) - (Jeroen Bakker)
    - Swap to system memory for device local memory (commit) - (Jeroen Bakker)
    - Destroy resources in submission thread (commit) - (Jeroen Bakker)
    - Update VMA to 3.3.0 (commit) - (Jeroen Bakker)
    - Remove MoltenVK (commit) - (Jeroen Bakker)
    - Enable maintenance4 in VMA (commit) - (Jeroen Bakker)
    - Add message type for remote downloader messages to message bus (commit) - (Julian Eisel)
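    For context on the RGBE item above: RGBE is the classic shared-exponent pixel format from Radiance, storing an HDR RGB triple in 4 bytes (three 8-bit mantissas plus one shared 8-bit exponent) instead of 12 or 16 bytes of floating point, which is where the memory saving comes from. The sketch below is a generic Python pack/unpack under that assumption; the function names are illustrative and are not Blender's API.

    import math

    def float_to_rgbe(r, g, b):
        """Pack an HDR RGB triple into 4 bytes with a shared exponent (RGBE)."""
        m = max(r, g, b)
        if m < 1e-32:
            return (0, 0, 0, 0)
        mantissa, exponent = math.frexp(m)   # m = mantissa * 2**exponent, mantissa in [0.5, 1)
        scale = mantissa * 256.0 / m
        return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

    def rgbe_to_float(re, ge, be, e):
        """Unpack 4 RGBE bytes back to an approximate float RGB triple."""
        if e == 0:
            return (0.0, 0.0, 0.0)
        f = math.ldexp(1.0, e - (128 + 8))   # 2**(e - 136)
        return (re * f, ge * f, be * f)

    # Round trip: channels are recovered to within the quantization step of the brightest channel.
    packed = float_to_rgbe(12.5, 0.3, 1.7)
    print(packed, rgbe_to_float(*packed))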
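    “Use analytic formula for homogeneous volume” most likely refers to the fact that a constant-density medium follows the Beer-Lambert law, T(d) = exp(-sigma_t * d), so transmittance and free-flight distances have closed forms and need no ray marching or tracking. A minimal, generic sketch of that idea (assumed behaviour, not Blender's code):

    import math, random

    def homogeneous_transmittance(sigma_t, distance):
        """Beer-Lambert transmittance through a homogeneous medium: T = exp(-sigma_t * d)."""
        return math.exp(-sigma_t * distance)

    def sample_scatter_distance(sigma_t, u):
        """Sample a free-flight distance with pdf sigma_t * exp(-sigma_t * t),
        via the inverse CDF of the exponential distribution."""
        return -math.log(1.0 - u) / sigma_t

    sigma_t = 0.4   # extinction coefficient (absorption + scattering), per unit length
    t = sample_scatter_distance(sigma_t, random.random())
    print(t, homogeneous_transmittance(sigma_t, t))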
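    One-tap stochastic interpolation is a known filtering trick: instead of blending the 8 voxels a trilinear lookup needs, each sample reads a single voxel chosen with probability equal to its trilinear weight, so ordinary trilinear filtering is recovered in expectation at a fraction of the memory traffic. A hypothetical illustration on a plain Python grid (Blender's actual implementation will differ):

    import math, random

    def one_tap_lookup(grid, x, y, z):
        """Single-tap stochastic trilinear lookup into a dense 3D grid (nested lists).
        Each coordinate is rounded up with probability equal to its fractional part,
        so only one voxel is read per lookup."""
        nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
        idx = []
        for coord, n in ((x, nx), (y, ny), (z, nz)):
            i = math.floor(coord)
            if random.random() < coord - i:   # upper neighbour with probability = frac
                i += 1
            idx.append(min(max(i, 0), n - 1))
        return grid[idx[0]][idx[1]][idx[2]]

    # Averaging many one-tap lookups converges to the trilinearly filtered value.
    grid = [[[float(i + j + k) for k in range(4)] for j in range(4)] for i in range(4)]
    samples = [one_tap_lookup(grid, 1.25, 2.5, 0.75) for _ in range(100_000)]
    print(sum(samples) / len(samples))   # approximately 1.25 + 2.5 + 0.75 = 4.5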