• Have you ever wondered how we could live in an apartment building where the sense of community is strong?

    In my new article, we look at the "Lagoon View Residential Complex" project and how the teams at mobil arquitectos and Álvaro Arancibia Arquitecto managed to rethink vertical living. The idea here is not just a building, but the creation of a real community inside one, where the elevator is no longer just a means of getting around but part of the daily life that brings us together.

    Personally, I love human connection, and I know how hard it can be to maintain in crowded places. But when I see projects like this one, I feel there is a new opportunity for living together with others.

    Let's think about how we could apply these ideas in our daily lives, even without building a new tower.

    https://www.archdaily.com/1033496/lagoon-view-residential-complex-mobil-arquitectos-plus-alvaro-arancibia-arquitecto
    #Community #Architecture #CommunityLiving #Innovations
  • Who said programming is hard? Today I've brought you a topic that grabs you from the very first line! In the new article, we talk about "Scoped Values in Java 24".

    Scoped values let you share immutable data between methods within a thread, and even with child threads, in a safe and simple way. That is a big help compared to thread-local variables, and as they say: the more organized the code, the easier the work!

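    To make the idea concrete, here is a minimal sketch in Java. Scoped values and the structured concurrency API they pair with are preview features in Java 24 (compile and run with --enable-preview), and the names REQUEST_ID, handle, and log are mine for illustration, not from the article:

    ```java
    import java.util.concurrent.StructuredTaskScope;

    public class ScopedValueDemo {

        // The scoped value is a static final "key"; the data bound to it is immutable.
        private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

        public static void main(String[] args) {
            // Bind REQUEST_ID to "req-42" only for the duration of run(): every
            // method called inside sees the value, and the binding disappears
            // automatically when run() returns.
            ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValueDemo::handle);
        }

        static void handle() {
            log("handling the request");  // a callee reads it, no parameter passing
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                // Child threads forked inside the binding inherit it too.
                scope.fork(() -> { log("visible in the child thread"); return null; });
                scope.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        static void log(String message) {
            // Unlike a thread-local, there is no set(): the value cannot be
            // reassigned from here, which is what makes the sharing safe.
            System.out.println(REQUEST_ID.get() + ": " + message);
        }
    }
    ```

    The contrast with thread-local variables shows in log(): the binding is guaranteed to exist, cannot be mutated, and is scoped to the where(...) call instead of living mutably for the whole lifetime of the thread.
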
    From my own experience, when I started using this idea, I felt a real difference in efficiency and code clarity.

    Think about how a new approach like this could change the way you program.

    https://nipafx.dev/inside-java-newscast-86
    #Java #Programming #ScopedValues #TechTrends #Technology
  • Hey everyone, did you know that AI has become a big part of our daily lives? Still, we need to stay aware! The new article, "Beware: AI Can Sometimes Threaten Your Life," looks at how virtual medical apps are seeing unprecedented adoption across the Middle East, while doctors insist there is simply no substitute for a clinical examination.

    Personally, I have tried an online medical service and found it useful at times, but in certain situations I would not want to leave my health in the hands of AI. We have to be careful and verify the source of the information.

    Remember: technology is great, but your health is worth more!

    https://news.google.com/atom/articles/CBMirgRBVV95cUxNQ25BS1ByclF0UjMzRmFUamJuSnkwMXVaRUFIVzUwWkRZeEpuN2hwMEZjWmUzTEMtS3NjSVFZTk1hMGQyYkYzRmx2Ykp4
  • Wow, have you seen the new video with Dara Ladjevardian, CEO of Delphi? It really makes you think!

    In this video, Dara shows us how AI doesn't replace us; it reflects us. In other words, how we can use AI to help us develop our talents and abilities, without it substituting for us.

    I loved the idea that Delphi focuses on the interaction between humans and AI. In today's world, where technology evolves every day, it's important that we stay connected and keep learning from each other. After all, the more we interact with digital minds, the more we learn and grow with every conversation.

    Think about the future, and how our digital identity and our rights are going to change.

    https://www.youtube.com/watch?v=UeyzvmRr1CY

    #ArtificialIntelligence #DigitalMinds #HumanInTheLoop #Delphi #Technology
  • Romeo is a Dead Man: A sneak peek of what to expect

    What’s up, everyone? I’m gonna assume you’ve already seen the announcement trailer for Grasshopper Manufacture’s all-new title, Romeo Is A Dead Man. If not, then do yourself a favor and go watch it now. It’s cool – I’ll wait two and a half minutes.

    OK, so you get that there’s gonna be a whole lot of extremely bloody battle action and exploring some weird places, but I think a lot of people may be confused by the sheer amount of information packed into two and a half minutes… Today, we’ll give you a teensy little glimpse of how Romeo Stargazer – aka “DeadMan”, a special agent in the FBI division known as the Space-Time Police – goes about his “investigations”.

    Romeo Is A Dead Man, abbreviated as… I don’t know, RiaDM? or maybe RoDeMa, if you’re nasty? Anyway, one of the most notable features of the game is the rich variety of graphic styles used to depict the game world. Seriously, it’s all over the place – but like, in a good way. The meticulously-tweaked action parts are done in stunning, almost photorealistic 3D, and we’ve thrown everything but the kitchen sink into the more story-based parts.

    And don’t worry, GhM fans – we promise: for as much work as we’ve put into making the game look cool and unique, the story itself is also ridiculously bonkers, as is tradition here at Grasshopper Manufacture. We think longtime fans will enjoy it, and newcomers will have their heads exploding. Either way, you’re guaranteed to see some stuff you’ve never seen before.

    As for the actual battles, our hero Romeo is heavily armed with both katana-style melee weapons and gun-style ranged weapons alike, which the player can switch between while dispensing beatdowns. However, even the weaker, goombah-type enemies are pretty hardcore. You’re gonna have to think up combinations of melee, ranged, heavy, and light attacks to get by. But the stupidly gratuitous amount of blood splatter and catharsis you’re rewarded with when landing a real nuclear power move of a combo is awe-inspiring, if that’s your thing. On top of the kinda-humanoid creatures you’ve already seen, known as “Rotters”, we’ve got all kinds of other ultra-creepy, unique enemies waiting to bite your face off!

    Now, let’s look at one of the main centerpieces of any GhM game: the boss battles. This particular boss is, well, hella big. His name is “Everyday Is Like Monday”, because of course it is. It’s on you to make sure Romeo can dodge the mess of attacks launched by this big-ass tyrant and take him down to Chinatown. It’s one of the most feelgood beatdowns of the year!

    Also, being a member of something called the “Space-Time Police” means that obviously Romeo is gonna be visiting all sorts of weird, “…what?”-type places. And awaiting him at these weird, “…what?”-type places are a range of weird, “…what?”-type puzzles that only the highest double-digit IQ players will be able to solve! This thing looks like a simple sphere that someone just kinda dropped and busted, but once you really wrap your dome around it and get it solved, damn it feels good. There are a slew of other puzzles and gimmicks strategically or possibly just randomly strewn throughout the game, so keep your eyeballs peeled for them and try not to break any controllers as you encounter them along your mission.

    That’s all for now, but obviously there are still a whole bunch of important game elements we have yet to discuss, so stay tuned for next time!
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models, including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc. (ADI), e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
  • Fur Grooming Techniques For Realistic Stitch In Blender

    Introduction

    Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

    He asked me a simple question: "Well, what do you actually enjoy doing?"

    I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."

    Then he hit me with something that really shifted my whole perspective. "Oleh, do you play games on your PlayStation?" I said, "Of course." He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

    That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

    3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

    After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

    The Stitch Project

    I've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

    Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, back in 2023. And in 2025, I decided it was time to challenge myself.

    At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.

    I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow.
    And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.

    First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.

    Modeling

    I had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.

    But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

    So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

    The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

    I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

    Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:

    - I work with primary forms in ZBrush
    - Then I check proportions in Blender
    - Then I fix mistakes, tweak volumes, and refine the silhouette

    Since Stitch's shape isn't overly complex, I broke him down into a few main sculpting parts:

    - The body: arms, legs, head, and ears
    - The nose, eyes, and mouth cavity

    While planning the sculpt, I already knew I'd be rigging Stitch, both a body and a facial rig, so I started sculpting with his mouth open.

    While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

    - Different proportions
    - Different shapes
    - Different textures
    - Even different fur and overall design

    This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

    At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless.
    I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"

    But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier, because fur often follows the flow of muscle lines; having those muscles helps guide fur direction more accurately across the character's body.

    Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

    In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

    Topology & UVs

    Throughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

    So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.

    With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.

    Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

    However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features.

    Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.

    When it came to UV mapping, I divided Stitch into two UDIM tiles:

    - The first UDIM includes the head with ears, torso, arms, and legs.
    - The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose.

    Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details.

    As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.
    As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs: one for the main body and one for the additional parts.

    Texturing

    When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

    - The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front and a darker tone on the back and nape
    - The nose and ears, zones that demanded separate focus

    At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

    The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement, so I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

    - Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
    - Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
    - Organic detail: in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
    - Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

    All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I add an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

    That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.

    Fur

    And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.
    Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

    To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.

    At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main Particle System and created individual hair systems for each area where needed.

    In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis (a scripted sketch of this setup follows below).

    The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

    The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

    During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

    The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

    As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. This gave me much more control over fur shape and flow.
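
    To make that setup easier to picture, here is a minimal scripted sketch of one fur section using Blender's Python API. It is an illustration of the workflow described above, not the artist's actual file: the object name, vertex group name, and all parameter values are hypothetical.

    ```python
    import bpy

    # One of the many per-section hair systems, restricted to a
    # weight-painted region of the body ("Stitch_Body" is a placeholder).
    body = bpy.data.objects["Stitch_Body"]

    mod = body.modifiers.new(name="Fur_Chest", type='PARTICLE_SYSTEM')
    settings = mod.particle_system.settings
    settings.type = 'HAIR'

    # Blocking pass: guides with few segments; raise hair_step to 3-5
    # later for the detailing pass, as described above.
    settings.hair_step = 2        # segments per guide hair
    settings.count = 500          # number of guide hairs in this section
    settings.hair_length = 0.02   # strand length, tuned per body area

    # Interpolated children fill in the coat between the guides.
    settings.child_type = 'INTERPOLATED'
    settings.child_nbr = 50       # children per guide in the viewport
    settings.clump_factor = 0.4   # stronger on the chest, softer on the stomach
    settings.roughness_2 = 0.15   # random roughness; texture-driven in the real setup

    # The weight-painted vertex group limits where this system grows, so
    # every section (head, ears, torso...) stays independently tunable.
    mod.particle_system.vertex_group_density = "chest_density"
    ```

    Repeating this block per section, with each copy pointing at its own vertex group, mirrors the 25-system breakdown described above.
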
    The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

    Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike. (A sketch of this node setup appears at the end of this section.)

    Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

    Rigging, Posing & Scene

    Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

    - Body rig, for posing and positioning the character
    - Facial rig, for expressions and emotions
    - Ear rig, for dynamic ear control

    Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

    For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore rig by Reallusion, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

    Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

    Just like in sculpting or grooming, minor details make a big difference in posing. Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

    For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.
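
    Picking up the shader note from above: the root-to-tip gradient can be sketched with Blender's built-in shading nodes roughly as follows. The material name and color values are illustrative assumptions, not the artist's actual shader.

    ```python
    import bpy

    mat = bpy.data.materials.new("Fur_Shader_Sketch")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links
    principled = nodes["Principled BSDF"]

    # The Hair Info node outputs "Intercept": 0 at the root of a strand, 1 at the tip.
    hair_info = nodes.new("ShaderNodeHairInfo")

    # Map root -> tip to dark -> brighter, as described in the text.
    ramp = nodes.new("ShaderNodeValToRGB")
    ramp.color_ramp.elements[0].color = (0.02, 0.04, 0.12, 1.0)  # dark root (placeholder)
    ramp.color_ramp.elements[1].color = (0.10, 0.20, 0.50, 1.0)  # brighter tip (placeholder)

    links.new(hair_info.outputs["Intercept"], ramp.inputs["Fac"])
    links.new(ramp.outputs["Color"], principled.inputs["Base Color"])

    # Per-strand color variation could be layered on top by feeding the
    # Hair Info "Random" output through a Hue/Saturation node (omitted here).
    ```
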
Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.

Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.

I divided the rigging process into three main parts:
- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages: it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses; Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing. Examples include a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.
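A minimal `bpy` sketch of that three-point rig plus a dim HDRI follows; every position, energy value, and the HDRI path are placeholder assumptions rather than the actual scene values.

```python
import bpy

def add_light(name, light_type, energy, location, rotation):
    """Create a light object and link it into the active scene."""
    light = bpy.data.lights.new(name=name, type=light_type)
    light.energy = energy
    obj = bpy.data.objects.new(name=name, object_data=light)
    obj.location = location
    obj.rotation_euler = rotation
    bpy.context.scene.collection.objects.link(obj)
    return obj

# Hypothetical three-point setup; positions and energies are placeholders.
add_light("Key",  'AREA', 1000, (4, -4, 5),  (0.9, 0.0,  0.8))
add_light("Fill", 'AREA',  300, (-5, -3, 3), (1.1, 0.0, -1.0))
add_light("Rim",  'AREA',  600, (0, 6, 4),   (1.2, 0.0,  3.1))

# Low-intensity HDRI so it only enriches the ambient light.
world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links
env = nodes.new('ShaderNodeTexEnvironment')
# env.image = bpy.data.images.load("/path/to/studio.hdr")  # supply an HDRI
bg = nodes['Background']
bg.inputs['Strength'].default_value = 0.3  # the ~0.3 intensity mentioned above
links.new(env.outputs['Color'], bg.inputs['Color'])
```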
Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film. It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine

#fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
  • Shinobi: Art Of Vengeance Review - Ninja Master

You spend years waiting for a new 2D action platformer starring ninjas to come along, and then two show up within a month of each other. Both Ninja Gaiden: Ragebound and Shinobi: Art of Vengeance revitalize their respective, long-dormant franchises by successfully harkening back to their roots. There are obvious similarities between the two games, but they're also wildly different. While Ragebound is deliberately old-school, Art of Vengeance feels more modern, paying homage to the past while dragging the absent series into the current gaming landscape.

From its luscious hand-drawn art style to its deep, combo-laden action, developer Lizardcube has accomplished with Shinobi what it previously achieved with Wonder Boy and Streets of Rage. The Parisian studio knows how to resurrect Sega's past hits with remarkable aplomb, and Art of Vengeance is no different.

Equipped with a katana in one hand and a sharpened batch of kunai in the other, Art of Vengeance reintroduces legendary protagonist Joe Musashi after an extended exile. As the game's title suggests, this is a story about Joe's quest for vengeance, as the opening moments see his village burned to the ground and his ninja clan turned to stone. ENE Corp, an evil paramilitary organisation led by the antagonistic Lord Ruse and his demonic minions, is behind the attack, setting in motion a straightforward tale that sees you hunt down Lord Ruse while disrupting his various operations.

Continue Reading at GameSpot
    #shinobi #art #vengeance #review #ninja
    www.gamespot.com
    Like
    Love
    Wow
    Sad
    Angry
    566
    · 2 Comments ·0 Shares
  • (For Southeast Asia) NBA 2K26: Hands-on report and PS5 bundle details

The official start of the season may be two months away, but basketball is back with NBA 2K26 hitting PS5 and PS4 September 5. The latest entry brings a new gameplay system powered by machine learning, studying today's superstars, and fun pick-up-and-play options. 2K invited me to go hands-on with the game before launch, and I'm here to share what I learned on the court.

Also launching on September 12 (Singapore, Malaysia, Thailand) and September 18 (Indonesia, the Philippines, and Vietnam) is the PlayStation 5 Console – NBA 2K26 Bundle. Read on for full details.

    Better ball

2K26 puts considerable effort into improving both sides of the floor, with notable offensive and defensive enhancements. New machine-learning technology helps capture the fundamentals of the game. While playing, I noticed players would run and get set by firmly planting their feet instead of gliding. While driving into the paint, they would also stop and accurately respond to a defender in their lane. These details add a realistic weight to the sport.

    Enhanced Rhythm Shooting

You can still flick down-up on the right analog stick or simply press square to start your shooting motion, then release at the correct timing for the individual player's shot release. However, the tempo of play, as in real life, now affects your shot. When a good defender bogged me down, I could intentionally release my shot early for a decisive bucket. With a high basketball IQ, any shot has the potential to be a good shot.

    Defensive battles

Players can swing a game in their favor even when shots aren't falling, thanks to new improvements centered on real-world tactics. Around the player's feet, you will see new Rebound Timing Feedback: a green meter that flashes to indicate a well-timed rebound. Learning Chet Holmgren's rebound timing made me nearly unstoppable under the rim and forced me to focus on an aspect of the game I had neglected before.

Collisions and interior defense both benefit from revamped system-driven tech that allows for more real-time interactions instead of scripted mocap animations. If you want to stop a fast break or crowd the lane, players will stop, adjust, and even collide realistically. The game rewards paying attention to the action when the ball isn't in your hands.

    Arena atmosphere

    The devs also upgraded the game spectacle during downtime and timeouts with new crowd variety, interactions, and on-court performances. Cheerleader routines and mascot antics are fun, but my favorite by far was the dance cam. These moments captured the feel of attending a game live and the sense of community that attending a sporting event can create.

    MyTEAM updates 

    MyTEAM has received a significant remodel with Triple Threat Park turning Sunset Beach into a nighttime venue. Players are greeted with neon lights, fireworks, and other details that can only be appreciated after dark. Pulling cards and collecting players has also become an even bigger spectacle with dramatic reveals and added flair.  

    The biggest change to MyTEAM is that WNBA players join the action for the first time in series history. Newcomers like Angel Reese and Caitlin Clark take to the hardwood along with legends like Lisa Leslie. Attributes and Badges are identical for all players, no matter what league they hail from. Also, there is a WNBA Domination tier where your squad will be exclusively WNBA players as you challenge teams to earn Domination stars and crests. 

    Another first is 2v2 games in Triple Threat Park. Two half courts have been added in the middle of the street, where you can run your favorite two-person team-ups. The park also features four 3v3 courts, including a new option with a beach backdrop, and three 3v3 courts for 6-player co-op matches. These games capture the essence of streetball, featuring players calling their fouls, checking the ball at midcourt, and engaging in some lively trash talk—a great way to mix and match your favorite ball players and have some quick, high-energy games. 

    All-Star Team Up is now part of MyTEAM, where 10 players duke it out in 5v5 co-op matches. Take your favorite NBA or WNBA players for some very high-level play where being a good role player is the key to success. Earn individual rewards with the new Season Ladder and earn rewards as a team by winning matches. Find the right chemistry with your teammates, because for every five games you win with the same team lineup, everyone will receive rewards, even if the wins aren’t consecutive. 

Discover all the new enhancements coming to the court when NBA 2K26 launches September 5 on PS5.

    Vertical Stand sold separately

PS5 Console – NBA 2K26 Bundle

    We’re pleased to announce the PlayStation 5 Console – NBA 2K26 Bundle is launching in Singapore, Malaysia, and Thailand starting September 12, and in Indonesia, the Philippines, and Vietnam starting September 18. Release dates and availability may vary by region; please check your local retailer for availability and release dates.

    Players can feel the on-the-court immersion made possible by the DualSense wireless controller’s haptic feedback and adaptive triggers. Experience NBA 2K26’s authenticity with lifelike animations, heightened player fidelity and authentic atmosphere with 4K resolution*, and enjoy shortened load times and return to the action faster with the PS5 console’s high-speed SSD.

In Singapore, Malaysia, and Thailand, the bundle includes a PlayStation 5 console, a DualSense wireless controller, and a digital voucher** for NBA 2K26 Standard Edition. In Indonesia, the Philippines, and Vietnam, the bundle includes a PlayStation 5 console, a DualSense wireless controller, and a disc version of NBA 2K26 Standard Edition.

With a robust focus on new features and on the parts of the game that don't rely on the players, NBA 2K26 is great to play and to watch. No matter your height, you should hit the court when it comes to PS5 and PS4 on September 5.

    *4K and HDR require a 4K and HDR compatible TV or display.

**Account for PlayStation and internet connection required to redeem voucher.
    #southeast #asia #nba #2k26 #handson
    blog.playstation.com
  • Creating a Detailed Helmet Inspired by Fallout Using Substance 3D

Introduction

    Hi! My name is Pavel Vorobyev, and I'm a 19-year-old 3D Artist specializing in texturing and weapon creation for video games. I've been working in the industry for about 3 years now. During this time, I've had the opportunity to contribute to several exciting projects, including Arma, DayZ, Ratten Reich, and a NEXT-GEN sci-fi shooter (currently under NDA). Here's my ArtStation portfolio.

    My journey into 3D art began in my early teens, around the age of 13 or 14. At some point, I got tired of just playing games and started wondering: "How are they actually made?" That question led me to explore game development. I tried everything – level design, programming, game design – but it was 3D art that truly captured me.

    I'm entirely self-taught. I learned everything from YouTube, tutorials, articles, and official documentation, gathering knowledge piece by piece. Breaking into the commercial side of the industry wasn't easy: there were a lot of failures, no opportunities, and no support. At one point, I even took a job at a metallurgical plant. But I kept pushing forward, kept learning and improving my skills in 3D. Eventually, I got my first industry offer – and that's when my real path began.

    Today, I continue to grow, constantly experimenting with new styles, tools, and techniques. For me, 3D isn't just a profession – it's a form of self-expression and a path toward my dream. My goal is to build a strong career in the game industry and eventually move into cinematic storytelling in the spirit of Love, Death & Robots (see the Astartes YouTube channel).

    I also want to inspire younger artists and show how powerful texturing can be as a creative tool. To demonstrate that, I'd love to share my personal project PU – Part 1, which reflects my passion and approach to texture art.

    In this article, I'll be sharing my latest personal project – a semi-realistic sci-fi helmet that I created from scratch, experimenting with both form and style. It's a personal exploration where I aimed to step away from traditional hyperrealism and bring in a touch of artistic expression.

    Concept & Project Idea

    The idea behind this helmet project came from a very specific goal – to design a visually appealing asset with rich texture variation and achieve a balance between stylization and realism. I wanted to create something that looked believable, yet had an artistic flair. Since I couldn't find any fitting concepts online, I started building the design from scratch in my head. I eventually settled on creating a helmet as the main focus of the project. For visual direction, I drew inspiration from post-apocalyptic themes and the gritty aesthetics of Fallout and Warhammer 40,000.

    Software & Tools Used

    For this project, I used Blender, ZBrush, Substance 3D Painter, Marmoset Toolbag 5, Photoshop, and RizomUV. I created the low-poly mesh in Blender and developed the concept and high-poly sculpt in ZBrush. In Substance 3D Painter, I worked on the texture concept and final texturing. Baking and rendering were done in Marmoset Toolbag, and I used Photoshop for some adjustments to the bake. UV unwrapping was handled in RizomUV.

    Modeling & Retopology

    I began the development process by designing the concept based on my earlier references – Fallout and Warhammer 40,000. The initial blockout was done in ZBrush, and from there, I started refining the shapes and details to create something visually engaging and stylistically bold.

    After completing the high-poly model, I moved on to the long and challenging process of retopology. Since I originally came from a weapons-focused background, I applied the knowledge I gained from modeling firearms. I slightly increased the polycount to achieve a cleaner and more appealing look in the final render – reducing visible faceting. My goal was to strike a balance between visual quality and a game-ready asset.

    UV Mapping & Baking

    Next, I moved on to UV mapping. There's nothing too complex about this stage, but since my goal was to create a game-ready asset, I made extensive use of overlaps. I did the UVs in RizomUV. The most important part is to align the UV shells into clean strips and unwrap cylinders properly into straight lines.

    Once the UVs were done, I proceeded to bake the normal and ambient occlusion maps. At this stage, the key is having clean UVs and solid retopology – if those are in place, the bake goes smoothly.

    Texturing: Concept & Workflow

    Now we move on to the most challenging stage – texturing. I aimed to present the project in a hyperrealistic style with a touch of stylization. This turned out to be quite difficult, and I went through many iterations. The most important part of this phase was developing a solid texture concept: rough decals, color combinations, and overall material direction. Without that foundation, it makes no sense to move forward with the texturing. After a long process of trial and error, I finally arrived at results I was satisfied with.

    Then I followed my pipeline:
    1. Working on the base materials
    2. Storytelling and damage
    3. Decals
    4. Spraying, dust, and dirt

    Working on the Base Materials

    When working on the base materials, the main thing is to work with the physical properties and texture. You need to extract the maximum quality from the generators before manual processing. The idea was to create the feeling of an old, heavy helmet that had lived its life and had previously been painted a different color. To make it battered and, in a sense, rotten.

    It is important to pay attention to noise maps – Dirt 3, Dirt 6, White Noise, Flakes – and add the feel of old metal with custom Normal Maps. I also mixed in photo textures for a special charm.

    Phototexture

    Custom Normal Map Texture

    Storytelling & Damage

    Gradients play an important role in the storytelling stage. They make the object artistically dynamic and beautiful, adding individual shades that bring the helmet to life.

    Everything else is done manually. I found photos of a bunch of old World War II helmets and turned their damage into alphas using Photoshop. I drew the damage with those alphas, trying to clearly separate the material into old paint, new paint, rust, and bare metal.

    I did the rust using MatFX Rust from the standard Substance 3D Painter library. I drew beautiful patterns using paint in multiply mode – this quickly helped to recreate the rust effect. Metal damage and old paint were more difficult: due to the large number of overlaps in the helmet, I had to carefully draw patterns, minimizing the visibility of overlaps.

    Decals

    I drew the decals carefully, sticking to the concept, which added richness to the texture.

    Spray Paint & Dirt

    For spray paint and dirt, I used a long-established weapon template consisting of dust particles, sand particles, and spray paint. I analyzed references and applied them to crevices and logical places where dirt could accumulate.

    Rendering & Post-Processing

    I rendered in Marmoset Toolbag 5 using a new rendering approach that I developed together with the team. The essence of the method is to simulate "RAW frames." Since Marmoset does not have such functions, I worked with the EXR 32-bit format, which significantly improves the quality of the render: the shadows are smooth, without artifacts or broken gradients. I assembled the scene using Quixel Megascans. After rendering, I did post-processing in Photoshop using the Camera Raw filter.

    Conclusion & Advice for Beginners

    That's all. For beginners or those who have been unsuccessful in the industry for a long time, I advise you to follow your dream and not listen to anyone else. Success is a matter of time and skill! Talent is not something you are born with; it is something you develop. Work on yourself and your work, put your heart into it, and you will succeed!

    Pavel Vorobyev, Texture Artist
    Interview conducted by Gloria Levine
    #creating #detailed #helmet #inspired #fallout
    80.lv
• Hey everyone, how many of us would love to relax by a pond and live an eco-friendly life!

In a new article, we meet Eva Rotky, the YouTuber at the heart of today's story, who talks about what's missing from build/buy in The Sims 4. The piece looks at how The Sims 4 has embraced the shift toward sustainable living and the reclamation of industrial spaces.

Personally, I'm thrilled by the idea of creating eco-friendly homes, and it's great to see how we can pursue our dreams of building a better world, even inside a game! I'd love to apply these ideas in my daily life, whether by choosing sustainable building materials or by decorating spaces without harming the environment.

Think about how these ideas could take shape in our games and in our lives.

    https://www.vg247.com/the-sims-4-enchanted-by-nature-interview
    #Sims4 #EspritEco #EvaRotky #JeuxVideo #Durabilité
    Sims 4 YouTuber Eva Rotky talks what's desperately missing in build/buy and the eco-friendly fairytale of Enchanted by Nature
    www.vg247.com
Since the beginning of the decade, The Sims 4 has shown a consistent preoccupation with sustainable living and reclaiming industrial spaces.
• Clojure really is a delight! If you're a programming enthusiast, you should check out this article on "tap>" and how to reach private vars. The gist is that you can use tap> in a simple, effective way, and that will help you grow as a developer.

Personally, my journey with Clojure has been full of challenges, but every new discovery fuels my enthusiasm. tap> is one of the tools I liked the most, and it got me looking at my code in a whole new way.

This article is packed with useful information and will get you thinking harder about how to improve the way you program. Don't miss it! To make the idea concrete, see the sketch below.
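
    Here is a minimal sketch of both tricks the article covers, assuming Clojure 1.10+ (where tap> and add-tap were introduced); the process function and the other namespace are hypothetical examples for illustration, not code from the article:

    ```clojure
    ;; Minimal sketch: tap> for non-invasive inspection, plus reading a
    ;; private var. Assumes Clojure 1.10+; `process` and the `other`
    ;; namespace are made-up examples.

    ;; Register a handler; every value passed to tap> is delivered to it
    ;; asynchronously on a dedicated tap thread.
    (add-tap println)

    (defn process [x]
      (let [result (* x x)]
        (tap> {:input x :result result}) ; inspect without changing behavior
        result))

    (process 7) ;=> 49, and the handler prints {:input 7, :result 49}

    ;; Accessing a private var from another namespace:
    (ns other)
    (defn- secret [] :hidden)

    (in-ns 'user)
    ;; (other/secret) throws because the var is private, but the var
    ;; special form / #' reader macro resolves it anyway:
    ((var other/secret)) ;=> :hidden (vars are invokable, delegating to their value)
    (@#'other/secret)    ;=> :hidden (deref the var, then call the fn)
    ```

    The appeal of tap> over println-debugging is that the instrumented code stays in place while handlers can be attached and detached at runtime with add-tap and remove-tap.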

    https://andersmurphy.com/2019/06/01/clojure-intro-to-tap-and-accessing-private-vars.html

    #Clojure #Programming #SoftwareDevelopment #TapFunction #PrivateVars