• Folks, I was just thinking back to my school days and how cut off we felt whenever we were away from our phones. Today, South Korea has decided to ban smartphones in elementary and middle school classrooms, with the law taking effect in 2026. That means phones are out of the classroom entirely, except for emergencies or educational use.

    According to a survey cited in the article, 43% of children and teenagers report difficulty controlling the time they spend on their phones, and it shows in their daily lives. Personally, I know how badly phones can disrupt focus, especially while studying.

    Although some people oppose the decision, arguing that it violates students' rights, this experiment might just get kids back to their books and interacting more with each other.

    Honestly, every one of us should think about how phones affect our daily lives.

    https://www.engadget.com/big-tech/south-korea-bans-smartphones-in-all-middle-and-elementary-school-classrooms-153742244.html?src=rss
  • Bring Your MetaHumans to Life Using Houdini with the Latest UE5 Update

    Epic Games has announced exciting updates to its Unreal Engine's MetaHuman Creator. The latest release integrates it with SideFX Houdini, allowing you to combine the power of both toolsets and bring your MetaHuman characters to life using Houdini's fascinating effects.

    With the latest MetaHuman Character Rig HDA update and expanded grooming tools, you can easily bring your MetaHumans to Houdini and use the entire arsenal of its stunning procedural tools, adding complex animation and effects. Creators can import and assemble the head, body, and textures of MetaHuman characters created in Unreal Engine using MetaHuman Creator.

    There's also an update to Houdini's existing groom tools. You can now craft hairstyles that are compatible with MetaHuman Creator directly on your MetaHuman character, removing the need to switch back and forth with Unreal Engine. Please note that the MetaHuman Character Rig HDA requires Houdini 21.0 or later.

    Yesterday, we shared August's free learning content from Epic Games, which includes tutorials on animating MetaHumans, creating Blueprint-controlled particle effects, and uncovering the ways Epic Online Services can be used in your projects. If you want to learn more about MetaHumans, check out Marlon R. Nunez's experiment testing his Live Link from an iPhone in Unreal Engine.

    Learn more about the MetaHuman Character Rig HDA update here.
  • Developer Rec Room lays off 'about half' its staff

    Diego Argüello, Contributing Editor, News, GameDeveloper.com. August 26, 2025.

    Developer Rec Room, the team behind the namesake user-generated content (UGC) driven social game, has laid off "about half" its staff.

    Announced yesterday via the official site, CEO and co-founder Nick Fajt wrote that both he and CCO and co-founder Cameron Brown made the decision, which they called a "business necessity based on the financial trajectory of the company" that doesn't reflect on the individuals affected.

    "This is not a reflection on the talent or dedication of those departing—we wish we could keep every one of them," reads the announcement. "I'm gonna say that again, to make it clear this isn't just 'one of those things you say in a layoff message'. We TRULY wish we could keep every one of these people on the team. But we can't. This is a reflection of the tough reality we face as a business and the change needed to give Rec Room a chance to thrive in the years ahead."

    According to the post, the laid-off workers will continue to be paid for the next three months, receive health benefits for the next six months, and have the option to keep their laptop or desktop computer. Rec Room didn't specify how many people were affected.

    'The writing on the wall became very clear'

    Back in December 2021, Rec Room raised $145 million for its social platform, bringing the company's lifetime raised funds to around $294 million. According to Brown, the team invested "heavily in creation tools across PC, VR, consoles, and mobile," but the reality "has been harsh." He claims the mobile and console versions never got to the point where "those devices were good for building stuff." Some of the efforts to bridge the gap, including the Maker AI tool, frustrated the studio's "more impactful creators."

    At the same time, the lower-powered devices still fostered "millions of pieces of content," which reportedly put a strain on the team, which had to come up with procedures to review it all. "Making all this run across every device was a massive technical challenge and burden. While our most skilled creators optimized their content cleverly, most creators didn't—couldn't, really, because we didn't provide them with the necessary tooling."

    Last month, Fajt announced that Rec Room hit a "record-breaking month" for UGC sales thanks to the creations from players, with creator token earnings from room and Watch store sales increasing 47 percent year-over-year.

    "We deliberately started with a small group of creators as the Avatar Studio tool is still in the early stages," Fajt wrote at the time. "All of the early joiners helped us iron out the workflow and onboarding, providing feedback on how to improve our systems and processes. With creators already finding success, we're ready to expand."

    Today's announcement continues by saying that supporting the aforementioned scope stretched the team thin and began to "dig a financial hole that was getting larger every day." The studio has been stuck in an "uncomfortable middle ground" during the past few years, wondering whether to keep pushing the internal UGC vision while potentially increasing the frustration of players and the team, or scale back the vision by cutting the team in half.

    "Both paths were painful," Brown wrote. "But ultimately we got to a point where it was clear that staying the course meant low growth, a high burn rate, and no clear path forward. In a word: Unsustainable. The writing on the wall became very clear."

    Looking forward, Brown says the team will focus on "empowering our very best creators" and "ensuring Rec Room is a great experience for our players."

    "For those leaving—you will always be part of the Rec Room story," Brown wrote as a closing note about the layoffs. "We thank you for everything, and wish you the best for your next chapter. For those staying—we know this sucks. We know this hurts. Thank you for pushing forward with us—we have hard work ahead, but with a new focus we believe strongly in the future we can build together."

    Game Developer has reached out to Rec Room for clarification on the number of workers affected.
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
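    As a concrete (and hedged) illustration of what "run inference locally" can look like on a device like this, here is a minimal sketch using the Hugging Face transformers library with a small Qwen checkpoint. It assumes a CUDA-enabled PyTorch build (as shipped with JetPack) and a locally available model; the model name and prompt are illustrative, not part of NVIDIA's announcement.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Illustrative small model; any locally cached checkpoint works.
        MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"

        device = "cuda" if torch.cuda.is_available() else "cpu"
        tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_ID, torch_dtype=torch.float16
        ).to(device)

        prompt = "List three hazards a warehouse robot should watch for."
        inputs = tokenizer(prompt, return_tensors="pt").to(device)
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(out[0], skip_special_tokens=True))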
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team's existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they'll improve the performance of AI models, including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc. (ADI), e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
  • Fur Grooming Techniques For Realistic Stitch In Blender

    Introduction

    Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

    He asked me a simple question: "Well, what do you actually enjoy doing?"

    I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

    Then he hit me with something that really shifted my whole perspective. "Oleh, do you play games on your PlayStation?"

    I said, "Of course."

    He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

    That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM learning Blender basics, then slept a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

    3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

    After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights, but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

    The Stitch Project

    I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

    Back then, my skills only allowed me to make him in a stylized, cartoonish style: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023. And in 2025, I decided it was time to challenge myself.

    At that point, I had just completed an intense grooming course. Grooming had always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it. I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow.
    After finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. My goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch. First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.

    Modeling

    I had a few ideas for how to approach the base mesh for this project: model everything completely from scratch, starting with a sphere, or reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

    So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

    The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

    I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

    Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush: I work with primary forms in ZBrush, then check proportions in Blender, fix mistakes, tweak volumes, and refine the silhouette.

    Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts: the body (arms, legs, head, and ears); the nose; and the eyes and mouth cavity. While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig, so I started sculpting with his mouth open.

    While studying various references, I noticed something interesting: Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models, with different proportions, different shapes, different textures, and even different fur and overall design. This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on the hands and feet.

    At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier, because fur often follows the flow of muscle lines; having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

    In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.
    Topology & UVs

    Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time I knew it would take too much time, and honestly, I didn't have that luxury. So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

    However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail.
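    This detach-and-attach step can also be expressed in Blender's Python API. The sketch below is only an illustration of the idea, not the exact steps used on Stitch (the swap was done by hand in the viewport), and every object and vertex-group name in it is hypothetical.

        import bpy

        # Rough sketch of the ear swap described above. All names are
        # hypothetical: two symmetrized copies of the body are assumed,
        # one with the correct left ear and one with the correct right
        # ear, plus a pre-painted "left_ear" vertex group on the donor.
        donor = bpy.data.objects["Stitch_left_ear_version"]
        target = bpy.data.objects["Stitch_right_ear_version"]

        existing = set(bpy.data.objects)

        # Split the donor's left ear off into its own object.
        bpy.context.view_layer.objects.active = donor
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='DESELECT')
        donor.vertex_groups.active = donor.vertex_groups["left_ear"]
        bpy.ops.object.vertex_group_select()
        bpy.ops.mesh.separate(type='SELECTED')
        bpy.ops.object.mode_set(mode='OBJECT')

        # The separated ear is the only newly created object.
        ear = next(o for o in bpy.data.objects if o not in existing)

        # Join the ear onto the target body (whose own left ear is
        # assumed to have been removed the same way beforehand).
        bpy.ops.object.select_all(action='DESELECT')
        ear.select_set(True)
        target.select_set(True)
        bpy.context.view_layer.objects.active = target
        bpy.ops.object.join()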
    Thanks to the clean Polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands. When it came to UV mapping, I divided Stitch into two UDIM tiles:

    1. The first UDIM includes the head with ears, torso, arms, and legs.
    2. The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose.

    Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. For this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

    As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and a body split across two UDIMs, one for the main body and one for the additional parts.

    Texturing

    When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, some areas required much more attention than the rest of the body. The textures can be roughly divided into several main parts: the base body, which includes the primary color of his fur along with additional shading (a lighter tone on the front and a darker tone on the back and nape), and the nose and ears, zones that demanded separate focus.

    At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted, so I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

    The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

    1. Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
    2. Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
    3. Organic detail: in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
    4. Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

    All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

    That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids, and left only the base ones corresponding to the body's color tones. During grooming, I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.

    Fur

    And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

    Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

    To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands. At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts, duplicating the main particle system and creating individual hair systems for each area where needed.

    In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.
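    To make that setup concrete, here is a minimal Blender Python sketch of one such per-section hair system, with density bound to a weight-painted vertex group. The object and group names are hypothetical, and the numeric values are placeholders rather than the settings actually used on Stitch.

        import bpy

        # Hypothetical object name.
        body = bpy.data.objects["StitchBody"]

        # One hair system per body section, e.g. the head.
        mod = body.modifiers.new(name="Fur_Head", type='PARTICLE_SYSTEM')
        settings = mod.particle_system.settings
        settings.type = 'HAIR'
        settings.count = 2000        # guide hairs; children fill in density
        settings.hair_length = 0.03  # placeholder strand length
        settings.hair_step = 2       # two guide segments for the blocking pass

        # Interpolated children produce the visible fur density.
        settings.child_type = 'INTERPOLATED'
        settings.child_nbr = 20             # children per guide in viewport
        settings.rendered_child_count = 80  # children per guide at render time
        settings.clump_factor = 0.5         # base clumping, refined per region
        settings.roughness_2 = 0.05         # loose stray-strand roughness

        # Restrict growth to this section's weight-painted vertex group.
        mod.particle_system.vertex_group_density = "fur_head"

    In the final groom, each of the 25 sections had its own copy of a system like this, with its own vertex group and individually tuned values.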
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts. I duplicated the main Particle System and created individual hair systems for each area where needed.In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach.Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes.I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.As part of the detailing stage, I also increased the number of segments in the Hair Guides.While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. 
This gave me much more control over fur shape and flow.The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done.I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result, this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.Rigging, Posing & SceneOnce I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.I divided the rigging process into three main parts:Body rig, for posing and positioning the characterFacial rig, for expressions and emotionsEar rig, for dynamic ear controlRigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine
    #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    IntroductionHi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.He asked me a simple question: "Well, what do you actually enjoy doing?"I said, "Video games. I love video games. But I don't have time to learn how to make them, I've got a job, a family, and a kid."Then he hit me with something that really shifted my whole perspective."Oleh, do you play games on your PlayStation?"I said, "Of course."He replied, "Then why not take the time you spend playing and use it to learn how to make games?"That moment flipped a switch in my mind. I realized that I did have time, it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics. Then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.3D completely took over my life. During lunch breaks, I watched 3D videos, on the bus, I scrolled through 3D TikToks, at home, I took 3D courses, and the word "3D" just became a constant in my vocabulary.After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And thatэs how my journey began, from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.The Stitch ProjectI've loved Stitch since I was a kid. I used to watch the cartoons, play the video games, and he always felt like such a warm, funny, chill, and at the same time, strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.Back then, my skills only allowed me to make him in a stylized cartoonish style, no fur, no complex detailing, no advanced texturing, I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute. Though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch, it was back in 2023. And in 2025, I decided it was time to challenge myself.At that point, I had just completed an intense grooming course. Grooming always intimidated me, it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it.I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works, grasped the logic, the tools, and the workflow. 
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch.So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch.First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.ModelingI had a few ideas for how to approach the base mesh for this project. First, to model everything completely from scratch, starting with a sphere. Second, to reuse my old Stitch model and upgrade it.But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Since over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, it was important for me to make a more detailed model, even if much of it would be hidden under fur.The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools. So this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace. I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool. I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:I work with primary forms in ZBrushThen check proportions in BlenderFix mistakes, tweak volumes, and refine the silhouetteSince Stitch's shape isn't overly complex, I broke him down into three main sculpting parts:The body: arms, legs, head, and earsThe nose, eyes, and mouth cavityWhile planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig. So I started sculpting with his mouth open.While studying various references, I noticed something interesting. Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:Different proportionsDifferent shapesDifferent texturesEven different fur and overall designThis presented a creative challenge, I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version, in another, the eye placement, in another, the fur shape, or the claw design on hands and feet.At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless. 
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?"But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier. That's because fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body.Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.Topology & UVsThroughout the sculpting process, I spent quite a bit of time thinking about topology. I was looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers.With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping.Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed. However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical. The right ear has a scar on the top, while the left has a scar on the bottomBecause of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.When it came to UV mapping, I divided Stitch into two UDIM tiles:The first UDIM includes the head with ears, torso, arms, and legs.The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and noseSince the nose is one of the most important details, I allocated the largest space to it, which helped me to better capture its intricate details.As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. To achieve this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters.This approach gave me high-quality eyes with customizable elements tailored exactly to my needs. 
As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs, one for the main body and one for the additional parts.TexturingWhen planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail enhanced by the fur. However, there were some areas that required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the frontand a darker tone on the back and napeThe nose and ears, these zones, demanded separate focusAt the initial texturing/blocking stage, the ears looked too cartoony, which didn’t fit the style I wanted. So, I decided to push them towards a more realistic look. This involved removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base.For a long time, I thought this version was acceptable. But during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:Base detail: Baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.Lighter layer: Applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.Organic detail: In animal references, I noticed slight redness in the nose area. I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.Softness: To make the nose visually softer, like in references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I add an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.That covers the texturing of Stitch’s body. I also created a separate texture for the fur. This was simpler, I disabled unnecessary layers like ears and eyelids, and left only the base ones corresponding to the body’s color tones.During grooming, I also created textures for the fur's clamps and roughness. In Substance 3D Painter, I additionally painted masks for better fur detail.FurAnd finally, I moved on to the part that was most important to me, the very reason I started this project in the first place. Fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far. 
Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So, the first step was blocking out the main flow and placement of the hair strands.At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical, the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts. I duplicated the main Particle System and created individual hair systems for each area where needed.In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems.To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach.Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility, textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes.I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.As part of the detailing stage, I also increased the number of segments in the Hair Guides.While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions. 
This gave me much more control over fur shape and flow.The tiniest details really matter, so I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done.I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result, this stage confirmed that the training I've gone through was solid and that I’m heading in the right direction with my artistic development.Rigging, Posing & SceneOnce I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging.I divided the rigging process into three main parts:Body rig, for posing and positioning the characterFacial rig, for expressions and emotionsEar rig, for dynamic ear controlRigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics. This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.Posing is one of my favorite stages, it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses, Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.Just like in sculpting or grooming, minor details make a big difference in posing. Examples include: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, and ears set at slightly different angles.These are subtle things that might not be noticed immediately, but they’re the key to making the character feel alive and believable.For each pose, I created a separate scene and collection in Blender, including the character, specific lighting setup, and a simple background or environment. 
This made it easy to return to any scene later, to adjust lighting, reposition the character, or tweak the background.In one of the renders, which I used as the cover image, Stitch is holding a little frog.I want to clearly note that the 3D model of the frog is not mine, full credit goes to the original author of the asset.At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.Rendering, Lighting & Post-ProcessingWhen the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene — it’s a full-fledged stage of the 3D pipeline. It doesn't just illuminate; it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light.While this setup is well-known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn’t able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.I don't spend too much time on post-processing, just basic refinements in Photoshop. Slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what’s already there.Final ThoughtsThis project has been an incredible experience. Although it was my second time creating Stitch, this time the process felt completely different at every stage. And honestly, it wasn't easy.But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It’s what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off, the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new filmIt's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.Oleh Yakushev, 3D Character ArtistInterview conducted by Gloria Levine #fur #grooming #techniques #realistic #stitch
    Fur Grooming Techniques For Realistic Stitch In Blender
    80.lv
Introduction

Hi everyone! My name is Oleh Yakushev, and I'm a 3D Artist from Ukraine. My journey into 3D began just three years ago, when I was working as a mobile phone salesperson at a shopping mall. In 2022, during one slow day at work, I noticed a colleague learning Python. We started talking about life goals. I told him I wanted to switch careers, to do something creative, but programming wasn't really my thing.

He asked me a simple question: "Well, what do you actually enjoy doing?"

I said, "Video games. I love video games. But I don't have time to learn how to make them. I've got a job, a family, and a kid."

Then he hit me with something that really shifted my whole perspective. "Oleh, do you play games on your PlayStation?" I said, "Of course." He replied, "Then why not take the time you spend playing and use it to learn how to make games?"

That moment flipped a switch in my mind. I realized that I did have time; it was just a matter of how I used it. If I really wanted to learn, I could find a way. At the time, I didn't even own a computer. But where there's a will, there's a way: I borrowed my sister's laptop for a month and started following beginner 3D tutorials on YouTube. Every night after work, once my family went to sleep, I'd sit in the kitchen and study. I stayed up until 2 or 3 AM, learning Blender basics, then I'd sleep for a few hours before waking up at 6 AM to go back to work. That's how I spent my first few months in 3D, studying every single night.

3D completely took over my life. During lunch breaks, I watched 3D videos; on the bus, I scrolled through 3D TikToks; at home, I took 3D courses. The word "3D" just became a constant in my vocabulary.

After a few months of learning the basics, I started building my portfolio, which looks pretty funny to me now. But at the time, it was a real sign of how committed I was. Eventually, someone reached out to me through Behance, offering my first freelance opportunity. And that's how my journey began: from mall clerk to 3D artist. It's been a tough road, full of burnout, doubts, and late nights... but also full of curiosity, growth, and hope. And I wouldn't trade it for anything.

The Stitch Project

I've loved Stitch since I was a kid. I used to watch the cartoons and play the video games, and he always felt like such a warm, funny, chill, and at the same time strong character. So once I reached a certain level in 3D, I decided to recreate Stitch.

Back then, my skills only allowed me to make him in a stylized, cartoonish way: no fur, no complex detailing, no advanced texturing. I just didn't have the experience. Surprisingly, the result turned out pretty decent. Even now, I sometimes get comments that my old Stitch still looks quite cute, though honestly, I wouldn't say that myself anymore. Two years have passed since I made that first Stitch back in 2023, and in 2025, I decided it was time to challenge myself.

At that point, I had just completed an intense grooming course. Grooming always intimidated me; it felt really complex. I avoided it on commercial projects, made a few failed attempts for my portfolio, and overall tried to steer clear of any tasks where grooming was required. But eventually, I found the strength to face it. I pushed myself to learn how to make great fur, and I did. I finally understood how the grooming system works and grasped the logic, the tools, and the workflow.
And after finishing the course, I wanted to lock in all that knowledge by creating a full personal project from scratch. So my goal was to make a character from the ground up, where the final stage would be grooming. And without thinking too long, I chose Stitch. First, because I truly love the character. Second, I wanted to clearly see my own progress over the past two years. Third, I needed to put my new skills to the test and find out whether my training had really paid off.

Modeling

I had a few ideas for how to approach the base mesh for this project: model everything completely from scratch, starting with a sphere, or reuse my old Stitch model and upgrade it. But then an idea struck me: why not test how well AI could handle a base mesh? I gathered some references and tried generating a base mesh using AI, uploading Stitch visuals as a guide. As you can see from the screenshot, the result was far from usable. So I basically ended up doing everything from scratch anyway.

So, I went back to basics: digging through ArtStation and Pinterest, collecting references. Over the last two years, I had not only learned grooming but also completely changed my overall approach to character creation, so it was important for me to make a more detailed model, even if much of it would be hidden under fur.

The first Stitch was sculpted in Blender, with all the limitations that come with sculpting in it. But since then, I've leveled up significantly and switched to more advanced tools, so this second version of Stitch was born in ZBrush. By the time I started working on this Stitch, ZBrush had already become my second main workspace: I've used it to deliver tons of commercial projects, I work in it almost daily, and most of my portfolio was created using this tool.

I found some great reference images showing Stitch's body structure. Among them were official movie references and a stunning high-poly model created by Juan Hernández, a version of Stitch without fur. That model became my primary reference for sculpting.

Truth is, Stitch's base form is quite simple, so blocking out the shape didn't take too long. When blocking, I use Blender in combination with ZBrush:

- I work with primary forms in ZBrush
- Then check proportions in Blender
- Fix mistakes, tweak volumes, and refine the silhouette

Since Stitch's shape isn't overly complex, I broke him down into three main sculpting parts: the body (arms, legs, head, and ears), the nose, and the eyes and mouth cavity. While planning the sculpt, I already knew I'd be rigging Stitch, both body and facial rig, so I started sculpting with his mouth open (to later close it and have more flexibility when it comes to rigging and deformation).

While studying various references, I noticed something interesting: Stitch from promotional posters, Stitch from the movie, and Stitch as recreated by different artists on ArtStation all look very different from one another. What surprised me the most was how different the promo version of Stitch is compared to the one in the actual movie. They are essentially two separate models:

- Different proportions
- Different shapes
- Different textures
- Even different fur and overall design

This presented a creative challenge: I had to develop my own take on Stitch's design. Sometimes I liked the way the teeth were done in one version; in another, the eye placement; in another, the fur shape or the claw design on hands and feet.

At first, considering that Stitch is completely covered in fur from head to toe, sculpting his underlying anatomy seemed pointless.
I kept asking myself: "Why sculpt muscles and skin detail if everything will be hidden under fur anyway?" But eventually, I found a few solid answers for myself. First, having a defined muscle structure actually makes the fur grooming process easier: fur often follows the flow of muscle lines, so having those muscles helps guide fur direction more accurately across the character's body. Second, it's great anatomy practice, and practice is never a waste. So, I found a solid anatomical reference of Stitch with clearly visible muscle groups and tried to recreate that structure as closely as possible in my own sculpt.

In the end, I had to develop a full visual concept by combining elements from multiple versions of Stitch. Through careful reference work and constantly switching between Blender and ZBrush, I gradually, but intentionally, built up the body and overall look of our favorite fluffy alien.

Topology & UVs

Throughout the sculpting process, I spent quite a bit of time thinking about topology, looking for the most balanced solution between quality and production time. Normally, I do manual retopology for my characters, but this time, I knew it would take too much time, and honestly, I didn't have that luxury.

So I decided to generate the topology using ZBrush's tools. I split the model into separate parts using Polygroups, assigning individual groups for the ears, the head, the torso, the arms, the legs, and each of Stitch's fingers. With the Polygroups in place, I used ZRemesher with Keep Groups enabled and smoothing on group borders. This gave me a clean and optimized mesh that was perfect for UV unwrapping. Of course, this kind of auto-retopology isn't a full substitute for manual work, but it saved me a huge amount of time, and the quality was still high enough for what I needed.

However, there was one tricky issue. Although Stitch looks symmetrical at first glance, his ears are actually asymmetrical: the right ear has a scar on the top, while the left has a scar on the bottom. Because of that, I couldn't just mirror one side in ZBrush without losing those unique features. Here's what I ended up doing: I created a symmetrical model with the right ear, then another symmetrical model with the left ear. I brought both into Blender, detached the left ear from one model, and attached it to the body of the other one. This way, I got a clean, symmetrical base mesh with asymmetrical ears, preserving both topology and detail. And thanks to the clean polygroup-based layout, I was able to unwrap the UVs with nice, even seams and clean islands.
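For readers who want to reproduce the ear-swap assembly described above, here is a minimal Blender Python sketch of the same idea: detach one ear from a donor mesh by vertex group and join it onto the target body. The artist did this by hand in the viewport; the script, along with its object and vertex-group names, is purely a hypothetical illustration.

```python
import bpy

# Hypothetical names: two symmetrical meshes, one built around each ear.
donor = bpy.data.objects["stitch_sym_left_ear"]
target = bpy.data.objects["stitch_sym_right_ear"]

# Detach the left ear from the donor using its vertex group.
bpy.context.view_layer.objects.active = donor
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
donor.vertex_groups.active_index = donor.vertex_groups["ear_L"].index
bpy.ops.object.vertex_group_select()    # select the ear vertices
bpy.ops.mesh.separate(type='SELECTED')  # split them into a new object
bpy.ops.object.mode_set(mode='OBJECT')

# The separated ear is the selected object that isn't the donor.
ear = next(o for o in bpy.context.selected_objects if o != donor)

# Join the detached ear onto the target body.
bpy.ops.object.select_all(action='DESELECT')
ear.select_set(True)
target.select_set(True)
bpy.context.view_layer.objects.active = target
bpy.ops.object.join()
```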
When it came to UV mapping, I divided Stitch into two UDIM tiles:

- The first UDIM includes the head with ears, torso, arms, and legs.
- The second UDIM contains all the additional parts: teeth, tongue, gums, claws, and nose (for the claws, I used overlapping UVs to preserve texel density for the other parts).

Since the nose is one of the most important details, I allocated the largest space to it, which helped me better capture its intricate details. As for the eyes, I used procedural eyes, so there was no need to assign UV space or create a separate UDIM for texturing them. For this, I used the Tiny Eye add-on by tinynocky for Blender, which allows full control over procedural eyes and their parameters. This approach gave me high-quality eyes with customizable elements tailored exactly to my needs.

As a result of all these steps, Stitch ended up with a symmetrical, optimized mesh, asymmetrical ears, and the body split across two UDIMs: one for the main body and one for the additional parts.

Texturing

When planning Stitch's texturing, I understood that the main body texture would be fairly simple, with much of the visual detail carried by the fur. However, some areas required much more attention than the rest of the body. The textures for Stitch can be roughly divided into several main parts:

- The base body, which includes the primary color of his fur, along with additional shading like a lighter tone on the front (belly) and a darker tone on the back and nape
- The nose and ears, two zones that demanded separate focus

At the initial texturing/blocking stage, the ears looked too cartoony, which didn't fit the style I wanted. So I decided to push them towards a more realistic look: removing bright colors, adding more variation in the roughness map, introducing variation in the base color, and making the ears visually more natural, layered, and textured on the surface. By combining smart materials and masks, I achieved the effect of "living" ears, slightly dirty and looking as natural as possible.

The nose was a separate story. It occupies a significant part of the face and thus draws a lot of attention. While studying references, I noticed that the shape and texture of the nose vary a lot between different artists. Initially, I made it dog-like, with some wear and tear around the nostrils and base. For a long time, I thought this version was acceptable, but during test renders, I realized the nose needed improvement. So I reworked its texturing, aiming to make it more detailed. I divided the nose texture into four main layers:

- Base detail: baked from the high-poly model. Over this, I applied a smart skin material that added characteristic bumps.
- Lighter layer: applied via a mask using the AO channel. This darkened the crevices and brightened the bumps, creating a multi-layered effect.
- Organic detail (capillaries): in animal references, I noticed slight redness in the nose area, so I created another AO-masked layer with reddish capillaries visible through the bumps, adding depth and realism.
- Softness: to make the nose visually softer, like in the references, I added a fill layer with only height enabled, used a paper texture as grayscale, and applied a blurred mask. This created subtle dents and wrinkles that softened the look.

All textures were created in 4K resolution to achieve maximum detail. After finishing the main texturing stage, I added an Ambient Occlusion map on the final texture layer, activating only the Color channel, setting the blend mode to Multiply, and reducing opacity to about 35%. This adds volume and greatly improves the overall perception of the model.

That covers the texturing of Stitch's body. I also created a separate texture for the fur. This was simpler: I disabled unnecessary layers like the ears and eyelids and left only the base ones corresponding to the body's color tones. During grooming (which I'll cover in detail later), I also created textures for the fur's clumps and roughness, and in Substance 3D Painter, I additionally painted masks for better fur detail.
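The AO-multiply trick above was done in Substance 3D Painter, but the same blend is easy to reproduce inside a Blender material if you prefer to keep it live in the shader. Below is a small, hypothetical node sketch; the material name, texture paths, and wiring are placeholders, not the artist's setup.

```python
import bpy

mat = bpy.data.materials["StitchBody"]  # hypothetical material
mat.use_nodes = True
nt = mat.node_tree

base = nt.nodes.new('ShaderNodeTexImage')  # painted base color
ao = nt.nodes.new('ShaderNodeTexImage')    # baked ambient occlusion
base.image = bpy.data.images.load("//textures/body_basecolor.png")  # placeholder paths
ao.image = bpy.data.images.load("//textures/body_ao.png")
ao.image.colorspace_settings.name = 'Non-Color'  # AO is data, not color

# Multiply the AO over the base color at ~35%, mirroring the Painter layer.
mix = nt.nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'MULTIPLY'
mix.inputs['Fac'].default_value = 0.35

nt.links.new(base.outputs['Color'], mix.inputs['Color1'])
nt.links.new(ao.outputs['Color'], mix.inputs['Color2'])
nt.links.new(mix.outputs['Color'], nt.nodes["Principled BSDF"].inputs['Base Color'])
```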
Fur

And finally, I moved on to the part that was most important to me, the very reason I started this project in the first place: fur. This entire process was essentially a test of my fur grooming skills. After overcoming self-doubt, I trusted the process and relied on everything I had learned so far.

Before diving into the grooming itself, I made sure to gather strong references. I searched for the highest quality and most inspiring examples I could find and analyzed them thoroughly. My goal was to clearly understand the direction of fur growth, its density and volume, the intensity of roughness, and the strength of clumping in different areas of Stitch's body.

To create the fur, I used Blender and its Hair Particle System. The overall approach is similar to sculpting a high-detail model: work from broad strokes to finer details. So the first step was blocking out the main flow and placement of the hair strands.

At this point, I ran into a challenge: symmetry. Since the model was purposefully asymmetrical (because of the ears and skin folds), the fur couldn't be mirrored cleanly. To solve this, I created a base fur blocking using Hair Guides with just two segments. After that, I split the fur into separate parts: I duplicated the main Particle System and created individual hair systems for each area where needed.

In total, I broke Stitch's body into key sections: head, left ear, right ear, front torso, back torso, arms, hands, upper and lower legs, toes, and additional detailing layers. The final fur setup included 25 separate particle systems. To control fur growth, I used Weight Paint to fine-tune the influence on each body part individually. This separation gave me much more precision and allowed full control over every parameter of the fur on a per-section basis.

The most challenging aspect of working with fur is staying patient and focused. Detail is absolutely critical, because the overall picture is built entirely from tiny, subtle elements. Once the base layer was complete, I moved on to refining the fur based on my references.

The most complex areas turned out to be the front of the torso and the face. When working on the torso, my goal was to create a smooth gradient, from thick, clumped fur on the chest to shorter, softer fur on the stomach. Step by step, I adjusted the transitions, directions, clumps, and volumes to achieve that look. Additionally, I used the fur itself to subtly enhance Stitch's silhouette, making his overall shape feel sharper, more expressive, and visually engaging.

During fur development, I used texture maps to control the intensity of the Roughness and Clump parameters. This gave me a high degree of flexibility: textures drove these attributes across the entire model. In areas where stronger clumping or roughness was needed, I used brighter values; in zones requiring a softer look, darker values. This approach allowed for fine-tuned micro-level control of the fur shader and helped achieve a highly realistic appearance in renders.

The face required special attention: the fur had to be neat, evenly distributed, and still visually appealing. The biggest challenge here was working around the eye area. Even with properly adjusted Weight Paint, interpolation sometimes caused strands to creep into the eyes. I spent a lot of time cleaning up this region to get an optimal result. I also had to revisit certain patches that looked bald, even though interpolation and weight painting were set correctly, because the fur didn't render properly there. These areas needed manual fixing.

As part of the detailing stage, I also increased the number of segments in the Hair Guides. While the blocking phase only used two segments, I went up to three, and in some cases even five, for more complex regions.
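To make the per-section workflow above concrete, here is a minimal Blender Python sketch that builds one hair particle system per body section, confines each to a weight-paint vertex group, starts with two guide segments for blocking, and hooks a texture into the Clump and Rough parameters. Object, group, and file names are hypothetical; the artist set up his 25 systems by hand in the UI.

```python
import bpy

obj = bpy.data.objects["Stitch"]  # hypothetical object name

# One hair system per section; a same-named vertex group drives density.
sections = ["head", "ear_L", "ear_R", "torso_front", "torso_back",
            "arms", "hands", "legs_upper", "legs_lower", "toes"]

for name in sections:
    mod = obj.modifiers.new(name=f"fur_{name}", type='PARTICLE_SYSTEM')
    psys = mod.particle_system
    psys.name = f"fur_{name}"
    st = psys.settings
    st.type = 'HAIR'
    st.hair_step = 2                  # two guide segments for the blocking pass
    st.count = 1000                   # guide count, tuned per section in practice
    st.child_type = 'INTERPOLATED'    # children interpolate between guides
    psys.vertex_group_density = name  # confine growth to this body section

# Texture-driven Clump/Rough, as described above (placeholder image path).
tex = bpy.data.textures.new("fur_clump_mask", type='IMAGE')
tex.image = bpy.data.images.load("//textures/fur_clump.png")
slot = obj.particle_systems["fur_head"].settings.texture_slots.add()
slot.texture = tex
slot.use_map_time = False   # disable the default influence
slot.use_map_clump = True   # bright values = stronger clumping
slot.use_map_rough = True   # and stronger roughness

print(f"{len(obj.particle_systems)} fur systems created")
```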
The extra segments gave me much more control over fur shape and flow. And since the tiniest details really matter, I added extra fur layers with thinner, more chaotic strands extending slightly beyond the main silhouette. These micro-layers significantly improved the texture depth and boosted the overall realism.

Aside from the grooming itself, I paid special attention to the fur material setup, as the shader plays a critical role in the final visual quality of the render. It's not enough to simply plug a color texture into a Principled BSDF node and call it done. I built a more complex shader, giving me precise control over various attributes. For example, I implemented subtle color variation across individual strands, along with darkening near the roots and a gradual brightening toward the tips. This helped add visual depth and made the fur look significantly more natural and lifelike.

Working on the fur took up nearly half of the total time I spent on the entire model. And I'm genuinely happy with the result: this stage confirmed that the training I've gone through was solid and that I'm heading in the right direction with my artistic development.
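The root-to-tip gradient mentioned above maps naturally onto Cycles hair shading. Here is a minimal, hypothetical sketch of that one idea (dark roots, brighter tips) built on the Hair Info node and the Principled Hair BSDF; the artist's actual shader is more complex and also varies color per strand, which could be layered in via the Hair Info node's Random output.

```python
import bpy

# A minimal Cycles fur material: strand position drives a color ramp.
mat = bpy.data.materials.new("FurGradient")
mat.use_nodes = True
nt = mat.node_tree
nt.nodes.clear()

hair_info = nt.nodes.new('ShaderNodeHairInfo')
ramp = nt.nodes.new('ShaderNodeValToRGB')            # root-to-tip color ramp
bsdf = nt.nodes.new('ShaderNodeBsdfHairPrincipled')  # Cycles-only node
out = nt.nodes.new('ShaderNodeOutputMaterial')

# Darker near the roots, brighter toward the tips (placeholder blues).
ramp.color_ramp.elements[0].color = (0.01, 0.02, 0.08, 1.0)
ramp.color_ramp.elements[1].color = (0.10, 0.20, 0.55, 1.0)

# 'Intercept' is 0 at the root of each strand and 1 at its tip.
nt.links.new(hair_info.outputs['Intercept'], ramp.inputs['Fac'])
nt.links.new(ramp.outputs['Color'], bsdf.inputs['Color'])
nt.links.new(bsdf.outputs['BSDF'], out.inputs['Surface'])
```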
Rigging, Posing & Scene

Once I finished working on the fur, I rendered several 4K test shots from different angles to make sure every detail looked the way I intended. When I was fully satisfied with the results, it was time to move on to rigging. I divided the rigging process into three main parts:

- Body rig, for posing and positioning the character
- Facial rig, for expressions and emotions
- Ear rig, for dynamic ear control

Rigging isn't something I consider my strongest skill, but as a 3D generalist, I had to dive into many technical aspects of it. For the ears, I set up a relatively simple system with several bones connected using inverse kinematics (IK). This gave me flexible and intuitive control during posing and allowed for the addition of dynamic movement in animation.

For facial rigging, I used the FaceIt add-on, which generates a complete facial control system for the mouth, eyes, and tongue. It sped up the process significantly and gave me more precision. For the body, I used the ActorCore Rig by NVIDIA, then converted it to Rigify, which gave me a familiar interface and flexible control over poses.

Posing is one of my favorite stages; it's when the character really comes to life. As usual, it started with gathering references. Honestly, it was hard to pick the final poses: Stitch is so expressive and full of personality that I wanted to try hundreds of them. But I focused on those that best conveyed the spirit and mood of the character. Some poses I reworked to fit my style rather than copying directly. For example, in the pose where Stitch licks his nose, I added drool and a bit of "green slime" for comedic effect. To capture motion, I tilted his head back and made the ears fly upward, creating a vivid, emotional snapshot.

Just like in sculpting or grooming, minor details make a big difference in posing: a slight asymmetry in the facial expression, a raised corner of the mouth, one eye squinting a little more than the other, ears set at slightly different angles. These are subtle things that might not be noticed immediately, but they're the key to making the character feel alive and believable.

For each pose, I created a separate scene and collection in Blender, including the character, a specific lighting setup, and a simple background or environment. This made it easy to return to any scene later to adjust lighting, reposition the character, or tweak the background.

In one of the renders, which I used as the cover image, Stitch is holding a little frog. I want to clearly note that the 3D model of the frog is not mine; full credit goes to the original author of the asset.

At first, I wanted to build a full environment around Stitch, to create a scene that would feel like a frame from a film. But after carefully evaluating my skills and priorities, I decided that a weak environment would only detract from the strength of the character. So I opted for a simple, neutral backdrop, designed to keep all the focus on Stitch himself.

Rendering, Lighting & Post-Processing

When the character is complete, posed expressively, and integrated into the scene, there's one final step: lighting. Lighting isn't just a technical element of the scene; it's a full-fledged stage of the 3D pipeline. It doesn't just illuminate, it paints. Proper lighting can highlight the personality of the character, emphasize forms, and create atmosphere.

For all my renders, I rely on the classic three-point lighting setup: Key Light, Fill Light, and Rim Light. While this setup is well known, it remains highly effective. When done thoughtfully, with the right intensity, direction, and color temperature, it creates a strong light-shadow composition that brings the model to life. In addition to the three main lights, I also use an HDRI map, but with very low intensity, around 0.3, just enough to subtly enrich the ambient light without overpowering the scene.
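As a rough illustration of that setup, here is a hypothetical Blender Python sketch of three area lights plus a low-strength HDRI environment. Positions, energies, and the HDRI path are placeholder guesses, not the artist's values; only the ~0.3 environment strength comes from the interview.

```python
import bpy
import math

scene = bpy.context.scene

def add_area_light(name, location, rotation_deg, energy, size):
    """Create an area light with the given placement (angles in degrees)."""
    data = bpy.data.lights.new(name, type='AREA')
    data.energy = energy
    data.size = size
    light = bpy.data.objects.new(name, data)
    light.location = location
    light.rotation_euler = [math.radians(a) for a in rotation_deg]
    scene.collection.objects.link(light)
    return light

# Classic three-point setup: key, fill, rim (placeholder values).
add_area_light("Key",  (3, -3, 3),  (55, 0, 45),   800, 2.0)
add_area_light("Fill", (-3, -2, 2), (70, 0, -50),  250, 3.0)
add_area_light("Rim",  (0, 4, 3),   (-60, 0, 180), 500, 1.0)

# HDRI environment at low strength (~0.3), as mentioned in the interview.
world = scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
env = nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load("//hdri/studio.hdr")  # placeholder path
world.node_tree.links.new(env.outputs['Color'], nodes["Background"].inputs['Color'])
nodes["Background"].inputs['Strength'].default_value = 0.3
```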
Once everything is set, it's time to hit Render and wait for the result. Due to hardware limitations, I wasn't able to produce full animated shots with fur. Rendering a single 4K image with fur took over an hour, so I limited myself to a 360° turnaround and several static renders.

I don't spend too much time on post-processing, just basic refinements in Photoshop: slight enhancement of the composition, gentle shadow adjustments, color balance tweaks, and adding a logo. Everything is done subtly, nothing overprocessed. The goal is simply to support and enhance what's already there.

Final Thoughts

This project has been an incredible experience. Although it was my second time creating Stitch (the first was back in 2023), this time the process felt completely different at every stage. And honestly, it wasn't easy. But that was exactly the point: to challenge myself. To reimagine something familiar, to try things I'd never done before, and to walk the full journey from start to finish. The fur, the heart of this project, was especially meaningful to me. It's what started it all. I poured a lot into this model: time, effort, emotion, and even doubts. But at the same time, I brought all my knowledge, skills, and experience into it.

This work became a mirror of my progress from 2023 to 2025. I can clearly see how far I've come, and that gives me the motivation to keep going. Every hour of learning and practice paid off; the results speak for themselves. This model was created for my portfolio. I don't plan to use it commercially, unless, of course, a studio actually wants to license it for a new film (in that case, I'd be more than happy!).

It's been a long road: challenging, sometimes exhausting, but above all inspiring and exciting. I know there's still a lot to learn. Many things to study, improve, and polish to perfection. But I'm already on that path, and I'm not stopping.

Oleh Yakushev, 3D Character Artist
Interview conducted by Gloria Levine
  • Gaming Meets Streaming: Inside the Shift

    80.lv
After a long, busy day, you boot up your gaming device but don’t quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?

This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience — turning gameplay into shared moments where gamers broadcast in real-time while viewers watch, chat, learn, and discover new titles.

What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let’s find out!

Why Do Gamers Love Streaming?

So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn’t just one thing – it’s a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let’s break it down.

Entertainment at Your Own Pace

Sometimes, you just want to relax. Maybe you’re too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.

Learning and Skill Development

Streaming isn’t just for laughs – it’s also one of the best ways to level up your own gameplay. Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game’s mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, builds, or even to understand if a game suits their playstyle before buying it. Think of it as education, but way more fun.

Social Connection and Community

One of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn’t like watching TV – it’s like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real-time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it’s a go-to social space where friendships, inside jokes, and even fandoms grow.

Discovery of New Games and Trends

Ever found a game you now love just because you saw a streamer play it? You’re not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams (Among Us, Vampire Survivors, Only Up! – all made big by streamers).

Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.

Inside Streaming Culture

Streaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever. Anyone with personality and dedication can build a loyal following and become a cultural influencer.

If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.

For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences — they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.

How Streamers Are Reshaping the Gaming Industry

While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.

The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review. Day-one streaming success has become make-or-break for many titles.

Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres — party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.

The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.

Video Game Streaming in Numbers

While it’s easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here’s a look at the data driving the movement.

Market Size & Growth

In 2025, the global Games Live Streaming market is projected to generate $15.32 billion in revenue. By 2030, that figure is expected to reach $18.92 billion, growing at an annual rate of 4.32%. The average revenue per user (ARPU) in 2025 stands at $10.51, showing consistent monetization across platforms. China remains the single largest market, expected to bring in $2.92 billion this year alone.

Source: Statista Market Insights, 2025

Viewership & Daily Habits

The number of users in the live game streaming market is forecast to hit 1.8 billion by 2030, with user penetration rising from 18.6% in 2025 to 22.6% by the end of the decade. In 2023, average daily time spent watching game streams rose to 2.5 hours per user, up 12% year-over-year — a clear sign of streaming becoming part of gamers’ daily routines.

Sources: Statista Market Insights, 2025; SNS Insider, 2024

What People Are Watching

The most-watched games on Twitch include League of Legends, GTA V, and Counter-Strike — all regularly topping charts for both viewers and streamers. When it comes to creators, the most-streamed games are Fortnite, Valorant, and Call of Duty: Warzone, showing a strong overlap between what streamers love to broadcast and what audiences enjoy watching. In Q1 2024, Twitch users spent over 249 million hours watching new game releases, while total gaming-related content reached around 3.3 billion hours.

Sources: SullyGnome, 2025; Statista, 2025

Global Trends & Regional Platforms

China’s local platforms like Huya (31M MAU) and Douyu (26.6M MAU) remain key players in the domestic market. In South Korea, following Twitch’s 2023 exit, local services like AfreecaTV and newcomer Chzzk have positioned themselves as alternatives. Meanwhile, Japan and Europe continue to see steady engagement driven by strong gaming scenes and dedicated fan communities.

Source: Statista, 2025

Event Livestreaming Hits New Highs

Nintendo Direct was the most-watched gaming showcase in 2024, with an average minute audience of 2.6 million. The 2024 Streamer Awards drew over 645,000 peak viewers, highlighting how creator-focused events now rival traditional game showcases.

Source: Statista, 2025

As game streaming continues to evolve, its role in the broader gaming ecosystem is becoming clearer. It hasn’t replaced traditional gameplay – instead, it’s added a new dimension to how people engage with games, offering a space for connection, discovery, and commentary. For players, creators, and industry leaders alike, streaming now sits alongside playing as a core part of the modern gaming experience – one that continues to grow and shift with the industry itself.
• What do you all make of Trump climbing onto the White House roof right after announcing $200 million worth of plans!

A new article describes how Trump decided to build a lavish "ballroom" at the White House, at an estimated cost of $200 million! I mean, what can we even say? Trump always brings surprises, even in places we might think are off-limits.

Personally, it struck me how we can be thinking about such huge projects at a time when the world needs calm and stability. As we say in Algeria, "not everything that glitters is gold," and sometimes we have to question the priorities.

In the end, whatever the ideas, every one of us has the right to dream.

    https://forbesmiddleeast.com/featured/politics-security/trump-takes-a-little-walk-on-white-house-roof-after-announcing-$200-million-ballroom-plans-1

#Trump #WhiteHouse #WorldNews #BusinessNews #Politics
    forbesmiddleeast.com
    Trump Takes ‘A Little Walk’ On White House Roof After Announcing $200 Million Ballroom Plans
• When I saw this story, I just had to share it with you! MYMORI has come up with an amazing idea: a biomaterials kit that lets our kids grow toy blocks made from mushrooms!

The idea is that there's no ready-made product; users grow the blocks themselves using the kit, which includes all the necessary biological materials. In other words, our kids get to take part in the process of growing and learning, and that opens up new horizons for them!

Look, we all loved trying new things when we were little, and this is a chance for them to discover nature in a fun way. Just imagine how amazed they'll be when they see the results with their own eyes!

It's a wonderful idea, and it makes us think about how we can invest in distinctive educational experiences for our little ones.

    https://www.designboom.com/design/mymori-biomaterial-kit-kids-grow-mushroom-toy-blocks-mycelium-08-23-2025/

#Education #Innovation #Mycelium
    MYMORI’s biomaterial kit lets kids grow mushroom toy blocks on their own using mycelium
    www.designboom.com
    there’s no finished product yet since the users need to grow the blocks from a kit, which includes all the biomaterials needed. The post MYMORI’s biomaterial kit lets kids grow mushroom toy blocks on their own using mycelium appeared first on designb
• Hey everyone, have you heard the latest news?!

We're entering a new collaboration with WAN-IFRA, the World Association of News Publishers. The goal? Launching a global accelerator program that supports more than 100 publishers in exploring and integrating AI in their newsrooms. In other words, our work is about to get smarter and more innovative!

Personally, I believe AI can open new horizons in journalism, but I'd love to hear your opinions and experiences on the subject. We know technology alone isn't enough, so how do we integrate it in a way that truly serves us?

Finally, let's think together about how we can make the most of this opportunity.

    https://openai.com/index/newsroom-ai-catalyst-global-program-with-wan-ifra
#AI #Journalism #Innovation #WANIFRA #Technology
    openai.com
    We’re collaborating with WAN-IFRA, the World Association of News Publishers, to launch a global accelerator program that will assist over 100 news publishers to explore and integrate AI in their newsroom.
• How's everyone doing? I've got some great news for you! jgstudio has managed to turn a public bathroom in Ecuador into an undulating concrete installation!

The idea explores concrete's transformation from liquid to solid, and as you know, this opens up new horizons in architectural art. It's a standout project that could change the way we look at public spaces!

Personally, I've always felt that public places can be more than just their function, and seeing innovations like this pushes me to think about how we can add aesthetic touches to our daily lives.

Let's think about how art can transform our surroundings!

    https://www.designboom.com/architecture/jgstudio-public-bathroom-undulating-concrete-installation-ecuador-quito-umbral-08-22-2025/
#Architecture #Innovation #Art #Ecuador #Future
    jgstudio transforms public bathroom into undulating concrete installation in ecuador
    www.designboom.com
    the project explores concrete’s fluid-to-solid transformation. The post jgstudio transforms public bathroom into undulating concrete installation in ecuador appeared first on designboom | architecture & design magazine.
• Hey everyone, with this wonderful spring weather, I felt it was the right time to talk about something new in the world of architecture.

The new article, "Making Space for Nothing," shows us how some A+Awards-winning hospitality projects left distinctive voids in their designs. From breezeways to so-called skywells, sometimes the spaces where nothing at all is built are what give a place its life.

Honestly, from my own experience with newer hotels, I've felt that this approach gives a sense of comfort and calm, where you feel at ease within the space.

It's true that everything changes, but imagine if these overlooked spaces received more attention, what our experiences might look like in the future!

    https://architizer.com/blog/inspiration/collections/hospitality-architecture-room-to-breath/

#Engineering #Architecture #Hospitality #Space #Comfort
    architizer.com
    From breezeways to skywells, these A+Awards-winning hospitality projects prove that sometimes the most important spaces are the ones left unbuilt. The post Making Space for Nothing: 6 Times Hospitality Architecture Left Room to Breathe appeared first
• How's everyone doing? I've got news that will delight game lovers! After six years of development, "Vampire: The Masquerade – Bloodlines 2" has a new release date!

The article covers how Paradox Interactive and The Chinese Room gave us a first look at a Seattle ruled by the undead. An incredible world that's worth the wait, and our patience is about to pay off!

Personally, I've been anticipating this game eagerly. From the moment I played the first installment, I felt myself pulled into a strange world full of stories and suspense. And it's clear the sequel will be just as strong, or even stronger!

Love the world of games and discover everything new in it, because every gaming experience opens new doors to thinking and creativity.

    https://www.polygon.com/vampire-the-masquerade-bloodlines-2-release-date-preview/
#VampireBloodlines2 #Games #VideoGames #GamingCommunity #Creativity
    Vampire: The Masquerade – Bloodlines 2 has a new release date
    www.polygon.com
    After six years of development, we got a first look at Paradox Interactive and The Chinese Room's spin on Seattle ruled by the undead